Do Nothing Standalone Daemon – A Template for Java Daemon Processes

Once again, as promised, here’s my template for creating Standalone Daemon Processes in Java. You need to combine this with the unix command nohup to actually have it run as a background daemon process, and it makes use of two separate Java Threads so that it behaves correctly as a background daemon, using a sleep-interval-based timer as the event trigger.

This of course is the Daemon counterpart article to my Batch Process post: Do Nothing Standalone Process – A Template for Batch Jobs and Utility Commands.

I have used code like this for many daemon processes in my personal projects and professional experience, always deployed on either Solaris Unix or Linux (RedHat Enterprise).

For exactly the same reasons as with the Batch Process template, having a standard template which all your Daemon Processes follow cuts down on maintenance and production support costs.

Summary to Start the Daemon Process:

  1. Remove the previous Stop touch file. (See below for stopping the daemon)
  2. Start the JVM using the nohup command.
  3. Redirect STDOUT (> [TEXT_FILE_PATH]) and STDERR (2> [TEXT_FILE_PATH]) to log files, or use > [TEXT_FILE_PATH] 2>&1 to redirect both to the same file.
  4. Run it as a background process using the & syntax in the Unix shell.
  5. Record the PID (Process ID) of the nohup’s child process (your Java Process in this case) to a text file for use in monitoring and production support. You can capture it by simply echoing the special shell variable $! immediately after executing the nohup command, as in the start script sketch below.
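
Putting those five steps together, a start script might look something like the sketch below. The paths, jar name, and argument values are illustrative assumptions only; substitute your own:

#!/bin/sh
# Start script sketch for the Do Nothing Standalone Daemon (assumed paths).

STOP_FILE=/app/run/daemon.stop   # Stop touch file (hypothetical path)
LOG_FILE=/app/logs/daemon.log    # Combined STDOUT/STDERR log
PID_FILE=/app/run/daemon.pid     # Where we record the PID

# Step 1: Remove the stop touch file left over from the previous shutdown.
rm -f "$STOP_FILE"

# Steps 2-4: Start the JVM under nohup, redirect STDOUT and STDERR to the
# same log file, and run it in the background with &.
nohup java -cp /app/lib/daemon.jar \
  com.roguelogic.util.DoNothingStandaloneDaemon \
  /app/conf/app.properties 10 "$STOP_FILE" 5 > "$LOG_FILE" 2>&1 &

# Step 5: $! holds the PID of the background process; save it for monitoring.
echo $! > "$PID_FILE"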

Stopping the Daemon Process:

My Daemon Template has a built-in "Stop File Watcher," which, put simply, watches the file system for a specific file (whose path you pass as a command line argument to the process); as soon as it finds that this file exists, it executes the Daemon’s graceful shutdown routine. Given this built-in capability, to stop the Daemon Process you can write a shell script which simply creates a *.stop touch file (an empty text file) using the unix touch command, as shown in the sketch below. Normally, in my production batch, I run a script that creates this stop touch file during our “green zone” hours, which is the time of the week or month when we are scheduled with the business user base to bring down our system for maintenance.
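
Here is a minimal stop script sketch along those lines; the stop file path is an assumption and must match the [STOP_FILE_PATH] argument the daemon was started with:

#!/bin/sh
# Stop script sketch: the daemon's Stop File Watcher will find this file on
# its next check and run the graceful shutdown routine.

STOP_FILE=/app/run/daemon.stop   # Must match the path passed to the daemon

touch "$STOP_FILE"
echo "Stop file created: $STOP_FILE"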

Monitoring the Daemon Process:

When creating robust, reliable systems, you MUST monitor all components. Daemon Processes are notorious for mysteriously becoming unavailable without anyone from the development or production support team knowing they crashed or otherwise went down unexpectedly. In my professional experience, if it’s not a bug, it’s usually because a System Admin or someone else with ROOT access, or perhaps a production support person with the proper privileges, accidentally kills your daemon process (for some reason usually with kill -9), and either doesn’t know they did, or fails to report it for one reason or another, usually to CYA/CTA. So monitoring the daemon processes you create is essential when rolling out a new Daemon. Normally, I do this by creating a repeating Batch Job that runs every 5 or 10 minutes and executes a simple script that uses the PS command, combined with the saved PID from the daemon start script and grep, to check if the process is still running. If it is not found, there are two things you can do: 1) If you are using a robust scheduler like Autosys, you can simply fail the job by exiting non-zero, which will send out the normal Autosys alert escalation. Or 2) You can use sendmail to email a development and/or production support mail distribution list. I have used both approaches, and even a combination of the two, in my professional experience. Because this monitor job runs every couple of minutes you don’t have to worry about someone killing it…
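
As a sketch, such a monitor script might look like the following; the paths and the mail address are illustrative assumptions:

#!/bin/sh
# Monitor script sketch: schedule this to run every 5 or 10 minutes.

PID_FILE=/app/run/daemon.pid
PID=`cat "$PID_FILE"`

# Check if the saved PID is still alive. ps piped through grep works too,
# but ps -p avoids the usual "grep -v grep" dance.
if ps -p "$PID" > /dev/null 2>&1; then
  echo "Daemon (PID $PID) is running."
  exit 0
fi

echo "Daemon (PID $PID) is NOT running!" >&2

# Option 2: notify the team directly by mail.
echo "Daemon down on `hostname` at `date`" | sendmail support@example.com

# Option 1: exit non-zero so a scheduler like Autosys escalates the alert.
exit 1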

This is pretty much a copy and paste of a statement from my last blog post: I heavily documented the class, which I wrote in Eclipse specifically for this blog post, so I’m actually going to rely on the code and comments themselves to do most of the talking on this post. But it works perfectly here as well. Enjoy the overly commented code!

The Code:

Download the Java Code in PDF Format: DoNothingStandaloneDaemon

/*
Copyright 2012 Robert C. Ilardi
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
 */

/**
 * Created Aug 19, 2012
 */
package com.roguelogic.util;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

/**
 * @author Robert C. Ilardi
 * 
 *         This is a Sample Class for a Standalone *Daemon* Process.
 *         Implementations that use this template may be run from a scheduler
 *         such as Cron or Autosys or as Manual Utility Processes using the UNIX
 *         Command NOHUP.
 * 
 *         IMPORTANT: This Java Process is intended to be run with NOHUP.
 * 
 *         I have released this code under the Apache 2.0 Open Source License.
 *         Please feel free to use this as a template for your own Daemons or
 *         Utility Process Implementations.
 * 
 *         Finally, as you will notice I used STDOUT AND STDERR for all logging.
 *         This is for simplicity of the template. You can use Log4J or Java
 *         Logging or any other log library you prefer. In my professional
 *         experience, I also include an Exception or "Throwable" emailer
 *         mechanism so that our development team receives all exceptions from
 *         any process even front-ends in real time.
 * 
 */
public class DoNothingStandaloneDaemon {

  /*
   * I personally like having a single property file for the configuration of
   * all my batch jobs and utilities. In my professional projects, I actually
   * have a more complex method of properties management, where all properties
   * are stored in a database table, and I have something called a Resource
   * Bundle and Resource Helper facility to manage it.
   * 
   * My blog at EnterpriseProgrammer.com has more information on properties and
   * connection management using this concept.
   * 
   * However for demonstration purposes I am using a simple Properties object to
   * manage all configuration data for the Standalone Process Template. Feel
   * free to replace this field with a more advanced configuration management
   * mechanism that meets your needs.
   */
  private Properties appProps;

  /*
   * This flag ensures that the Cleanup method only runs once. This is because I
   * wanted to have a shutdown hook in case the process receives an interrupt
   * signal, and in the main method I explicitly call cleanup() from the finally
   * block. Technically the shutdown hook based on my implementation is only a
   * backup so it actually will never run unless there's a situation like an
   * interrupt signal.
   */
  private boolean ranCleanup = false;

  /*
   * If this variable is set to true, any exception caused in the cleanup
   * routine will cause the entire process to exit non-zero.
   * 
   * However in my professional experience, we usually just want to log these
   * exceptions, perhaps even email them to the team for investigation later,
   * and allow the process to exit ZERO, so that the batch job scheduler can
   * continue on to the next job, especially if the real execution has completed.
   */
  private boolean treatCleanupExceptionsAsFatal = false;

  /*
   * We need an object monitor to control the background thread used to run the
   * execution loop.
   */
  private Object loopControlLock = new Object();

  /*
   * A flag which tells the start and stop methods if the execution loop thread
   * has started or not.
   */
  private boolean loopStarted;

  /*
   * This flag tells the start, stop, and waitWhileExecuting methods if the
   * process loop is running. It is also used to STOP the process loop from
   * running.
   */
  private boolean runProcessing = false;

  /*
   * This parameter needs to be set in order for the process loop to sleep a
   * certain number of seconds between each consecutive call to the actual
   * processing logic method.
   */
  private int processLoopSleepSecs;

  /*
   * This field is used as a counter for the number of processing loop
   * iterations. For debugging, logging, and even custom logic implementation
   * purposes, this is a nice piece of information to have.
   */
  private long loopIterationCnt;

  /*
   * This is the file path for the stop file watcher to watch. When the stop
   * file watcher thread finds the stop file at this location, it will
   * gracefully shutdown the daemon process.
   */
  private String stopFilePath;

  /*
   * We don't want to spend too many cycles watching for a stop file especially
   * since a daemon process normally runs for hours, days, or even weeks, so we
   * have a separate sleep seconds variable to control the interval between file
   * system checks.
   */
  private int stopFileSleepSecs;

  /*
   * This flag tells the stop file watcher start and stop methods if the watcher
   * loop is running.
   */
  private boolean runStopFileWatcher;

  /*
   * We need an object monitor to control the background thread used to run the
   * stop file watcher loop.
   */
  private Object stopFileWatcherControlLock = new Object();

  /*
   * A flag which tells the start and stop methods if the stop file watcher loop
   * thread has started or not.
   */
  private boolean stopFileWatcherLoopStarted;

  /**
   * I'm not really using the constructor here. I prefer more explicit init
   * methods. It's a good practice especially if you work with a lot of
   * reflection, however feel free to add some base initialization here if you
   * prefer.
   */
  public DoNothingStandaloneDaemon() {}

  // Start public methods that shouldn't be customized by the user
  // ------------------------------------------------------------------->

  /**
   * The init method wraps two user customizable methods: 1. readProperties(); -
   * Use this to add reads from the appProps object. 2. customProcessInit() -
   * Use this to customize your process before the execution logic runs.
   * 
   * As stated previously, do not touch these methods; they are simple wrappers
   * around the methods you should customize instead and provide what in my
   * professional experience are good log messages for batch jobs or utilities
   * to print out, such as the execution timing information. This is especially
   * useful for long running jobs. You can eventually take average over the
   * course of many runs of the batch job, and then you will know when your
   * batch job is behaving badly, when it's taking too long to finish execution.
   */
  public synchronized void init() {
    long start, end, total;

    System.out.println("Initialization at: " + GetTimeStamp());
    start = System.currentTimeMillis();

    readProperties(); // Hook to the user's read properties method.
    customProcessInit(); // Hook to the user's custom process init method!

    end = System.currentTimeMillis();
    total = end - start;

    System.out.println("Initialization Completed at: " + GetTimeStamp());
    System.out.println("Total Init Execution Time: "
        + CompactHumanReadableTimeWithMs(total));
  }

  /**
   * Because we aren't using a more advanced mechanism for properties
   * management, I have included this method to allow the main() method to set
   * the path to the main properties file used by the batch jobs.
   * 
   * In my professional versions of this template, this method is embedded in
   * the init() method which basically will initialize the Resource Helper
   * component and obtain the properties from the configuration tables instead.
   * 
   * Again you shouldn't touch this method's implementation, instead use
   * readProperties() to customize what you do with the properties after the
   * properties load.
   */
  public void loadProperties(String appPropsPath) throws IOException {
    FileInputStream fis = null;

    try {
      fis = new FileInputStream(appPropsPath);
      appProps = new Properties();
      appProps.load(fis);
    } // End try block
    finally {
      if (fis != null) {
        try {
          fis.close();
        }
        catch (Exception e) {}
      }
    }
  }

  /**
   * This method sets the number of seconds the process loop will sleep between
   * each call to the logic processing method.
   * 
   * @param processLoopSleepSecs
   */
  public void setProcessLoopSleepSeconds(int processLoopSleepSecs) {
    this.processLoopSleepSecs = processLoopSleepSecs;
  }

  /**
   * This method sets the number of seconds between each stop file check by the
   * stop file watcher.
   * 
   * @param stopFileSleepSecs
   */
  public void setStopFileWatcherSleepSeconds(int stopFileSleepSecs) {
    this.stopFileSleepSecs = stopFileSleepSecs;
  }

  /**
   * This method sets the file for the stop file watcher to look for.
   * 
   * @param stopFilePath
   */
  public void setStopFilePath(String stopFilePath) {
    this.stopFilePath = stopFilePath;
  }

  /**
   * This method performs the cleanup of any JDBC connections, files, sockets,
   * and other resources that your execution process or your initialization
   * process may have opened or created.
   * 
   * Once again do not touch this method directly, instead put your cleanup code
   * in the customProcessCleanup() method.
   * 
   * This method is called automatically in the last finally block of the main
   * method, and if there's an interrupt signal or other fatal issue where
   * somehow the finally block didn't get called the Runtime shutdown hook will
   * invoke this method on System.exit...
   * 
   * @throws Exception
   */
  public synchronized void cleanup() throws Exception {
    long start, end, total;

    // This prevents cleanup from running more than once.
    if (ranCleanup) {
      return;
    }

    try {
      System.out.println("Starting Cleanup at: " + GetTimeStamp());
      start = System.currentTimeMillis();

      stopStopFileWatcher(); // Make sure the stop file watcher is stopped!

      stopProcessingLoop(); // Make sure the processing loop is stopped!

      customProcessCleanup(); // Hook to the users Process Cleanup Method

      end = System.currentTimeMillis();
      total = end - start;

      System.out.println("Cleanup Completed at: " + GetTimeStamp());
      System.out.println("Total Cleanup Execution Time: "
          + CompactHumanReadableTimeWithMs(total));

      ranCleanup = true;
    } // End try block
    catch (Exception e) {
      /*
       * It has been my experience that the Operating System will clean up
       * anything we have "forgotten" to clean up. Therefore I do not want to
       * waste my production support team members' time at 3 AM to handle
       * "why did a database connection not close?" It will close eventually,
       * since it is just a socket, and even if it doesn't we'll catch this in
       * other jobs which may fail due to the database running out of
       * connections.
       * 
       * However I usually have these exceptions emailed to our development team
       * for investigation the next day. For demo purposes I did not include my
       * Exception/Stacktrace Emailing utility, however I encourage you to add
       * your own.
       * 
       * If you really need the process to exit non-ZERO because of the cleanup
       * failing, set the treatCleanupExceptionsAsFatal to true.
       */
      e.printStackTrace();

      if (treatCleanupExceptionsAsFatal) {
        throw e;
      }
    }
  }

  public void startStopFileWatcher() throws InterruptedException {
    Thread t;

    synchronized (stopFileWatcherControlLock) {
      if (runStopFileWatcher) {
        return;
      }

      stopFileWatcherLoopStarted = false;
      runStopFileWatcher = true;

      System.out.println("Starting Stop File Watcher at: " + GetTimeStamp());

      t = new Thread(stopFileWatcherRunner);
      t.start();

      while (!stopFileWatcherLoopStarted) {
        stopFileWatcherControlLock.wait();
      }
    }

    System.out.println("Stop File Watcher Thread Started Running at: "
        + GetTimeStamp());
  }

  public void stopStopFileWatcher() throws InterruptedException {
    synchronized (stopFileWatcherControlLock) {
      if (!stopFileWatcherLoopStarted || !runStopFileWatcher) {
        return;
      }

      System.out.println("Requesting Stop File Watcher Stop at: "
          + GetTimeStamp());

      runStopFileWatcher = false;

      while (stopFileWatcherLoopStarted) {
        stopFileWatcherControlLock.wait();
      }

      System.out.println("Stop File Watcher Stop Request Completed at: "
          + GetTimeStamp());
    }
  }

  /**
   * This method is used to start the processing loop's thread.
   * 
   * Again like the other methods in this section of the class, do not modify
   * this method directly.
   * 
   * @throws InterruptedException
   * 
   * @throws Exception
   */
  public void startProcessingLoop() throws InterruptedException {
    Thread t;

    synchronized (loopControlLock) {
      if (runProcessing) {
        return;
      }

      loopStarted = false;
      runProcessing = true;
      ranCleanup = false;

      System.out.println("Starting Processing Loop at: " + GetTimeStamp());

      t = new Thread(executionLoopRunner);
      t.start();

      while (!loopStarted) {
        loopControlLock.wait();
      }
    }

    System.out.println("Execution Processing Loop Thread Started Running at: "
        + GetTimeStamp());
  }

  /**
   * This method is used to stop or actually "request to stop" the processing
   * loop thread.
   * 
   * It waits while the processing loop is running.
   * 
   * @throws InterruptedException
   */
  public void stopProcessingLoop() throws InterruptedException {
    synchronized (loopControlLock) {
      if (!loopStarted || !runProcessing) {
        return;
      }

      System.out
          .println("Requesting Execution Loop Stop at: " + GetTimeStamp());

      runProcessing = false;

      while (loopStarted) {
        loopControlLock.wait();
      }

      System.out.println("Execution Loop Stop Request Completed at: "
          + GetTimeStamp());
    }
  }

  /**
   * This method will wait while the processing loop is running. Yes, I know we
   * can use Thread.join(); however, what if you want to embed this class in
   * some other larger component, then you might not want to use the join method
   * directly. I personally like this implementation better, it tells me exactly
   * what I'm waiting on.
   * 
   * @throws InterruptedException
   */
  public void waitWhileExecuting() throws InterruptedException {
    synchronized (loopControlLock) {
      while (loopStarted) {
        loopControlLock.wait(1000);
      }
    }
  }

  /**
   * This is the runnable implementation as an anon inner class which contains
   * the actual execution loop of the Daemon. This execution loop is what really
   * separates the Daemon Process from the Standalone Process batch template.
   * While the Standalone Process template was meant for processes which run a
   * task and then exit once completed, this implementation is meant to keep on
   * running for extended periods of time, re-executing the custom processing
   * logic over and over again after some sleep period.
   */
  private Runnable executionLoopRunner = new Runnable() {
    public void run() {
      try {
        synchronized (loopControlLock) {
          loopStarted = true;
          loopControlLock.notifyAll();
        }

        System.out.println("Executing Loop Thread Running!");

        while (runProcessing) {
          // Hook to the User's Custom Execute Processing
          // Method! - Where the magic happens!
          customExecuteProcessing();

          loopIterationCnt++;

          // Sleep between execution cycles
          try {
            for (int i = 1; runProcessing && i <= processLoopSleepSecs; i++) {
              Thread.sleep(1000);
            }
          }
          catch (Exception e) {}
        } // End while runProcessing loop
      } // End try block
      catch (Exception e) {
        e.printStackTrace();
      }
      finally {
        System.out.println("Execution Processing Loop Exit at: "
            + GetTimeStamp());

        synchronized (loopControlLock) {
          runProcessing = false;
          loopStarted = false;
          loopControlLock.notifyAll();
        }
      }
    }
  };

  /**
   * This is the runnable implementation as an anon inner class which contains
   * the Stop File Watcher loop. A Stop File Watcher is simply a standard file
   * watcher, except when it finds the target file, it will execute the daemon
   * shutdown routine. This is a form of inter-process communication via the
   * file system to enable a separate process or even a simple script to control
   * (or at least stop) the daemon process when it's running under NOHUP. You
   * can simply create a script which creates an empty file using the unix TOUCH
   * command.
   */
  private Runnable stopFileWatcherRunner = new Runnable() {
    public void run() {
      File f;

      try {
        synchronized (stopFileWatcherControlLock) {
          stopFileWatcherLoopStarted = true;
          stopFileWatcherControlLock.notifyAll();
        }

        System.out.println("Stop File Watcher Thread Running!");

        f = new File(stopFilePath);

        while (runStopFileWatcher) {
          // If we find the stop file
          // stop the processing loop
          // and exit this thread as well.
          if (f.exists()) {
            System.out.println("Stop File: '" + stopFilePath + "'  Found at: "
                + GetTimeStamp());
            stopProcessingLoop();
            break;
          }

          // Sleep between file existence checks
          try {
            for (int i = 1; runStopFileWatcher && i <= stopFileSleepSecs; i++) {
              Thread.sleep(1000);
            }
          }
          catch (Exception e) {}
        } // End while runStopFileWatcher loop
      } // End try block
      catch (Exception e) {
        e.printStackTrace();
      }
      finally {
        synchronized (stopFileWatcherControlLock) {
          runStopFileWatcher = false;
          stopFileWatcherLoopStarted = false;
          stopFileWatcherControlLock.notifyAll();
        }
      }
    }
  };

  /**
   * This is the method that adds the shutdown hook.
   * 
   * All this method does is properly invoke the
   * Runtime.getRuntime().addShutdownHook(Thread t); method by adding an
   * anonymous class implementation of a thread.
   * 
   * This thread's run method simply calls the Process's cleanup method.
   * 
   * Whenever I create a class like this, I envision it being ran two ways,
   * either directly from the main() method or as part of a larger component,
   * which may wrap this entire class (A HAS_A OOP relationship).
   * 
   * In the case of the wrapper, adding the shutdown hook might be optional
   * since the wrapper may want to handle shutdown on its own.
   * 
   */
  public synchronized void addShutdownHook() {
    Runtime.getRuntime().addShutdownHook(new Thread() {
      public void run() {
        try {
          cleanup();
        }
        catch (Exception e) {
          e.printStackTrace();
        }
      }
    });
  }

  /**
   * This method is only provided in case you are loading properties from an
   * input stream or other non-standard source that is not a File.
   * 
   * It becomes very useful in the wrapper class situation I described in the
   * comments about the addShutdownHook method.
   * 
   * Perhaps the wrapping process reads properties from a Database or a URL?
   * 
   * @param appProps
   */
  public void setAppProperties(Properties appProps) {
    this.appProps = appProps;
  }

  /**
   * Used to detect which mode the cleanup exceptions are handled in.
   * 
   * @return
   */
  public boolean isTreatCleanupExceptionsAsFatal() {
    return treatCleanupExceptionsAsFatal;
  }

  /**
   * Use this method to set if you want to treat cleanup exceptions as fatal.
   * The default, and my personal preference, is not to make these exceptions fatal.
   * But I added the flexibility into the template for your usage.
   * 
   * @param treatCleanupExceptionsAsFatal
   */
  public void setTreatCleanupExceptionsAsFatal(
      boolean treatCleanupExceptionsAsFatal) {
    this.treatCleanupExceptionsAsFatal = treatCleanupExceptionsAsFatal;
  }

  // ------------------------------------------------------------------->
  // Start methods that need to be customized by the user
  // ------------------------------------------------------------------->
  /**
   * In general for performance reasons and for clarity even above performance,
   * I like pre-caching the properties as Strings or parsed Integers, etc,
   * before running any real business logic.
   * 
   * This is why I provide the hook to readProperties which should read
   * properties from the appProps field (member variable).
   * 
   * If you don't want to pre-cache your property values you can leave this
   * method blank. However I believe it's a good practice especially if your
   * batch process is a high speed ETL Loader process where every millisecond
   * counts when loading millions of records.
   */
  private synchronized void readProperties() {
    System.out.println("Add Your Property Reads Here!");
  }

  /**
   * After the properties are read from the readProperties() method this method
   * is called.
   * 
   * It is provided for the user to add custom initialization processing.
   * 
   * Let's say you want to open all JDBC connections at the start of a process,
   * this is probably the right place to do so.
   * 
   * For more complex implementations, this is the best place to create and
   * initialize all your sub-components of your process.
   * 
   * Let's say you have a DbConnectionPool, a Country Code Mapping utility, an
   * Address Fuzzy Logic Matching library.
   * 
   * This is where I would initialize these components.
   * 
   * The idea is to fail-fast in your batch processes, you don't want to wait
   * until you processed 10,000 records before some logic statement is triggered
   * to lazy instantiate these components, and because of a network issue or a
   * configuration mistake you get a fatal exception and your process exits,
   * and your data is only partially loaded and you or your production support
   * team members have to debug not only the process but also whether the
   * portion of the data already loaded made it in ok. This is extremely
   * important if your batch process interacts with real-time system
   * components such as message
   * publishers, maybe you started publishing the updated records to downstream
   * consumers?
   * 
   * Fail-Fast my friends... And as soon as the process starts if possible!
   */
  private synchronized void customProcessInit() {
    System.out.println("Add Custom Initialization Logic Here!");
  }

  /**
   * This is where you would add your custom cleanup processing. If you open
   * any connections, files, sockets, etc. and keep references to these
   * objects/resources opened as fields in your class which is a good idea in
   * some cases especially long running batch processes you need a hook to be
   * able to close these resources before the process exits.
   * 
   * This is where that type of logic should be placed.
   * 
   * Now you can throw any exception you like, however the cleanup wrapper
   * method will simply log these exceptions, the idea here is that, even though
   * cleanup is extremely important, the next step of the process is a
   * System.exit and the operating system will most-likely reclaim any resources
   * such as files and sockets which have been left opened, after some bit of
   * time.
   * 
   * Now my preference is usually not to wake my production support guys up
   * because a database connection (on the extremely rare occasion) didn't close
   * correctly. The process still ran successfully at this point, so just exit
   * and log it.
   * 
   * However if you really need to make the cleanup be truly fatal to the
   * process you will have to set treatCleanupExceptionsAsFatal to true.
   * 
   * @throws Exception
   */
  private synchronized void customProcessCleanup() throws Exception {
    System.out.println("Add Custom Cleanup Logic Here!");
  }

  private synchronized void customExecuteProcessing() throws Exception {
    System.out.println("Loop Iteration Count = " + loopIterationCnt
        + " - Add Custom Processing Logic Here!");

    // Demo only: this throws an exception after a few loop iterations so you
    // can see the behavior; comment it out or remove it in your own code.
    if (loopIterationCnt == 5) {
      throw new Exception(
          "Testing what happens if an exception gets thrown here!");
    }
  }

  // ------------------------------------------------------------------->
  /*
   * Start String Utility Methods. These are methods I have in my custom
   * "StringUtils.java" class; I extracted them and embedded them in this class
   * for demonstration purposes.
   * 
   * I encourage everyone to build up their own set of useful String Utility
   * Functions please feel free to add these to your own set if you need them.
   */
  // ------------------------------------------------------------------->
  /**
   * This will return a string that is a human readable time sentence. It is the
   * "compact" version because instead of having leading ZERO Days, Hours,
   * Minutes, Seconds, it will only start the sentence with the first non-zero
   * time unit.
   * 
   * In my string utils I have a non-compact version as well that prints the
   * leading zero time units.
   * 
   * It all depends on how you need it presented in your logs.
   */
  public static String CompactHumanReadableTimeWithMs(long milliSeconds) {
    long days, hours, inpSecs, leftOverMs;
    int minutes, seconds;
    StringBuffer sb = new StringBuffer();

    inpSecs = milliSeconds / 1000; // Convert Milliseconds into Seconds
    days = inpSecs / 86400;
    hours = (inpSecs - (days * 86400)) / 3600;
    minutes = (int) (((inpSecs - (days * 86400)) - (hours * 3600)) / 60);
    seconds = (int) (((inpSecs - (days * 86400)) - (hours * 3600)) - (minutes * 60));
    leftOverMs = milliSeconds - (inpSecs * 1000);

    if (days > 0) {
      sb.append(days);
      sb.append((days != 1 ? " Days" : " Day"));
    }

    if (sb.length() > 0) {
      sb.append(", ");
    }

    if (hours > 0 || sb.length() > 0) {
      sb.append(hours);
      sb.append((hours != 1 ? " Hours" : " Hour"));
    }

    if (sb.length() > 0) {
      sb.append(", ");
    }

    if (minutes > 0 || sb.length() > 0) {
      sb.append(minutes);
      sb.append((minutes != 1 ? " Minutes" : " Minute"));
    }

    if (sb.length() > 0) {
      sb.append(", ");
    }

    if (seconds > 0 || sb.length() > 0) {
      sb.append(seconds);
      sb.append((seconds != 1 ? " Seconds" : " Second"));
    }

    if (sb.length() > 0) {
      sb.append(", ");
    }

    sb.append(leftOverMs);
    sb.append((leftOverMs != 1 ? " Milliseconds" : " Millisecond"));

    return sb.toString();
  }

  /**
   * NVL = Null Value; in my experience, most times, we want to treat empty or
   * whitespace-only strings as NULLs.
   * 
   * So this method is here to avoid a lot of if (s == null || s.trim().length()
   * == 0) all over the place, instead you will find if(IsNVL(s)) instead.
   */
  public static boolean IsNVL(String s) {
    return s == null || s.trim().length() == 0;
  }

  /**
   * Checks if "s" is a numeric value. We could use Integer.parseInt and just
   * capture the exception if it's not a number, but I think that's a hack...
   * 
   * @param s
   * @return
   */
  public static boolean IsNumeric(String s) {
    boolean numeric = false;
    char c;

    if (!IsNVL(s)) {
      numeric = true;
      s = s.trim();

      for (int i = 0; i < s.length(); i++) {
        c = s.charAt(i);

        if (i == 0 && (c == '-' || c == '+')) {
          // Ignore signs...
          continue;
        }
        else if (c < '0' || c > '9') {
          numeric = false;
          break;
        }
      }
    }

    return numeric;
  }

  /**
   * Simply returns a timestamp as a String.
   * 
   * @return
   */
  public static String GetTimeStamp() {
    return (new java.util.Date()).toString();
  }

  // ------------------------------------------------------------------->
  // Start Main() Helper Static Methods
  // ------------------------------------------------------------------->
  /**
   * This method returns true if the command line arguments are valid, and false
   * otherwise.
   * 
   * Please change this method to meet your implementation's requirements.
   */
  private static boolean CheckCommandLineArguments(String[] args) {
    boolean ok = false;

    ok = args.length == 4 && !IsNVL(args[0]) && IsNumeric(args[1])
        && !IsNVL(args[2]) && IsNumeric(args[3]);

    return ok;
  }

  /**
   * This prints to STDERR (a common practice), the command line usage of the
   * program.
   * 
   * Please change this to meet your implementation's command line arguments.
   */
  private static void PrintUsage() {
    StringBuffer sb = new StringBuffer();
    sb.append("\nUsage: java ");
    sb.append(DoNothingStandaloneDaemon.class.getName());

    /*
     * Modify this append call to have each command line argument name example:
     * sb.append(
     * " [APP_PROPERTIES_FILE] [SOURCE_INPUT_FILE] [WSDL_URL] [TARGET_OUTPUT_FILE]"
     * );
     * 
     * For demo purposes this daemon uses the four arguments shown below.
     */
    sb.append(" [APP_PROPERTIES_FILE] [PROCESS_LOOP_SLEEP_SECONDS] [STOP_FILE_PATH] [STOP_WATCHER_SECONDS]");
    sb.append("\n\n");
    System.err.print(sb.toString());
  }

  /**
   * I usually like the Batch and Daemon Processes or Utilities to print a small
   * Banner at the top of their output.
   * 
   * Please change this to suit your needs.
   */
  private static void PrintWelcome() {
    StringBuffer sb = new StringBuffer();
    sb.append("\n*********************************************\n");
    sb.append("*       Do Nothing Standalone Daemon        *\n");
    sb.append("*********************************************\n\n");
    System.out.print(sb.toString());
  }

  /**
   * This method simply prints the process startup time. I found this to be very
   * useful in batch job logs. I probably wouldn't change it, but you can if you
   * really need to.
   */
  private static void PrintStartupTime() {
    StringBuffer sb = new StringBuffer();
    sb.append("Startup Time: ");
    sb.append(GetTimeStamp());
    sb.append("\n\n");
    System.out.print(sb.toString());
  }

  // Start Main() Method
  // ------------------------------------------------------------------->
  /**
   * Here's your standard main() method which allows you to start a Java program
   * from the command line. You can probably use this as is, once you rename
   * the DoNothingStandaloneDaemon class to a proper name that represents your
   * implementation correctly.
   * 
   * MAKE SURE: To change the data type of the process object reference to the
   * name of your process implementation class. Other than that, you are good to
   * go with this main method!
   */
  public static void main(String[] args) {
    int exitCode;
    DoNothingStandaloneDaemon daemon = null;

    if (!CheckCommandLineArguments(args)) {
      PrintUsage();
      exitCode = 1;
    }
    else {
      try {
        PrintWelcome();
        PrintStartupTime();
        daemon = new DoNothingStandaloneDaemon();

        // I don't believe cleanup exceptions
        // are really fatal, but that's up to you...
        daemon.setTreatCleanupExceptionsAsFatal(false);

        // Load properties using the file way.
        daemon.loadProperties(args[0]);

        // Set process loop sleep seconds
        daemon.setProcessLoopSleepSeconds(Integer.parseInt(args[1]));

        // Set the stop file watcher file path
        daemon.setStopFilePath(args[2]);

        // Set the stop file watcher sleep seconds
        daemon.setStopFileWatcherSleepSeconds(Integer.parseInt(args[3]));

        // Perform daemon Initialization,
        // again I don't like over use of the constructor.
        daemon.init();

        daemon.addShutdownHook(); // Just in case we get an interrupt signal...

        // Start the Stop File Watcher!
        // It is not enabled automatically,
        // to make this template more flexible
        // if you want to embed it in a larger component.
        daemon.startStopFileWatcher();

        // Do the actual business logic execution!
        // If we made it to this point without an exception, that means
        // we are successful, the daemon exit code should be ZERO for SUCCESS!
        daemon.startProcessingLoop();

        // Wait while the execution loop is running!
        daemon.waitWhileExecuting();

        exitCode = 0;
      } // End try block
      catch (Exception e) {
        exitCode = 1; // If there was an exception, the daemon exit code should
        // be NON-ZERO for FAILURE!

        e.printStackTrace(); // Log the exception, if you have an Exception
        // email utility like I do, use that instead.
      }
      finally {
        if (daemon != null) {
          try {
            daemon.stopStopFileWatcher(); // Just in case stop file watcher
          }
          catch (Exception e) {
            e.printStackTrace();
          }

          try {
            daemon.stopProcessingLoop(); // Just in case stop processing loop
          }
          catch (Exception e) {
            e.printStackTrace();
          }

          try {
            // Technically we don't need to do this because
            // of the shutdown hook
            // But I like to be explicit here to show when during a
            // normal execution, when the call
            // to cleanup should happen.
            daemon.cleanup();
          }
          catch (Exception e) {
            // We shouldn't receive an exception
            // But in case there is a runtime exception
            // Just print it, but treat it as non-fatal.
            // Technically most if not all resources
            // will be reclaimed by the operating system as an
            // absolute last resort
            // so we did our best attempt at cleaning things up,
            // but we don't want to wake our developers or our
            // production services team
            // up at 3 in the morning because something weird
            // happened during cleanup.
            e.printStackTrace();

            // If we set the daemon to treat cleanup exception as fatal
            // the exit code will be set to 1...
            if (daemon.isTreatCleanupExceptionsAsFatal()) {
              exitCode = 1;
            }
          }
        }
      } // End finally block
    } // End else block

    // Make sure our standard streams are flushed
    // so we don't miss anything in the logs.
    System.out.flush();
    System.err.flush();
    System.out.println("Daemon Exit Code = " + exitCode);
    System.out.flush();

    // Make sure to return the exit code to the parent process
    System.exit(exitCode);
  }
  // ------------------------------------------------------------------->

} 
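
Before wiring the class into your nohup scripts, you can try it in the foreground. Here is a sample invocation matching the four command line arguments from PrintUsage(); the bin classpath is an assumption, and app.properties can be any existing properties file:

java -cp bin com.roguelogic.util.DoNothingStandaloneDaemon \
  app.properties 2 /tmp/daemon.stop 2

With the demo exception left in customExecuteProcessing(), you will see a few "Loop Iteration Count" lines and then the test exception’s stack trace as the processing loop exits; comment the demo exception out and touch /tmp/daemon.stop instead to watch the graceful shutdown path.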

Thanks to http://www.palfrader.org/code2html/code2html.html for the Java Code to HTML Conversion…

Closing Remarks:

I believe this post is really self-explanatory, but I’m extremely interested in hearing from you on any comments, questions, or enhancements to my code you may have.

Again, this code is released under the Apache 2.0 Open Source License, so please feel free to use it in your own projects.

Just Another Stream of Random Bits…
– Robert C. Ilardi
 

Do Nothing Standalone Process – A Template for Batch Jobs and Utility Commands

As promised, I’m sharing my template for how I want batch jobs and other standalone processes such as utility programs to be based on. This solves the problem I mentioned in my previous post “Helping your developers to maintain other people’s code“.

Having a standard template which all your batch jobs will follow, cuts down on maintenance and production support costs.

 I heavily documented the class, which I wrote in Eclipse specifically for this blog post, so I’m actually going to rely on the code and comments itself to do most of the talking on this post.

The Code:

Download the Java Code in PDF Format: DoNothingStandaloneProcess

/*
 Copyright 2012 Robert C. Ilardi
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at
 
 http://www.apache.org/licenses/LICENSE-2.0
 
 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
 */
 
/**
 * Created Aug 3, 2012
 */
package com.roguelogic.util;
 
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
 
/**
 * @author Robert C. Ilardi
 *
 *         This is a Sample Class for a Standalone Process that would run as
 *         part of a Batch. Implementations that use this template may be run
 *         from a scheduler such as Cron or Autosys or as Manual Utility
 *         Processes.
 *
 *         I have released this code under the Apache 2.0 Open Source License.
 *         Please feel free to use this as a template for your own Batch Job or
 *         Utility Process Implementations.
 *
 *         Finally, as you will notice I used STDOUT AND STDERR for all logging.
 *         This is for simplicity of the template. You can use Log4J or Java
 *         Logging or any other log library you prefer. In my professional
 *         experience, I also include an Exception or "Throwable" emailer
 *         mechanism so that our development team receives all exceptions from
 *         any process even front-ends in real time.
 *
 */
 
public class DoNothingStandaloneProcess {
 
  /*
   * I personally like having a single property file for the configuration of
   * all my batch jobs and utilities. In my professional projects, I actually
   * have a more complex method of properties management, where all properties
   * are stored in a database table, and I have something called a Resource
   * Bundle and Resource Helper facility to manage it.
   *
   * My blog at EnterpriseProgrammer.com has more information on properties and
   * connection management using this concept.
   *
   * However for demonstration purposes I am using a simple Properties object to
   * manage all configuration data for the Standalone Process Template. Feel
   * free to replace this field with a more advanced configuration management
   * mechanism that meets your needs.
   */
  private Properties appProps;
 
  /*
   * This flag ensures that the Cleanup method only runs once. This is because I
   * wanted to have a shutdown hook in case the process receives an interrupt
   * signal, and in the main method I explicitly call cleanup() from the finally
   * block. Technically the shutdown hook based on my implementation is only a
   * backup so it actually will never run unless there's a situation like an
   * interrupt signal.
   */
  private boolean ranCleanup = false;
 
  /*
   * If this variable is set to true, any exception caused in the cleanup
   * routine will cause the entire process to exit non-zero.
   *
   * However in my professional experience, we usually just want to log these
   * exceptions, perhaps even email them to the team for investigation later,
   * and allow the process to exit ZERO, so that the batch job scheduler can
   * continue on to the next job, especially if the real execution has completed.
   */
  private boolean treatCleanupExceptionsAsFatal = false;
 
  /**
   * I'm not really using the constructor here. I prefer more explicit init
   * methods. It's a good practice especially if you work with a lot of
   * reflection, however feel free to add some base initialization here if you
   * prefer.
   */
  public DoNothingStandaloneProcess() {}
 
  // Start public methods that shouldn't be customized by the user
  // ------------------------------------------------------------------->
 
  /**
   * The init method wraps two user customizable methods: 1. readProperties(); -
   * Use this to add reads from the appProps object. 2. customProcessInit() -
   * Use this to customize your process before the execution logic runs.
   *
   * As stated previously, do not touch these methods; they are simple wrappers
   * around the methods you should customize instead and provide what in my
   * professional experience are good log messages for batch jobs or utilities
   * to print out, such as the execution timing information. This is especially
   * useful for long running jobs. You can eventually take average over the
   * course of many runs of the batch job, and then you will know when your
   * batch job is behaving badly, when it's taking too long to finish execution.
   */
  public synchronized void init() {
    long start, end, total;
 
    System.out.println("Initialization at: " + GetTimeStamp());
    start = System.currentTimeMillis();
 
    readProperties(); // Hook to the user's read properties method.
 
    customProcessInit(); // Hook to the user's custom process init method!
 
    end = System.currentTimeMillis();
 
    total = end - start;
 
    System.out.println("Initialization Completed at: " + GetTimeStamp());
    System.out.println("Total Init Execution Time: "
        + CompactHumanReadableTimeWithMs(total));
  }
 
  /**
   * Because we aren't using a more advanced mechanism for properties
   * management, I have included this method to allow the main() method to set
   * the path to the main properties file used by the batch jobs.
   *
   * In my professional versions of this template, this method is embedded in
   * the init() method which basically will initialize the Resource Helper
   * component and obtain the properties from the configuration tables instead.
   *
   * Again you shouldn't touch this method's implementation, instead use
   * readProperties() to customize what you do with the properties after the
   * properties load.
   */
  public synchronized void loadProperties(String appPropsPath)
      throws IOException {
    FileInputStream fis = null;
 
    try {
      fis = new FileInputStream(appPropsPath);
      appProps = new Properties();
      appProps.load(fis);
    } // End try block
    finally {
      if (fis != null) {
        try {
          fis.close();
        }
        catch (Exception e) {}
      }
    }
  }
 
  /**
   * This method performs the cleanup of any JDBC connections, files, sockets,
   * and other resources that your execution process or your initialization
   * process may have opened or created.
   *
   * Once again do not touch this method directly, instead put your cleanup code
   * in the customProcessCleanup() method.
   *
   * This method is called automatically in the last finally block of the main
   * method, and if there's an interrupt signal or other fatal issue where
   * somehow the finally block didn't get called the Runtime shutdown hook will
   * invoke this method on System.exit...
   *
   * @throws Exception
   */
  public synchronized void cleanup() throws Exception {
    long start, end, total;
 
    // This prevents cleanup from running more than once.
    if (ranCleanup) {
      return;
    }
 
    try {
      System.out.println("Starting Cleanup at: " + GetTimeStamp());
      start = System.currentTimeMillis();
 
      customProcessCleanup(); // Hook to the users Process Cleanup Method
 
      end = System.currentTimeMillis();
 
      total = end - start;
 
      System.out.println("Cleanup Completed at: " + GetTimeStamp());
      System.out.println("Total Cleanup Execution Time: "
          + CompactHumanReadableTimeWithMs(total));
 
      ranCleanup = true;
    } // End try block
    catch (Exception e) {
      /*
       * It has been my experience that the Operating System will clean up
       * anything we have "forgotten" to clean up. Therefore I do not want to
       * waste my production support team members' time at 3 AM to handle
       * "why did a database connection not close?" It will close eventually,
       * since it is just a socket, and even if it doesn't we'll catch this in
       * other jobs which may fail due to the database running out of
       * connections.
       *
       * However I usually have these exceptions emailed to our development team
       * for investigation the next day. For demo purposes I did not include my
       * Exception/Stacktrace Emailing utility, however I encourage you to add
       * your own.
       *
       * If you really need the process to exit non-ZERO because of the cleanup
       * failing, set the treatCleanupExceptionsAsFatal to true.
       */
      e.printStackTrace();
 
      if (treatCleanupExceptionsAsFatal) {
        throw e;
      }
    }
  }
 
  /**
   * This method wraps the customExecuteProcessing() method, which is where you
   * should add your custom process execution logic.
   *
   * Again like the other methods in this section of the class, do not modify
   * this method directly.
   *
   * For demo purposes I made it throw the generic Exception object so that your
   * customExecuteProcessing() method can throw any Exception it likes.
   *
   * @throws Exception
   */
  public synchronized void executeProcessing() throws Exception {
    long start, end, total;
 
    ranCleanup = false;
 
    System.out.println("Start Processing at: " + GetTimeStamp());
    start = System.currentTimeMillis();
 
    customExecuteProcessing(); // Hook to the User's Custom Execute Processing
                               // Method! - Where the magic happens!
 
    end = System.currentTimeMillis();
 
    total = end - start;
 
    System.out.println("Processing Completed at: " + GetTimeStamp());
    System.out.println("Total Processing Execution Time: "
        + CompactHumanReadableTimeWithMs(total));
  }
 
  /**
   * This is the method that adds the shutdown hook.
   *
   * All this method does is properly invoke the
   * Runtime.getRuntime().addShutdownHook(Thread t); method by adding an
   * anonymous class implementation of a thread.
   *
   * This thread's run method simply calls the Process's cleanup method.
   *
   * Whenever I create a class like this, I envision it being ran two ways,
   * either directly from the main() method or as part of a larger component,
   * which may wrap this entire class (A HAS_A OOP relationship).
   *
   * In the case of the wrapper, adding the shutdown hook might be optional
   * since the wrapper may want to handle shutdown on its own.
   *
   */
  public synchronized void addShutdownHook() {
    Runtime.getRuntime().addShutdownHook(new Thread() {
      public void run() {
        try {
          cleanup();
        }
        catch (Exception e) {
          e.printStackTrace();
        }
      }
    });
  }
 
  /**
   * This method is only provided in case you are loading properties from an
   * input stream or other non-standard source that is not a File.
   *
   * It becomes very useful in the wrapper class situation I described in the
   * comments about the addShutdownHook method.
   *
   * Perhaps the wrapping process reads properties from a Database or a URL?
   *
   * @param appProps
   */
  public synchronized void setAppProperties(Properties appProps) {
    this.appProps = appProps;
  }
 
  /**
   * Used to detect which mode the cleanup exceptions are handled in.
   *
   * @return
   */
  public boolean isTreatCleanupExceptionsAsFatal() {
    return treatCleanupExceptionsAsFatal;
  }
 
  /**
   * Use this method to set if you want to treat cleanup exceptions as fatal.
   * The default, and my personal preference, is not to make these exceptions fatal.
   * But I added the flexibility into the template for your usage.
   *
   * @param treatCleanupExceptionsAsFatal
   */
  public void setTreatCleanupExceptionsAsFatal(
      boolean treatCleanupExceptionsAsFatal) {
    this.treatCleanupExceptionsAsFatal = treatCleanupExceptionsAsFatal;
  }
 
  // ------------------------------------------------------------------->
 
  // Start methods that need to be customized by the user
  // ------------------------------------------------------------------->
 
  /**
   * In general for performance reasons and for clarity even above performance,
   * I like pre-caching the properties as Strings or parsed Integers, etc,
   * before running any real business logic.
   *
   * This is why I provide the hook to readProperties which should read
   * properties from the appProps field (member variable).
   *
   * If you don't want to pre-cache your property values you can leave this
   * method blank. However I believe it's a good practice especially if your
   * batch process is a high speed ETL Loader process where every millisecond
   * counts when loading millions of records.
   */
  private synchronized void readProperties() {
    System.out.println("Add Your Property Reads Here!");
  }
 
  /**
   * After the properties are read from the readProperties() method this method
   * is called.
   *
   * It is provided for the user to add custom initialization processing.
   *
   * Let's say you want to open all JDBC connections at the start of a process,
   * this is probably the right place to do so.
   *
   * For more complex implementations, this is the best place to create and
   * initialize all your sub-components of your process.
   *
   * Let's say you have a DbConnectionPool, a Country Code Mapping utility, an
   * Address Fuzzy Logic Matching library.
   *
   * This is where I would initialize these components.
   *
   * The idea is to fail-fast in your batch processes, you don't want to wait
   * until you processed 10,000 records before some logic statement is triggered
   * to lazy instantiate these components, and because of a network issue or a
   * configuration mistake you get a fatal exception and your process exits,
   * and your data is only partially loaded and you or your production support
   * team members have to debug not only the process but also whether the
   * portion of the data already loaded made it in ok. This is extremely
   * important if your batch process interacts with real-time system
   * components such as message
   * publishers, maybe you started publishing the updated records to downstream
   * consumers?
   *
   * Fail-Fast my friends... And as soon as the process starts if possible!
   */
  private synchronized void customProcessInit() {
    System.out.println("Add Custom Initialization Logic Here!");
  }
 
  /**
   * This is where you would add your custom cleanup processing. If you open
   * any connections, files, sockets, etc. and keep references to these
   * objects/resources opened as fields in your class which is a good idea in
   * some cases especially long running batch processes you need a hook to be
   * able to close these resources before the process exits.
   *
   * This is where that type of logic should be placed.
   *
   * Now you can throw any exception you like, however the cleanup wrapper
   * method will simply log these exceptions, the idea here is that, even though
   * cleanup is extremely important, the next step of the process is a
   * System.exit and the operating system will most-likely reclaim any resources
   * such as files and sockets which have been left opened, after some bit of
   * time.
   *
   * Now my preference is usually not to wake my production support guys up
   * because a database connection (on the extremely rare occasion) didn't close
   * correctly. The process still ran successfully at this point, so just exit
   * and log it.
   *
   * However if you really need to make the cleanup be truly fatal to the
   * process you will have to set treatCleanupExceptionsAsFatal to true.
   *
   * @throws Exception
   */
  private synchronized void customProcessCleanup() throws Exception {
    System.out.println("Add Custom Cleanup Logic Here!");
  }
 
  private synchronized void customExecuteProcessing() throws Exception {
    System.out.println("Add Custom Processing Logic Here!");
  }
 
  // ------------------------------------------------------------------->
 
  /*
   * Start String Utility Methods These are methods I have in my custom
   * "StringUtils.java" class I extracted them and embedded them in this class
   * for demonstration purposes.
   *
   * I encourage everyone to build up their own set of useful String Utility
   * Functions please feel free to add these to your own set if you need them.
   */
  // ------------------------------------------------------------------->
 
  /**
   * This will return a string that is a human readable time sentence. It is the
   * "compact" version because instead of having leading ZERO Days, Hours,
   * Minutes, Seconds, it will only start the sentence with the first non-zero
   * time unit.
   *
   * In my string utils I have a non-compact version as well that prints the
   * leading zero time units.
   *
   * It all depends on how you need it presented in your logs.
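   *
   * Example: CompactHumanReadableTimeWithMs(90061500) returns
   * "1 Day, 1 Hour, 1 Minute, 1 Second, 500 Milliseconds".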
   */
  public static String CompactHumanReadableTimeWithMs(long milliSeconds) {
    long days, hours, inpSecs, leftOverMs;
    int minutes, seconds;
    StringBuffer sb = new StringBuffer();
 
    inpSecs = milliSeconds / 1000; // Convert Milliseconds into Seconds
    days = inpSecs / 86400;
    hours = (inpSecs - (days * 86400)) / 3600;
    minutes = (int) (((inpSecs - (days * 86400)) - (hours * 3600)) / 60);
    seconds = (int) (((inpSecs - (days * 86400)) - (hours * 3600)) - (minutes * 60));
    leftOverMs = milliSeconds - (inpSecs * 1000);
 
    if (days > 0) {
      sb.append(days);
      sb.append((days != 1 ? " Days" : " Day"));
    }
 
    if (sb.length() > 0) {
      sb.append(", ");
    }
 
    if (hours > 0 || sb.length() > 0) {
      sb.append(hours);
      sb.append((hours != 1 ? " Hours" : " Hour"));
    }
 
    if (sb.length() > 0) {
      sb.append(", ");
    }
 
    if (minutes > 0 || sb.length() > 0) {
      sb.append(minutes);
      sb.append((minutes != 1 ? " Minutes" : " Minute"));
    }
 
    if (sb.length() > 0) {
      sb.append(", ");
    }
 
    if (seconds > 0 || sb.length() > 0) {
      sb.append(seconds);
      sb.append((seconds != 1 ? " Seconds" : " Second"));
    }
 
    if (sb.length() > 0) {
      sb.append(", ");
    }
 
    sb.append(leftOverMs);
    sb.append((leftOverMs != 1 ? " Milliseconds" : " Millisecond"));
 
    return sb.toString();
  }
 
  /**
   * NVL = Null Value. In my experience, most times we want to treat empty or
   * whitespace-only strings as NULLs.
   *
   * So this method is here to avoid a lot of if (s == null ||
   * s.trim().length() == 0) all over the place; instead you will find
   * if (IsNVL(s)).
   */
  public static boolean IsNVL(String s) {
    return s == null || s.trim().length() == 0;
  }
 
  /**
   * Simply returns the current timestamp as a String.
   *
   * @return the current date/time as a String
   */
  public static String GetTimeStamp() {
    return (new java.util.Date()).toString();
  }
 
  // ------------------------------------------------------------------->
 
  // Start Main() Helper Static Methods
  // ------------------------------------------------------------------->
 
  /**
   * This method returns true if the command line arguments are valid, and false
   * otherwise.
   *
   * Please change this method to meet your implementation's requirements.
   */
  private static boolean CheckCommandLineArguments(String[] args) {
    boolean ok = false;
 
    /*
     * This is configured to make sure we only have one parameter, which is the
     * app properties file. We could have made it more advanced and actually
     * checked if the file exists, but just checking that the parameter exists
     * is good enough for demo purposes.
     */
    ok = args.length == 1 && !IsNVL(args[0]);
 
    return ok;
  }
 
  /**
   * This prints to STDERR (a common practice), the command line usage of the
   * program.
   *
   * Please change this to meet your implementation's command line arguments.
   */
  private static void PrintUsage() {
    StringBuffer sb = new StringBuffer();
 
    sb.append("\nUsage: java ");
 
    sb.append(DoNothingStandaloneProcess.class.getName());
 
    /*
     * Modify this append call to have each command line argument name example:
     * sb.append(
     * " [APP_PROPERTIES_FILE] [SOURCE_INPUT_FILE] [WSDL_URL] [TARGET_OUTPUT_FILE]"
     * );
     *
     * For demo purposes we will only use [APP_PROPERTIES_FILE]
     */
    sb.append(" [APP_PROPERTIES_FILE]");
 
    sb.append("\n\n");
 
    System.err.print(sb.toString());
  }
 
  /**
   * I usually like the Batch and Daemon Processes or Utilities to print a small
   * Banner at the top of their output.
   *
   * Please change this to suit your needs.
   */
  private static void PrintWelcome() {
    StringBuffer sb = new StringBuffer();
 
    sb.append("\n*********************************************\n");
    sb.append("*       Do Nothing Standalone Process       *\n");
    sb.append("*********************************************\n\n");
 
    System.out.print(sb.toString());
  }
 
  /**
   * This method simply prints the process startup time. I found this to be
   * very useful in batch job logs. I probably wouldn't change it, but you can
   * if you really need to.
   */
  private static void PrintStartupTime() {
    StringBuffer sb = new StringBuffer();
 
    sb.append("Startup Time: ");
    sb.append(GetTimeStamp());
    sb.append("\n\n");
 
    System.out.print(sb.toString());
  }
 
  // Start Main() Method
  // ------------------------------------------------------------------->
 
  /**
   * Here's your standard main() method which allows you to start a Java program
   * from the command line. You can probably use this as is, once you rename the
   * DoNothingStandaloneProcess class name to a proper name to represent your
   * implementation correctly.
   *
   * MAKE SURE: To change the data type of the process object reference to the
   * name of your process implementation class. Other than that, you are good to
   * go with this main method!
   */
  public static void main(String[] args) {
    int exitCode;
    DoNothingStandaloneProcess process = null;
 
    if (!CheckCommandLineArguments(args)) {
      PrintUsage();
      exitCode = 1;
    }
    else {
      try {
        PrintWelcome();
 
        PrintStartupTime();
 
        process = new DoNothingStandaloneProcess();
 
        process.setTreatCleanupExceptionsAsFatal(false); // I don't believe
                                                         // cleanup exceptions
                                                         // are really fatal,
                                                         // but that's up to
                                                         // you...
 
        process.loadProperties(args[0]); // Load properties using the file way.
 
        process.init(); // Perform process initialization; again, I don't
                        // like overuse of the constructor.
 
        process.addShutdownHook(); // Just in case we get an interrupt signal...
 
        process.executeProcessing(); // Do the actual business logic
                                     // execution!
 
        // If we made it to this point without an exception, that means
        // we are successful, the process exit code should be ZERO for SUCCESS!
        exitCode = 0;
      } // End try block
      catch (Exception e) {
        exitCode = 1; // If there was an exception, the process exit code should
                      // be NON-ZERO for FAILURE!
        e.printStackTrace(); // Log the exception, if you have an Exception
                             // email utility like I do, use that instead.
      }
      finally {
        if (process != null) {
          try {
            process.cleanup(); // Technically we don't need to do this because
                               // of the shutdown hook, but I like to be
                               // explicit here to show when, during a normal
                               // execution, the call to cleanup should happen.
          }
          catch (Exception e) {
            // We shouldn't receive an exception here, but in case there is a
            // runtime exception, just print it and treat it as non-fatal.
            // Technically most if not all resources will be reclaimed by the
            // operating system as an absolute last resort, so we did our best
            // attempt at cleaning things up, but we don't want to wake our
            // developers or our production services team up at 3 in the
            // morning because something weird happened during cleanup.
            e.printStackTrace();
 
            // If we set the process to treat cleanup exception as fatal
            // the exit code will be set to 1...
            if (process != null && process.isTreatCleanupExceptionsAsFatal()) {
              exitCode = 1;
            }
          }
        }
      } // End finally block
    } // End else block
 
    // Make sure our standard streams are flushed
    // so we don't miss anything in the logs.
    System.out.flush();
    System.err.flush();
 
    System.out.println("Process Exit Code = " + exitCode);
 
    System.out.flush();
 
    // Make sure to return the exit code to the parent process
    System.exit(exitCode);
  }
 
  // ------------------------------------------------------------------->
 
}
 

Closing Remarks:

I hope you can see how such a simple template for Batch Jobs and other Standalone Processes, such as Utility Commands, can really help keep your code base clean and ensure anyone within your organization can debug, enhance, and support most if not all processes based on this template.

I’m very interested in any comments about how you use this template or one like it in your professional and personal programming projects, whether this template has given you any ideas, whether you have made any improvements to it, and in general any other comments you may have.

In my next post, I plan on discussing and sharing my template for Daemon Processes, which I call DoNothingDaemonProcess. It is very similar to this template, except when combined with the Unix command nohup it will run as a background process on a Unix/Linux Server. The process itself has some special utility functions to help make it an enterprise caliber daemon process, which can be controlled via a Batch Scheduler or other external Control Processes.

Just Another Stream of Random Bits…
– Robert C. Ilardi
 
Posted in Development

Project Thunderbolt – Robert’s Tesla Coil Project

===================================
Project Name: Thunderbolt
Project Domain: High Voltage Physics
===================================

Goal: To create a full scale Tesla Coil that produces at least 6in sparks, aka “artificial lightning”.

Current Status: Project was a success, with a complete full scale, full power test on Friday, July 15, 2011!

Me and My Tesla Coil:

Tesla Coils and Nikola Tesla: Check out WikiPedia for more information on what a Tesla Coil is and the history about them. Also, please read about their inventor, definitely one of the most important inventors throughout Human History, Nikola Tesla.

My interest in Tesla Coils started when I first visited Liberty Science Center as a child. They had a fully working Tesla Coil on display and would run demos, creating Artificial Lightning at will with the flip of a switch.

Ever since then I wanted to build and possess my very own Tesla Coil.

When I was younger I never attempted it because of the huge voltages involved, as well as a general lack of funding. My 2011 build cost approximately $1000.00 in parts and materials. I could probably build one for less now, having mastered a build, but a lot of my first Tesla Coil build was trial and error and there was a bunch of failed parts and ideas, which increased the total cost of completing a working Tesla Coil.

Here’s the Part List of my Thunderbolt Tesla Coil:

  1. 800 Feet of 24 AWG Magnet Wire
  2. 12,000V, 30mA Neon Sign Transformer (purchased used from eBay), non-Fault Tolerant (this is important, a Fault Tolerant transformer will cause your Tesla Coil not to work).
  3. 48X 0.49nF 20,000V Capacitors (I purchased 200 of these for about a dollar apiece off eBay. They are normally used in High Frequency Pulse Lasers.)
  4. 30 feet of 12 gauge solid copper wire (bare).
  5. 3in diameter, 24in length of PVC Pipe.
  6. 2X matching 3in PVC flange
  7. 2X 24x24in plywood board
  8. 16in 1×4 Wood Plank
  9. 2X 2in wide Steel brackets
  10. 2X 3in screws with matching washers and nuts.
  11. Line Filter to protect the house mains.
  12. 100 feet of 14 gauge insulated copper wire
  13. Erector Set for building metal frames for Capacitor Tank.
  14. Aluminum Dryer Vent Flex Hose.
  15. Various bits of 2×4 Wood for standoffs, etc.
  16. Various screws for mounting everything
  17. Various cuts of thin plywood for primary coil mounting.
  18. Liquid Nails glue for mounting parts that cannot be held together by conductive material such as screws.

Circuit Diagram:

Special thanks to the Tesla Coil Wikipedia Article for supplying the Circuit Diagram, and specifically the creator: Wikipedia User: Omegatron.

Creating the Spark:

I used the two 2in steel brackets and the 3in screws to create a static spark gap. It’s not the most efficient spark gap for Tesla Coils these days, but it’s the easiest to build, and if you add a PVC pipe to cover the gap and connect a high power vacuum like a shop vac/wet-dry vac, you can create a so-called “sucker spark gap” which will increase the efficiency of the Spark Gap. However, even without the vacuum enhancement the static spark gap still works well for creating 6in – 12in arcs and a wireless energy field that will light up an 18in fluorescent tube 5-6 feet away from the Tesla Coil.

Here’s the fully constructed Spark Gap. I did re-align it; as you can see in this picture, the screws are not facing each other perfectly square.

Here’s a picture of the complete spark gap in operation (the full Tesla Coil setup is pictured and visible in the background):

Here’s a test of the completed Spark Gap on my dining room table, with the gap connected directly to the Neon Sign Transformer; this is for fine tuning of the gap. You need to ensure it can arc with the Transformer connected alone, to prevent a full short circuit:

Check out this video of the spark gap in operation when connected to the Tesla Coil. What you will notice is that the spark is of much greater size, brightness and overall power than in the video of the spark gap simply connected directly to the Neon Sign Transformer, which I tested on my dining room table:

Creating the Capacitor Tank:

From a capacitance standpoint, the value is quite low compared to, say, an LED flasher or something like that; it’s in the nano-farad range. However, you need extremely high voltage, and the capacitors need to be able to withstand high frequency charging-discharging cycles.

I used 48 of my 0.49nF Pulse Capacitors in the following layout:

2 capacitors in series connected in banks of 4 of these pairs in Parallel, for a total of 8 capacitors per rack created out of erector set pieces. The banks are pictured here:

I then connected these banks in parallel with duplicate banks, totaling 6 banks of 8 capacitors, giving a total of 48 capacitors in this matrix configuration. This matrix of series and parallel capacitors gives a total measured capacitance of between 5.95 and 6.11 nano-farads by my multi-meter.
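
As a quick sanity check on those numbers, assuming the nominal 0.49nF rating per capacitor:

$$C_{pair} = \frac{0.49}{2} = 0.245\ \text{nF}, \qquad C_{bank} = 4 \times 0.245 = 0.98\ \text{nF}, \qquad C_{total} = 6 \times 0.98 = 5.88\ \text{nF}$$

which lines up with the 5.95 to 6.11 nF I measured, allowing for capacitor tolerance.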

Here’s the completely Assembled Capacitor “Tank”:

This physical layout caused arcs between the banks which were in parallel, plus the wires would arc right through the insulation, so I refactored it into a double stacked layout:

Assembling the Secondary Coil:

It took me around 4 hours straight to wind the 800 feet of 24 gauge Magnet wire around the 3in PVC. 800 feet wound over the 9.43in circumference of the PVC pipe gives you a little over 1000 winds. This is perfect, because my goal was a primary to secondary coil ratio of 1:100. That is, 1 turn of the primary to 100 turns of the secondary.
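
For reference, the turn count falls straight out of the wire length and the pipe circumference:

$$N \approx \frac{800\ \text{ft} \times 12\ \text{in/ft}}{9.43\ \text{in}} = \frac{9600}{9.43} \approx 1018\ \text{turns}$$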

The secondary itself is the nice upright red magnet wire most people associate with Tesla Coils. It usually has a tubular or spherical terminal at the top where the discharges occur.

Here’s me, winding the secondary coil:

What you need to try to do is not have any overlaps in your coil. A few here and there don’t hurt the result, I found. But take your time; otherwise it’s a waste.

Here’s the completed secondary coil without terminal:

I mentioned at the top of the secondary coil you need a spherical or tubular terminal.

I used 3in diameter Aluminum Dryer Vent Flex Hose to create a circular tube at the top of my secondary coil.

Here’s a picture of the terminal:

As you can see I used Aluminum foil to close off any gaps in the flex hose, and the shape is more of an oval. I also used Liquid Nails glue to fasten the tube down to a piece of wood, which I then fastened to one of the PVC flanges, so that I can connect it easily to the top end of my secondary coil.

I spent a lot of time thinking about how to make this entire setup modular, so I could transport it from my house to my mom’s and friend’s houses for demonstrations. The flanges worked great for this purpose.

Here’s the completed secondary coil setup with the base board. I used the second PVC flange to connect the secondary coil to the plywood board, again using Liquid Nails, as I didn’t want any conductive materials where I could have a stray arc hitting the structure.

Creating the Primary Coil:

Although the primary coil only has 10 turns, it was more challenging to build for me than the secondary coil. I went through a few iterations before getting it right.

I used the 12 gauge solid bare copper wire to create the primary coil.

I eventually added one more turn of this wire to create a strike rail, which is connected right to ground to ensure there is no arc from the secondary to the primary, which would destroy the entire coil. The strike rail is simply another turn of copper wire at the very top of the primary coil structure; importantly, it is not a complete circle, it needs to be left open, and it has one connection straight to true ground (the Earth).

Other than this, you just need to connect everything up as shown in the circuit diagram above.

Here’s some photos of the Tesla Coil completed and operating as well as some video:

The Tesla Coil operating in my backyard. The target object is a steel wrench on a camera tripod:

Here’s a video of me holding an 18in fluorescent bulb, acting as a human ground; it proves my Tesla Coil is capable of Wireless Energy Transmission:

Wireless Electricity works! And yes, my Tesla Coil produces it!

Here’s a video of the Tesla Coil just striking a grounded target, the camera is directly below the coil, so it’s a great view of the lightning:

This is a head on view of the lightning. It’s a pretty interesting angle:

One of the first Full Power Tests:

For more videos please check out the following from the Ilardi.com Tesla Coil Page.

I hope this post was fun and interesting. I would really appreciate your comments!

Just Another Stream of Random Bits…
– Robert C. Ilardi
 
Posted in Randomness

Helping your developers to maintain other people’s code

What is so difficult about maintaining another developer’s code? How can you as a development manager or architect ensure that all developers on your team can maintain any other team member’s code?

These are the two questions I want to quickly answer in this post. This post will lead into two other posts about creating a model or template for Batch Jobs and another one for Standalone Daemon Processes.

To answer the first question I posed above, ask yourself: “When I was a developer, what was the hardest thing about trying to fix a bug in someone else’s code? Where do I start?” You will quickly realize that the answer lies in the question itself: “Where do I start?” This is usually the most difficult problem faced by a developer when they begin looking at someone else’s process.

As a developer, the first thing you want and need to do is run the process on your own machine or in your own development environment, so that you can watch what it’s doing and try to observe the bug for yourself. You need to do this on your own machine for a variety of reasons, the first being that it’s obviously not safe, and in most industries or companies not even allowed, to debug in a production environment.

Setting up your development environment to correctly run and then debug someone else’s code is usually the most difficult thing about fixing a bug. The bug itself, while challenging, is usually a secondary issue when you first take over maintaining someone’s code.

As a development manager or architect you can help eliminate this issue altogether by creating a template or model for ALL Batch Jobs in your system, and then again for ALL Daemon Processes in your system.

It sounds simple, and when you think about it, you will probably ask yourself “Aren’t most systems architected in this way?” The answer from my experience is NO. Usually the collection of Batch Jobs and Daemon Processes in large enterprise class systems varies as much as the number of core developers or team leads you have in your development group.

This presents a big problem when it comes to maintenance of these jobs and processes, because you cannot quickly ramp up fixing bugs or making enhancements, especially when the developer who originally wrote the process leaves the company or moves on to another project.

Also, the ability to reverse engineer someone’s process is a special skill that not all developers possess. I have found that usually if I can set up a process to run in a team member’s development environment, they can then fix the bug or make the enhancement; the actual setup is usually the problem, and one I can only rely on a handful of people to cover.

I have touched upon this issue in my other posts on How We Build Software, especially around resource management, where I spoke about standardizing the loading of properties or configuration and the obtaining of database connections.

I’m limiting this post to Batch Jobs and Daemon Processes; however, the same issue exists across all types of components of an enterprise system, from Middleware to UIs. I chose to limit our discussion today to Batch Jobs and Daemons because they are the most common and simplest examples where issues of differing coding styles directly affect the team’s ability to turn around fixes and enhancements quickly.

Also, these elements are usually where developers have the most freedom of coding expression.

By creating a template for ALL Batch Jobs and Daemon Processes within your systems, which you mandate is followed by all your team leads, architects, and developers, you will ensure that the maintenance responsibilities for these processes can be transferred more easily from team member to team member.

Once your developers can run one of your Jobs or Processes, you can be sure they can run ANY of these jobs, and once they have them running in their own development environment, debugging and enhancements follow much more quickly.

If you combine this with my recommendations on resource management, building an Enterprise Commons, and everything else I mentioned in my how we build software post, I’m sure you will have a consistent system which can be maintained for decades.

In my next two posts, I plan on going over my own two templates for Batch Jobs and Standalone Daemon Processes. I will even give you the actual code of the templates.

Just Another Stream of Random Bits…
– Robert C. Ilardi

Posted in Software Management

Data Record Request Framework

In this post we will discuss a framework I have designed and implemented, which is used to store and manage “Request Data” separately from the golden source or actual database.

The root concept and benefit of this, is that the Request Framework plus the Request Data Model enables a system or application to keep in-flight data that may or may not be associated with a workflow process separate from the live production data.

Because of this, we sometimes refer to this as In-Flight or Staging Area data.

Another common use for this framework is in large scale complex web applications such as a Tax Forms application or some other web application with potentially Hundreds of fields.

While a user is working on the Data Record (which includes the sum total of fields that a Request Type, or set of screens within the application, supports editing on), which we refer to as the “Request” itself, the system has the freedom of saving the data at any time without affecting the Live Production data, or what we refer to as the Golden Source Record or Copy of the data.

A common web application scenario where you might want to use this is when, for scalability purposes, you don’t want to store the Request object in the HTTP Session, and instead you just store the Request Id, which associates the user with a record in the Request Data Model. Now, say in a “Wizard-like” application, every time the user clicks the Next button to proceed to the next section of the set of forms, the app can save the request updates to the database, again without affecting the Golden Source until the full set of form pages in the Wizard’s guided path is completed.

In a workflow based system, you can use this concept to store the data in a persistent data store such as a database while the Workflow request itself is traveling from step to step or queue to queue in the Process path. Complex business workflows sometimes take days or even weeks to complete a single request in certain circumstances (say, if you want to open an account for a client of a bank and you are waiting for them to send you signed documents), and therefore keeping the request data in a persisted state while in-flight is invaluable in a workflow based system.

Some workflow engines support this concept out of the box, that is, storing user defined fields in the workflow database tables themselves; however, I have found this to be inflexible, and if we refer back to my Adapter-Factory model for Vendor Product Integration, you want to minimize the use of “extended” non-core product functions for the sake of portability.

What is the Request Framework exactly?

The Request Framework is a combination of three components.

  1. Request APIs
    1. Store
      1. Stores, in a target abstract data store (aka the Request Database), the Name-Value Pair set transformed from the in-memory Request Object Model via the Object Codec. (The store could be a database or a file, or any other persistent data storage mechanism. I have also used the transformed name-value pairs to serialize an object over sockets.)
    2. Load
      1. This is simply the opposite I/O operation of the Store API. It loads the Name-Value Pair form from the data store, and using the Object Codec transforms the data back into an in-memory Request Object Model object.
    3. Archive
      1. I use this API to move Requests that have completed their workflow process to duplicate Request Data Map and Narrows Map tables which I call the archive version of these tables.
      2. This is used to ensure the performance of loading and storing the requests which are still active in the workflow process is maintained over the lifetime of the application. As Request Counts grow, we don’t want completed requests, which will not be loaded often, to slow down the performance of the main tables. The tables, which are described below, are very narrow, but become very tall due to the highly normalized form of name-value pair storage.
      3. I have put a check in my implementations of the load API to detect if a Request is in the Active or Archive tables, and load the request no matter where it is. This is useful when an auditor comes and wants to see a request from N-number of years ago.
    4. Clone
      1. This API again is self-explanatory. Often users want to “copy” a request they already submitted and then just change the few fields they need to create the new request. This is one of the key user activities that benefits from this API. However, some internal system operations can also benefit from this API.
      2. It can also be used to clone a request from a production environment to a UAT environment for production support testing and debugging of a production issue with a particular request.
    5. Delete
      1. Depending on the nature of the business, you may need to differentiate between physically removing a request from the database and simply marking it as deleted often referred to as a Logical Delete.
      2. You can put a flag in your request transfer object to this API so the implementation can support both physical and logical deletes.
      3. Logical deletes are used very often over physical deletes in highly regulated industries, due to auditing requirements.
  2. The Object Codec
    1. The Object Codec implementation that I prefer to use in my own systems will be saved for the next article I post. However for now, all you need to know is that you need a way to “Serialize” a Request Object to some text-based format for fast and easy storage to a Persistent Data Store such as a Database; that’s the Encode Half of the Codec. And the Decode Half of the Codec is the implementation to take the Text-Based form of the Request Object and “De-Serialize” it back to the In-Memory Request Object, once retrieved from the Data Store. The actual Data Store functions are separate from the Object Codec by design, so that many different types of Data Storage implementations can be used without bloating the code of the Object Codec. The only job of the Object Codec should be to Serialize and De-Serialize the Request Object.
  3. The Request Data Model
    1. This is the final piece of the puzzle. The Request Data Model is designed to store and load any single Request extremely quickly (in the case of my systems, sub-second). In my experience we usually test the performance of the Data Model with a Request Object payload of around 500 to 1000 fields per request.
    2. The data model must be designed to accommodate the Serialized Form produced by your Object Codec Implementation.

The Request Framework

The Request Framework is the set of APIs that wrap the calls to the Object Codec and the Data Store Persistence layer to interact with the Request Data Model; in my systems this is usually JDBC. I prefer direct JDBC over ORM Frameworks, for both speed and fine-grained control over the SQL, to keep to the sub-second store and load times usually required by my application users.

Solution Overview:

  • Request Objects Flexibility
    • Developers can design any complex Java-Bean Compliant Object as a request object, without having to take into consideration the database model.
    • Request Objects should encapsulate all fields related to the Golden Source Data Model as Object Model Objects within a root Request object class.
    • If it’s a workflow driven system, the Workflow Process Keys should also be contained within the Request Object.
    • Request Processing, Golden Source Writes, and Workflow Actions can eventually be handled in a layer I refer to as Smart Persistence, which we will discuss in a separate article.
    • If the Golden Source Data Model contains distinct data entities, then there should be one Request Class for each Data Model Entity.
    • Also if required by business requirements, there can be combination Request Types; requests that combine multiple entity types from the Data Model.
      • However in my experience you should always start with a single Request Object for each Data Model Root Entity. (Examples: Account Request, Client Request, Product Request)

Serialized Form:

I prefer to serialize or “transform” an Object in-memory to text based Name-Value Pairs. The Name or Key of the pair is the fully-qualified Variable or Field Name using the “.” (period/dot) object notation and “[ ]” array notation for array elements.

There are only name-value pairs for “scalar” non-user-defined objects. Therefore only built-in types, plus Strings, Dates, Enums, and other basic types, can be stored as a name-value pair. But since all user-defined data types are simply Objects which contain the native or built-in types for the actual data elements, user-defined objects are stored as multiple name-value pairs, one pair for each variable within the user-defined type.

Expanding upon this, we can store N-level nested object’s data using the Dot object dereferencing notation to create the fully-qualified names.

Examples of Names:

Note: Root Object Name is: AccountRequest (this will NOT be included in the fully-qualified name).

    • addresses[0].line1
    • addresses[1].type
    • ratings.sAndP.ratingValue
    • requestorName
    • requestId

The values of the name-value pairs are the String representation of the field or variable’s actual value. For a String, this would be the value itself; numbers (int, float, double, long, short) are easily converted to text representations. Other built-in types, such as the Date objects which most modern languages support, can be converted either to a parsable Date-Timestamp string which the Decoder/Deserializer can convert back into the data object, or even to a Long integer which is the date’s representation as milliseconds elapsed since some Epoch. The value can be any text representation of the variable’s value which can be efficiently parsed back into the native data type in-memory once the name-value pair is processed by the Deserializer/Decoder of the ObjectCodec.

Examples of name-value pairs:

    • addresses[0].line1 = 123 Main Street
    • addresses[1].type = Mailing Address
    • ratings.sAndP.ratingValue = AAA
    • requestorName = John Smith
    • requestId = 6474721
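
To make the Encode half of the Object Codec concrete, here is a minimal, hypothetical Java sketch of flattening a Java-Bean-style Request Object into dot-notation name-value pairs via reflection. It is deliberately simplified: it handles scalars and Lists only, skips nulls, ignores cycles, and has no Narrows Map (polymorphism) support; the class and method names are made up.

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SimpleObjectEncoder {

  /** Flattens a Java-Bean-like object graph into dot-notation name-value pairs. */
  public static Map<String, String> encode(Object root) throws IllegalAccessException {
    Map<String, String> pairs = new LinkedHashMap<String, String>();
    encodeFields(root, "", pairs);
    return pairs;
  }

  private static void encodeFields(Object obj, String prefix, Map<String, String> pairs)
      throws IllegalAccessException {
    for (Field f : obj.getClass().getDeclaredFields()) {
      if (Modifier.isStatic(f.getModifiers())) {
        continue; // skip statics; we only care about instance data
      }
      f.setAccessible(true);
      String name = prefix.length() == 0 ? f.getName() : prefix + "." + f.getName();
      encodeValue(f.get(obj), name, pairs);
    }
  }

  private static void encodeValue(Object value, String name, Map<String, String> pairs)
      throws IllegalAccessException {
    if (value == null) {
      return; // skipping nulls for brevity; a real codec may need to record them
    }
    if (value instanceof String || value instanceof Number || value instanceof Boolean
        || value instanceof Character || value instanceof Enum
        || value instanceof java.util.Date) {
      pairs.put(name, value.toString()); // scalar -> one name-value pair
    }
    else if (value instanceof List) {
      List<?> list = (List<?>) value;
      for (int i = 0; i < list.size(); i++) {
        encodeValue(list.get(i), name + "[" + i + "]", pairs); // array notation
      }
    }
    else {
      encodeFields(value, name, pairs); // nested user-defined object -> recurse
    }
  }
}

Running encode() on a hypothetical AccountRequest bean holding an addresses List would produce exactly the kind of pairs listed above.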

The Request Data Model

The Request Data Model can be reduced to a Conceptual Model of only THREE basic entities or tables. The diagram below shows these tables and their cardinality.

Conceptual Model:

Logical Model:

The Tables:

  • Request
    • This is the “main” table of the request data model.
    • Contained within it is the basic data about a request, otherwise called the “header”
    • For each unique Request Id there is one and only one row in this table.
    • Table Structure:

  • Data Map
    • The data map table stores the Name-Value Pairs of the requests.
    • For a single unique Request Id, there may be N-number of rows of Name-Value Pairs within the Data Map table.
    • There is at least ONE row in this table for every primitive/native built in data type or ObjectCodec supported Data Type within the Java Bean compliant Request Object model.
      • The value field is NOT defined as a CLOB/BLOB; instead, for efficiency, it’s defined as a VARCHAR.
        • For elements whose data length is longer than the length of the VARCHAR field defined in the database table, we introduce a sequence number field, and the name-value pair is split across the multiple rows.
          • When the Request Data Map is being loaded back from the database, the name-value pairs which have been split into multiple rows will be concatenated back into a single value, using the sequence number to ensure the proper ordering when reassembling the string representation of the variable value.
          • If you divide the LENGTH of the VALUE by the MAX LENGTH of the defined VARCHAR field in the database, you will get the number of rows the name-value pair needs to be split into. (If it doesn’t divide evenly, just add 1; you can either use modulus for this, or use integer division, multiply the result by the length of the VARCHAR field, and subtract that from the actual data length; if the result is greater than ZERO, add 1 row. See the sketch after this table list.)
    • Table Structure:

  • Narrows Map
    • This table is only used when a variable or field within the Request Object Model is a base or abstract type (basically we are using Polymorphism), and the field references some sub-class or concrete type.
    • The concrete data type information, mainly the fully-qualified class name is stored in this table, associated with the object notation path of the field that references it.
    • This is so the ObjectCodec can properly decode complex Request Objects where the original creation code of the Request Object leverage the properties of the language to use Polymorphism.
    • This is sort of an extended feature, and in general in your own projects if you want to use this name-value pair design for storing request data, you can leave this part out and just make the coding convention for your project restrict using polymorphism within your request object model.

  • Request Xref
    • Xref of course is short for Cross Reference, a commonly defined table in many relational database schemas.
    • The Request Cross Reference in this case, is used to store Unique ID or Keys other than the Request ID itself, that are related to the Request.
    • These can be IDs for the workflow engine to use.
    • They can also be application specific IDs, such as a Golden Source primary key, so that we can track which requests have been associated with that Golden Source record for reporting and audit trail purposes. (Although there are many other ways to achieve this, depending on your data model).
    • It can also be used to relate this request to a request within another system, in the case when you have programmatic inter-system integration. (An external system can raise or update data on a request within your system / Enterprise Application Integration).
    • Table Structure:

  • Workflow States
    • This may be a set of tables, depending on your workflow audit trail requirements.
    • These tables are defined to store Workflow Step Audit information, such as the usernames and actions the user took at each step within a workflow process for a particular Request.
    • Now, the workflow engine itself stores this information; however, in my systems I duplicate this outside of the workflow’s native data store, to maintain a loosely coupled state between my system and the vendor supplied workflow engines; again, see my Adapter-Factory Vendor Product Integration Model for more information on this.
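
The row-splitting arithmetic from the Data Map description above reduces to one line of integer ceiling division; here is a tiny, hypothetical helper:

  /**
   * Number of Data Map rows needed to store a value of dataLength characters
   * in a VARCHAR column holding at most maxVarcharLength characters.
   * Equivalent to the modulus / integer-division logic described above.
   */
  public static int dataMapRowCount(int dataLength, int maxVarcharLength) {
    return (dataLength + maxVarcharLength - 1) / maxVarcharLength; // integer ceiling
  }

  // Example: a 9,500 character value in a VARCHAR(4000) column:
  // dataMapRowCount(9500, 4000) == 3 rows (4000 + 4000 + 1500 characters)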

The Request Framework Advantage:

I hope from the above description of my Request Framework and Data Model, you can see real world applications where this would be extremely useful in your own applications. I know for me, both on my professional projects and my personal programming projects, I have seen this framework and data model grow and become the most useful tool in my arsenal for tackling complex Golden Source and In-Flight data separation issues, as well as delivering a solution to the business requirement of being able to change the Request Model quickly, for short time-to-market releases to production. The framework and data model above definitely deliver in the agile development world. In an upcoming article I will dive deeper into the Object Codec utility which I use in conjunction with the request framework.

Just Another Stream of Random Bits…
– Robert C. Ilardi
Posted in Architecture

Windows verses Mac verses Linux? What do you use?

So I’m writing this quick little post from my favorite Starbucks, on my Mac Book Pro. Do you think I’m a Mac-preferred User?

Well, Mac OS X is BSD (Unix) based, if you are a programmer you probably already know this, but I wonder how many normal users actually do, or even care?

The first computer I owned was a Commodore 64 in the early 80’s. Which is the first computer I learned to Program on when I was 7 years old (See Commodore BASIC).

I started using MS-DOS in the late 80’s and Windows in the early 90’s. I still occasionally run FreeDOS using Virtual Box for virtualization on Windows.

I first installed Linux on my home PC, RedHat 5.1 in 1998.

I’m actually a late bloomer in the Apple world. I never even touched a Mac until I was being interviewed by Apple at their HQ in Cupertino (Yes, I actually have been to One Infinite Loop), and they asked me to program a component that had to implement a pre-defined interface (I won’t give the interview question away out of respect to the team). It was designed so they could watch me write the code for a Producer-Consumer in real time, which they then ran using a multi-threaded test driver program they had already prepared.

Here’s a picture I snapped of an entrance to One Infinite Loop:

Bragging Rights: I did it in record time according to the team.

And yes, I did get the job, but I had to turn it down for personal reasons.

Here’s me studying the night before my Full Tech Interview at Apple’s Campus:

Anyway, back to my original point: during the Apple interview, I told them this was my first time programming on a Mac. They said that’s OK, we don’t care, but you are going to love it, and just told me the difference of copying and pasting using the Command key instead of the Control key.

Truth, is today, I have multiple Windows boxes, a couple of Linux Servers, and even embedded Linux single-board computers, and a Mac Book Pro at home.

This post is not an argument for or against any one particular Operating System. I now think the common argument over which is better Windows or Mac, is as irrelevant as the arguments I had in the early 90’s over which Gaming Console was better, Super Nintendo or Sega Genesis.

None of them are better! They are all just different and each have pluses and minuses. Each have their own cool features, and each have some ways of doing things that suck and make you ask yourself “WHY?”. Each have their benefits, and finally each have their own vulnerabilities.

If you really believe whatever OS you use does not have any vulnerabilities, you are fooling yourself.

So my take on it is: why not have one of each! If you are a professional software developer, you should be trying out different Operating Systems, and since developers make decent money, why not buy one of each?

In my case, I mainly run three OSes at home: Windows, Mac OS X, and Linux (currently Ubuntu, Fedora, and Angstrom Distros).

So what Operating System does an Enterprise Programmer use? All of them…

Just Another Stream of Random Bits…
– Robert C. Ilardi
Posted in Randomness

Adapter Factory Design Pattern

It is often the case that Enterprise Applications require one or more Vendor based products to be integrated into the home grown system.

While such products are often useful, there are many issues that arise from simply embedding a product into your code.

In my own past experience, I have integrated everything from Workflow Engines to Unstructured Data Search Indexes.

Some of the common issues that come up when integrating a product or service (it can be commercial or open source or even another home grown framework used within the organization) are:

  • Deployment of new versions of the product.
  • A high level architectural or firm wide product support or vendor change.
  • Trying to integrate multiple products of the same type supplied by multiple vendors seamlessly in your system.
  • Having to transition to a new product or version over an extended period of time or more than one release version of your application.

Many years ago I was faced with firm wide IT political issues around Workflow Engine products; at the time I was using a home-grown patented engine, and the firm’s architecture group decided that all workflow based applications must use Tibco’s Staffware product. I came up with a strategy for supporting both our own in-house engine and Staffware simultaneously, using two patterns from the Gang of Four (GoF) playbook.

I combined the Adapter Pattern with the Factory Pattern to create what I call the Adapter-Factory Mechanism for Product Integration.

Before we get into the details on how it actually works, I want to share with everyone the diagram…

High Level Design Diagram:

How does it work?

If we take my workflow engine example, I think it will be pretty simple to explain.

Note: Most modern workflow engines are large software products which usually include entire UI builders, and even their own application servers in some instances. In my experience I only leverage workflow engine packages for their workflow processing; so basically I use their APIs to interact programmatically with their engines to move requests around a workflow process.

Each Workflow Engine exposes its own set of APIs; in the case of Java it’s usually a set of JARs, and the APIs can be rather complex, including admin functions and various other things that we might not be interested in.

The first step in the process of creating an Adapter-Factory is to declare a new Interface which every Adapter will implement. This Interface declares certain methods that are required by your application and are somewhat common across the multiple vendors or products you need to integrate with.

In the case of the workflow engine example, these methods are things like GetQueues(), TransitionWorkflow(), etc.

One of the fundamental ideas is that you do NOT want anyone outside of the Adapter layer to deal with the native objects used by the vendor or product specific APIs. So the second step is creating what I refer to as “Proxy” objects, which may have a mirror image of the fields of the Vendor specific objects, but which can NEVER reference any vendor data types.

Part of the job of the Adapter is to translate to and from these proxy objects and the native vendor objects.

The next step is to implement one Adapter per Vendor/Product, or even one Adapter per Vendor/Product-Version combination (in the case of where you need to support multiple versions of the same Product).

The ability to add a new Adapter at any time mitigates the risk that a Vendor may produce a new version of a product which you for one reason or another (such as support contracts) need to migrate to in the future. You simply add a new Adapter for the new version of the Product, and keep the old version active in your code base as a fall back strategy or during a staggered rollout.

Because we “proxy” or mirror every native vendor/product object and never expose those native objects above the Adapter level, this, besides making it possible to support multiple vendors or versions at the same time, minimizes the changes to the rest of the system if a new vendor or version comes on board.

Once we have implemented one or more Adapters for the Vendors or Products we need to support in this Adapter-Factory implementation, the next step is to create the Factory itself.

Normally, I make it pretty simple, I have a “Default” Adapter type that will be returned if the caller of the factory does not pass the Name or Flag representing a specific Adapter Type to return. This default Adapter is usually configured via a property so I can change the default version without having to change and recompile the factory itself. I usually make Factory objects like these, Singletons. Other than these two specifications, the Factory just follows the normal Factory Design Pattern.

Finally, I always wrap the Factory and the calls to the Adapter implementations in a Facade. This simplifies the client code’s interaction with the Adapter-Factory itself and makes it very easy to use this design pattern without putting a burden of understanding the pattern itself on the side of the client code developers.
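
To tie these steps together, here is a minimal, hypothetical Java sketch of the whole arrangement, using the workflow engine example (one source file for brevity; all class, method, and property names are made up, and the two adapters are stubs rather than real vendor integrations):

import java.util.Arrays;
import java.util.List;

// Proxy object: mirrors the vendor's queue data but references no vendor types.
class WorkflowQueueProxy {
  private final String name;
  WorkflowQueueProxy(String name) { this.name = name; }
  String getName() { return name; }
}

// Step 1: the Adapter interface every engine-specific adapter implements.
interface WorkflowEngineAdapter {
  List<WorkflowQueueProxy> getQueues() throws Exception;
  void transitionWorkflow(String requestId, String action) throws Exception;
}

// One Adapter per Vendor/Product (or Product-Version). A real adapter would
// translate between the proxy objects and the vendor's native API objects.
class InHouseEngineAdapter implements WorkflowEngineAdapter {
  public List<WorkflowQueueProxy> getQueues() {
    return Arrays.asList(new WorkflowQueueProxy("IN_HOUSE_INTAKE"));
  }
  public void transitionWorkflow(String requestId, String action) {
    System.out.println("In-house engine: " + requestId + " -> " + action);
  }
}

class StaffwareAdapter implements WorkflowEngineAdapter {
  public List<WorkflowQueueProxy> getQueues() {
    return Arrays.asList(new WorkflowQueueProxy("STAFFWARE_INTAKE"));
  }
  public void transitionWorkflow(String requestId, String action) {
    System.out.println("Staffware engine: " + requestId + " -> " + action);
  }
}

// Singleton Factory with a property-driven default adapter type.
class WorkflowAdapterFactory {
  private static final WorkflowAdapterFactory INSTANCE = new WorkflowAdapterFactory();
  private WorkflowAdapterFactory() {}
  static WorkflowAdapterFactory getInstance() { return INSTANCE; }

  WorkflowEngineAdapter getAdapter(String type) {
    if (type == null || type.trim().length() == 0) {
      type = System.getProperty("workflow.adapter.default", "INHOUSE");
    }
    return "STAFFWARE".equalsIgnoreCase(type)
        ? new StaffwareAdapter() : new InHouseEngineAdapter();
  }
}

// The Facade that client code calls; it hides the Factory and the Adapters.
public class WorkflowFacade {
  public static void transition(String requestId, String action) throws Exception {
    WorkflowAdapterFactory.getInstance().getAdapter(null)
        .transitionWorkflow(requestId, action);
  }

  public static void main(String[] args) throws Exception {
    transition("REQ-1001", "APPROVE"); // uses the default adapter
  }
}

Client code only ever sees the Facade and the proxy objects; swapping vendors, or adding a new version of a product, then becomes a new Adapter plus a Factory configuration change.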

This pretty much sums up my Adapter-Factory Design Pattern. I have used it in Production systems very heavily, and having to work with multiple vendor products that provide the same type of service within a single application has become a lot easier to deal with because of this design. I hope this pattern becomes a useful tool in your toolbox when designing and developing your own systems.

Final Note: If you think about it, JDBC itself is an Adapter-Factory!

Just Another Stream of Random Bits…
– Robert C. Ilardi
Posted in Architecture

How We Develop Software – Creating a New System/Application Platform

As a follow up to my last post on “SDLC Methodology Styles”, in this article we are going to discuss my method of creating a Foundation Platform for the development of a new System or Application.

It is my opinion that for a Software Development Manager to be successful, they need to be intimately involved in the creation of the foundation and frameworks that their development team will use going forward to create and expand the system. I believe in Leading by Example, and that means that your Team Leads and even you need to be involved not only in the architecture and design, but in the implementation itself. (Yes, that means I believe a good manager of a successful system needs to write code, at least at the very beginning.)

The key to building a successful system lies in the foundation of the code base and its overall organization. A successful Development Manager will require their team to adhere to specific design principles, third party tools, and an implementation strategy that the Development Manager needs to set at the very start of a new project. A project starts to crack once developers start injecting new frameworks or large scale third party mechanisms into the code base and the approved third-party library set, using 5% or less of what the library provides, just to build a new feature for the application. The reason a developer would normally do this is to gain experience with a product, so that they can add it to their resume. A successful Development Manager needs a trusted set of team leads to keep a watchful eye over what the developers check in to the code base, to avoid this Resume Building Code Base Pollution.

My own answer to developers who want to learn a new product is to do it on their own time. I’ll even allow developers to “evaluate” new products and libraries during work hours, so long as it’s during their down-time. My belief is that even in a highly active development project there are always periods of down-time for each developer, as that’s the nature of large teams and OOP in general.

Setting Design Strategy

At the very start of a new System or Application, there MUST be a design strategy set for each of the following areas:

  • Commons
  • Batch Processes
  • Standalone Daemon Processes
  • Middleware APIs
  • Messaging
    • Publishers
    • Listeners
  • User Interfaces (Depending on the project one or more of the following)
    • Web Applications
    • Mobile Platforms
    • Desktop Clients

But before we start talking about each of these development areas, I want to focus on what I feel is the MOST important aspect of designing an architecture. That is what I refer to as “Resource Management”.

Resource Management

What I consider “resources” that are critical to control and deal with across all aspects of an application are: Configurations, Database Connections, and Out-of-Band Communications.

Configurations can be anything from simple Name-Value Pair Properties to complex XML documents. The problem is that everyone usually needs to store configuration data, for a process or function to run correctly, outside of the binary itself. The issue I feel needs to be solved by a robust architecture is how Configurations are loaded by any process or component, ensuring that this method is easily reused across all tiers of an application.

I think Database Connections are self-explanatory; however, obtaining the connections is the critical point, which I feel must be consistent throughout an architecture. For example, in a multi-tier application, connections to databases can be obtained via Connection Pools in Enterprise Application Servers (Container Servers such as WebLogic, WebSphere, JBoss), or, when working outside of Containers, by working directly with Drivers or Driver Managers to obtain a direct connection to a database. I usually create an abstraction layer, so that a developer working on a new Middleware API or a Batch Process doesn’t know whether they are working with Connection Pools or Direct Connections. In the past a lot of developers created ODBC, ADO, ADO.net or JDBC wrappers that everyone used in a particular project. Because this was such a common process, a lot of open source solutions have popped up, such as iBatis/MyBatis, Hibernate, and other ORM tools.

Personally I’m not a fan of ORM tools, and I think applications with large or complex data models are better off writing direct SQL or Stored Procedures and interacting directly with the database via JDBC or ODBC, or DBI for Perl, etc. Usually one of my biggest requirements when I write a Job Description for a new hire for my own teams is that they know SQL and direct JDBC (without any ORM frameworks). But this is a topic for another article.

Finally, I believe a robust architecture that provides services for Resource Management includes a method for functions and/or components to send data between each other in an Out-of-Band manner. A lot of current scalable architectures call for Stateless designs, which usually means sending data from one component to another has to rely on method arguments and return parameters. However, sometimes to simplify the passing of data, we as developers want to naturally fall back on Class or Object Fields or “Global Variables”. This can cause scalability issues, or, if not designed carefully, multi-threading issues, especially when a developer is creating a component that will be used in an Application Server and threads are implied. A robust architecture can allow for “global”-like data to be transient, in that its lifetime relates to the call stack, but still shared safely between all different levels of the call stack and other components.

In my Architecture called the Data Services Framework, which we will cover in a future article, I combine all three of these key resources, Property Management, Database Connection Abstraction, and Out-of-Band data transport, into a single structure called a “Resource Bundle”, a new instance of which is passed to each business logic component when a new invocation on a middleware API occurs. I have also created something known as the “Standalone Resource Helper”, which allows processes running outside of a Container Server, such as Batch Processes or Standalone Daemons, to obtain an instance of a resource bundle, so both Middleware and Standalone processes can deal directly with resource bundles, instead of figuring out how to read and store properties and obtain database connections.
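
As a rough illustration of the idea (this is not the actual Data Services Framework code; the names here are made up), a Resource Bundle can be as simple as a single class bundling those three resources:

import java.sql.Connection;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch of a "Resource Bundle" (no relation to
 * java.util.ResourceBundle): one object handed to each business logic
 * component, bundling configuration, a database connection source, and an
 * out-of-band data map, so the component never needs to know whether it is
 * running inside a Container Server or as a standalone process.
 */
public class AppResourceBundle {

  /** Abstracts a container connection pool or a direct DriverManager connection. */
  public interface ConnectionSource {
    Connection getConnection() throws Exception;
  }

  private final Properties config;
  private final ConnectionSource connectionSource;
  private final ConcurrentHashMap<String, Object> outOfBandData =
      new ConcurrentHashMap<String, Object>();

  public AppResourceBundle(Properties config, ConnectionSource connectionSource) {
    this.config = config;
    this.connectionSource = connectionSource;
  }

  public String getProperty(String name) {
    return config.getProperty(name);
  }

  public Connection getConnection() throws Exception {
    return connectionSource.getConnection(); // pool or direct, the caller doesn't care
  }

  public void putOutOfBand(String key, Object value) {
    outOfBandData.put(key, value); // shared safely across the call stack
  }

  public Object getOutOfBand(String key) {
    return outOfBandData.get(key);
  }
}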

Setting a design strategy at the beginning of a system may take more time to get developers implementing the targeted Application at full speed, but it will ensure that the code produced by many different developers, each having their own unique approach and conventions for writing code, all contributes to a code base that is easily maintained, extended, and worked on by all individuals, including new team members in the future. It creates a code base that is Manageable.

Commons

Most developers, when they hear the word “Commons”, think Apache Commons or something similar. However, when I use the word and concept of “Commons” in my projects, I mean a separate module or directory in the Source Code Control Repository (SVN, GIT, CVS, etc) that contains common utilities and frameworks used by all other modules within the Application’s Source Tree. It can contain simple things like a custom “StringUtils” class of commonly reused String Manipulation functions, up to larger scale mechanisms such as SQL Result Set Paging Systems or Socket Wrapper Libraries. The goal of the Commons is to encourage the creation of reusable components, both large and small, by the entire development team, so that we have consistent implementations of varied business functions using a robust and maintained common component set that may be highly customized for a particular organization or project. I normally encourage my own developers to constantly look for opportunities to contribute to our Commons; if they see a function or component that is likely going to be written again for a separate business requirement, I ask that they try to create an abstract reusable component, customize it for their use case, and add it to the Commons’ source tree. The easiest example of this is String Utility functions or SQL Utility functions: I always ask that if you create an interesting utility method dealing with Strings or SQL Result Sets, SQL Statements, etc, you add it to the Commons instead of directly embedding it in your code.

Your own implementation of Resource Management, for me specifically my Resource Bundle Framework, is probably the first component that needs to be built, and it is the most important component of the Commons module of any project following my design strategy. When I first come onboard as an Architect, Head Development Lead, or Development Manager, getting this framework built is the very first thing I do when the development phase of a project begins. Usually I use the time between meetings with the Business Analysts, Users, and Project Management Office teams, during the phases before the actual development phase, to develop this component. BAs, Users, and sometimes even management won’t see direct value in developing a robust Resource Management implementation, so it is up to you as a Development Manager or Architect to ensure this component gets built; trust me, getting something like this on the project plan will save you a lot of future grief.

A robust Resource Management framework is the key to creating Stable, Scalable, Flexible, Extendible, and easily Maintainable Systems and Applications!

Batch Processes

Batch Processes are usually deployed on a backend Linux or Unix Server (although they can run on Windows as well) and are executed via a Scheduler. A simple one that every Unix programmer knows is Cron. There are also commercial and open source Schedulers, such as Computer Associates’ Autosys, that are much more robust and allow for small scripting languages (such as Autosys’s JIL). These enable developers not only to run jobs on a time-based schedule, but also to apply logic, such as detecting the failure or success of other jobs running from the scheduler and taking appropriate actions.

A Development Manager must design an approach to handling Batch Processes. In my mind, the first thing that must be done is creating an easy to follow startup procedure for every Process the developers will write. This may sound simple, but one of the worst things I have seen in my professional career is a medium to large development team with a different startup procedure for each individual team member. Usually half the problem in having a developer debug or maintain another developer’s batch process is figuring out how to start the thing. If you can’t get it running for a couple of days, you can’t even begin debugging, delaying a potentially critical release.

Enforcing that all batch processes use your Resource Management framework helps ensure that processes have similar startup procedures, since most startup procedures involve bootstrapping the process with configuration, database connections, etc.
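
As an illustration only (the property names and overall shape here are hypothetical, not my actual framework), a shared startup convention might look something like this, where every batch main() takes a single properties file path as its first argument and bootstraps identically:

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

// Sketch of a uniform batch startup convention: one properties file
// path on the command line, one way to load config, one way to obtain
// a database connection. The property keys are hypothetical.
public class BatchBootstrap {
  public static void main(String[] args) throws Exception {
    if (args.length < 1) {
      System.err.println("Usage: java BatchBootstrap <APP_PROPERTIES_FILE>");
      System.exit(1);
    }

    Properties props = new Properties();
    FileInputStream fis = new FileInputStream(args[0]);
    try {
      props.load(fis);
    } finally {
      fis.close();
    }

    // Every process obtains its database connection the same way, so
    // production support never has to reverse engineer a startup.
    Connection conn = DriverManager.getConnection(
        props.getProperty("db.url"),
        props.getProperty("db.user"),
        props.getProperty("db.password"));
    try {
      // ...invoke the actual business logic of this particular job here...
    } finally {
      conn.close();
    }
  }
}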

As a positive side effect of using something like Resource Bundles to pass around connections and configuration data, you will soon find that components such as Data Access Objects can be easily reused between both Batch Processes and Middleware Components.

Standalone Daemon Processes

The only real difference between a Batch Process and a Standalone Daemon Process, in my mind, is that a Batch Process usually runs on a schedule: it starts at a specific time, or at a combination of a specific time and an event occurring, and it stops once it finishes processing a finite set of data.

A Standalone Daemon Process, on the other hand, starts up at some point, say on a Sunday morning, and runs continually, processing data at random times as events occur, such as a message arriving on a queue or a file arriving in a public FTP/SFTP directory the process is watching. This process doesn’t stop unless the system owners choose to manually stop it or invoke some programmatic shutdown method intended to bring the process down for weekly or monthly server maintenance.

I’m not going to spend too much time on this section, because a Standalone Daemon should follow the same design strategy as a Batch Process, especially making use of the Resource Management framework and startup procedure. However, the one addition which I think a robust architecture must address is how a process “becomes” a daemon.

Usually it’s done via some type of event loop which never ends until some signal for shutdown occurs. Conventions can be set for this loop, as well as for the shutdown procedure, so that all Daemon Processes within a System work the same way. Like the startup procedure we spoke about in the Batch Process section, it’s all about maintenance. You don’t want to waste a lot of developer cycles trying to figure out how a daemon process remains running. Having a common convention and set of utilities, perhaps even abstracting the event loop itself, will ensure that any developer on your team, once familiar with a single Daemon Process, can work on any other daemon process in your system.
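
As a rough sketch of what abstracting the event loop might look like (the class name, sleep-interval trigger, and shutdown-hook signal are illustrative choices here, not a prescription):

// Sketch of an abstracted daemon event loop. The shutdown signal here
// is a JVM shutdown hook setting a volatile flag; a stop touch file or
// other convention could drive the same flag instead.
public abstract class AbstractDaemon implements Runnable {

  private volatile boolean shutdown = false;
  private final long sleepIntervalMs;

  protected AbstractDaemon(long sleepIntervalMs) {
    this.sleepIntervalMs = sleepIntervalMs;
  }

  // Subclasses implement only the business event processing.
  protected abstract void processEvents() throws Exception;

  public void requestShutdown() {
    shutdown = true;
  }

  public void run() {
    // Every daemon in the system stops the same way.
    Runtime.getRuntime().addShutdownHook(new Thread() {
      public void run() {
        requestShutdown();
      }
    });

    while (!shutdown) {
      try {
        processEvents();
        Thread.sleep(sleepIntervalMs); // timer-based event trigger
      } catch (InterruptedException ie) {
        shutdown = true; // treat interrupt as a shutdown signal
      } catch (Exception e) {
        e.printStackTrace(); // log and keep the daemon alive
      }
    }
    // ...graceful shutdown routine: close connections, flush state...
  }
}

With something like this in the Commons, each new daemon is just a subclass implementing processEvents(), and every daemon in the system runs and stops identically.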

Middleware APIs

In the Java world, it’s always easier to find good Core Java developers than JavaEE/J2EE developers. And in my opinion you have to be a good Core Java developer to be a JavaEE developer anyway. It always amuses me when a candidate on a technical interview prefixes an answer to a question about a core concept such as Collections by saying they’re “rusty” because they are a JavaEE developer… What does that even mean? Business Logic is always in Core Java! It makes no sense to call yourself a JavaEE developer. In fact, if you apply for a Java developer position and don’t consider yourself a Core Java developer, you need not apply (at least that’s my opinion)!

Ok, we got a little off topic, but what I stated above leads into my core design strategy for Middleware APIs. I like to implement an architecture that abstracts the developers from having to deal with any of the EJB, SOAP, or other RPC concepts of JavaEE. I do this again in my Data Services Framework architecture, but for now all you need to know is that I believe in creating an architecture that allows developers to focus 99% of their development time on implementing the business logic, the objective of the business requirements, not worrying about the plumbing.

Over the course of 10 years I have refined a design which allows developers to create and run Middleware APIs from unit test classes right out of an IDE such as Eclipse, without having to build and deploy the middleware to a container server such as WebLogic, and without remote debugging! Their code is automatically included in the build process, which deploys it to the container application server without a single line of code change! This is what my Data Services Framework does, and is exactly why I’m saving it for its own article.

A successful Development Manager MUST create an architecture or at least a design convention for each of their Middleware APIs to follow. This will simplify maintaining these APIs over time, and if done in a certain way, such as leveraging the Resource Management / Resource Bundle design I have mentioned in this article, a lot of code can be reused by non-middleware components.

Messaging – Publishing / Listening

There are two methods of creating publishers and listeners. One method, which I am strongly against, is writing publishers or listeners that are deployed as components within an Application Server. Instead, I mandate that all publishers and listeners (except Message Driven Beans) must be written as standalone daemon processes. This usually means there has to be some mechanism for transferring data from middleware APIs to publishers running as separate processes.

Most times this is done via an event table in the database, and the publisher process includes some type of database table poller, which constantly reads the event table looking for new events to send out as messages.
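
Here is a bare-bones sketch of the event table polling idea; the OUTBOUND_EVENT table, its columns, and the publish() placeholder are all hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Sketch of an event-table poller: read unpublished events, send them,
// mark them as sent. Any messaging API could sit behind publish().
public class EventTablePoller {

  private final Connection conn;

  public EventTablePoller(Connection conn) {
    this.conn = conn;
  }

  // One polling pass; the daemon's event loop would call this repeatedly.
  public void pollOnce() throws Exception {
    PreparedStatement select = conn.prepareStatement(
        "SELECT EVENT_ID, PAYLOAD FROM OUTBOUND_EVENT WHERE STATUS = 'NEW'");
    try {
      ResultSet rs = select.executeQuery();
      while (rs.next()) {
        long id = rs.getLong("EVENT_ID");
        String payload = rs.getString("PAYLOAD");

        publish(payload); // hand off to the actual messaging layer

        // Marking the row only after a successful publish is what makes
        // republishing possible during production support issues.
        PreparedStatement update = conn.prepareStatement(
            "UPDATE OUTBOUND_EVENT SET STATUS = 'SENT' WHERE EVENT_ID = ?");
        try {
          update.setLong(1, id);
          update.executeUpdate();
        } finally {
          update.close();
        }
      }
      rs.close();
    } finally {
      select.close();
    }
  }

  private void publish(String payload) {
    // Placeholder: send via JMS, a socket library, etc.
    System.out.println("Publishing: " + payload);
  }
}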

In the case of Listeners, it depends on whether you are using a listener as a device to update your database from upstream or source systems automatically in real-time without user intervention, or as an RPC (Remote Procedure Call) mechanism, allowing external systems to interact with your system components programmatically in real-time via messaging instead of an API approach like SOAP or RESTful Web Services. In either case I keep these listeners as external standalone daemon processes. In the case of the real-time database loader, there’s no question about how it works: it simply executes SQL, a Stored Procedure, or a DAO method each time a message arrives. In the RPC usage of a Listener, I treat the listener as a proxy to Middleware APIs; basically, my listener calls the API on behalf of the publishing client each time a new message arrives.
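
For the RPC-style listener, a minimal sketch using the standard JMS MessageListener interface might look like the following; the MiddlewareApi type is hypothetical, and the provider-specific connection and session setup is omitted:

import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Sketch of the "listener as middleware proxy" idea: the listener holds
// no business logic, it just invokes the middleware API on behalf of
// the publishing client for each arriving message.
public class RpcStyleListener implements MessageListener {

  private final MiddlewareApi api;

  public RpcStyleListener(MiddlewareApi api) {
    this.api = api;
  }

  public void onMessage(Message message) {
    try {
      if (message instanceof TextMessage) {
        String request = ((TextMessage) message).getText();
        api.invoke(request); // proxy the call into the middleware
      }
    } catch (Exception e) {
      e.printStackTrace(); // real code would route to error handling
    }
  }

  // Hypothetical middleware entry point.
  public interface MiddlewareApi {
    void invoke(String request) throws Exception;
  }
}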

The benefit of keeping publishers and listeners outside of the Application Server is that, in my experience, they are more stable and scalable. Especially in the case of persistent or reliable messaging, these types of publishers and listeners have Ledger files or some other type of non-volatile storage backing the in-memory queues so that messages are not lost. Occasionally these storage mechanisms get overloaded or otherwise corrupted, and recovery is usually a lot easier if you went through the pain of creating event tables, which let you republish outgoing messages or reprocess incoming messages when production support issues arise. There are also special considerations when your Application Container Servers are running in a multi-node clustered environment. Sometimes you have to bind your listener to a single node, and the fail-over procedure in that type of environment becomes much more complex. The same is true for publishers in a multi-node clustered environment: usually, to ensure the ordering of data, you need only a single sender publishing at any one time; so which node in the cluster publishes?

All this is removed by creating Publishers and Listeners as standalone processes. It’s sometimes a little more work upfront, but it’s worth it in the end.

Finally, since all Publishers and Listeners are forms of Daemon Processes, the event loop conventions and related utilities I mentioned in the section on Standalone Daemon Processes should be adhered to when developing these types of processes.

User Interfaces – Web Apps, Mobile Apps, Desktop Clients

I consider myself more of a Server Side Developer than a Client or UI Developer. However, you cannot discount User Interfaces when designing your system architecture. This is a fatal flaw I have seen in a lot of projects: the Managers start out on the Server or Batch side and look at the User Interface as a nice add-on for the users. Leaving the User Interface as an afterthought like this can cause you to mis-design other aspects of your application, such as the Middleware APIs and even the Data Model.

How I like to split the team is into a Server Side development branch, which builds Middleware, Batch, Database, etc., and a separate branch of the team for User Interfaces. The reason for this is that it takes a special set of skills to develop good User Interfaces; it’s somewhat of an Art rather than a science. Based on my professional and personal experience, you usually need to hire dedicated UI developers if you want your system to be a success. If the budget allows for it, I also feel you should hire Designers, separate from the actual UI developers, to design the templates and screen layouts used in the UI.

From an architectural standpoint, one of the most important aspects of the User Interface on Day One is the Client Library of the Middleware. I believe the middleware development team should wrap the middleware APIs in a client library written in the native language of the client. This is usually a thin Facade (or wrapper) around SOAP Stubs or RESTful Web Service calls.
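
As a simple sketch of such a facade (the service name, base URL, and endpoint path are hypothetical), a client library method might look like this, hiding the HTTP plumbing behind a plain Java method:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of a thin client facade over a RESTful middleware API, so UI
// code calls a native method and never touches transport details.
public class CustomerServiceClient {

  private final String baseUrl;

  public CustomerServiceClient(String baseUrl) {
    this.baseUrl = baseUrl;
  }

  // UI developers see a plain method signature, not HTTP plumbing.
  public String getCustomerProfile(String customerId) throws Exception {
    URL url = new URL(baseUrl + "/customers/" + customerId);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    try {
      BufferedReader in = new BufferedReader(
          new InputStreamReader(conn.getInputStream(), "UTF-8"));
      StringBuilder sb = new StringBuilder();
      String line;
      while ((line = in.readLine()) != null) {
        sb.append(line);
      }
      in.close();
      return sb.toString(); // real code would unmarshal to a domain object
    } finally {
      conn.disconnect();
    }
  }
}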

Like most modern front-end architectures, I believe in the N-Tier architecture, where you minimally have a Front-End, a Middleware, and a Database. All business logic, data access, and even validation logic (other than simple syntax validation) should be embedded in the Middleware; I call this being UI-Agnostic.

Being UI-Agnostic allows you to build multiple front-ends, such as a Web Application, a Desktop Client, and Mobile Apps for different mobile platforms, all leveraging the same middleware with little if any code duplication across the business logic, data access, and validation layers.

Also, although this is becoming less the common case and more the exception, since Server and Front-end environments are more heterogeneous than ever before, especially with the mobile platforms: if your front-end is written in the same language as your Middleware and Batch, I would enforce that the User Interface developers use the same Commons as the server side developers. This is easiest with Traditional Web Apps in the Enterprise, where you might have a Java middleware and Java-based web front-ends.

Creating Robust Enterprise Systems

What is a Robust Enterprise System? It is any system designed to be Stable, Scalable, Flexible, Extendible, and easily Maintainable (SSFEM). By creating an architecture and a common set of utilities at the very onset of your projects, you help ensure that you have a robust enterprise system. In future articles we will discuss specific architectural designs I believe enable Systems to be SSFEM. If you can do this in your career, you will not only be a successful Development Manager or Architect or Developer, but you will also have pride in your systems, which will be in use for many years, perhaps even decades, to come. The goal I always have is to design systems capable of lasting between 10 and 20 years. People may think that in these times, when technology is changing faster than any of us in the industry can keep up with, talking about systems that last this long is absurd; but if the systems you build are SSFEM, you will find that it is cheaper to extend the system to meet the needs of the business than for the business to replace the system.

Just Another Stream of Random Bits…
– Robert C. Ilardi

How We Develop Software – SDLC Methodology

What I want to discuss in this article is my own methodology for Software Development. I am going to do a few segments on this topic, but starting with this post, I want to specifically discuss how I believe the Requirements Gathering part of the SDLC (Software Development Lifecycle) process *should* be handled.

With this in mind, this article will go over the following key points:

  • Two Styles of SDLC
    1. Non-Interactive Process
    2. Interactive Process
  • My Preferred Method, which I dub “The RogueLogic Method”
  • Conclusion

Two Styles of SDLC

  • There are two “styles” of the Software Development Lifecycle (SDLC) that apply to Enterprise Software Development.
  • Either style can follow traditional Waterfall models or more modern Agile, Scrum, and other Iterative, quick time-to-market models.

Style One: Non-Interactive

  • The first style is what I refer to as the “Non-Interactive Process”, where representatives, usually labeled Business Analysts, act as a liaison between the actual Users (aka the Business Community) and the Development Team.
  • Requirements are defined by the business either explicitly through written examples or implicitly through walk-throughs of their day to day activities and it is the BA’s job to record these requirements into a format that is easily digested by the Development Team, which may or may not be familiar with the business domain themselves.
  • Priorities are worked out with the business for each requirement by the BA Team.
  • The Development Team then reviews the Requirements one by one with the Business Analyst team and works to create technology solutions for each requirement line by line, and each requirement is delivered according to the priorities given by the Business.
  • The issue with this approach is that without input from the Development Team on the requirements, systems are usually built in a fashion that is neither stable nor scalable, because it takes a lot of “hacking” just to make a requirement appear to be working as requested by the users.
  • Based on my professional experience, most systems built using this Style require either huge overhauls or complete re-engineering within a few short years.

Here’s a diagram of the Non-Interactive Requirements Process:

Style Two: Interactive

  • The second style is what I refer to as the “Interactive Process”.
  • The primary goal of the Interactive Process is to create a Partnership, where all parties involved, from the Business Users, to the Business Analyst Team, to the Development Team have an “Ownership” stake in the system or application which they are building and investing in.
  • The start of the process is exactly the same as Style One.
  • The process diverges from the first style at the point where the Business Analyst team engages the Development Team.
  • At this point, the real value of the Development Team’s IT knowledge and experience and perhaps even prior experience in the Business Domain, really starts to become useful to the process.
  • Requirements are treated as “requested” functions or features, and each function has an importance and priority assigned by the business.

Additional Considerations for the Interactive Process

  • It is the job of the Development Team to review each Requirement and based on various inputs, the requirements may be reworked, reordered, or deferred to future releases.
  • These inputs are:
    1. Human Resources
    2. Technology Resources (Servers, Disk Space, Network, etc)
    3. Time to Market Issues
    4. Current Technology Limitations or Capabilities
    5. Architectural Standards
    6. IT Cost Issues
  • Finally, each feature MUST be built in such a way that ensures the broader system or application is Stable, Scalable, Flexible, Extendible, and easily Maintainable.

Deferring Requirements, Phasing and De-scoping

  • To be clear, not a single requirement is de-scoped; however, based on the technology inputs, the Development Team will strongly suggest and even influence the decision to “defer” certain functions to future releases.
  • By deferring a function or feature, the development team buys time to resolve the technological hurdles that may have caused the requirement to be deferred in the first place.
  • Deferring a feature implies that we will have multiple phases or releases in the project. I usually think of these as “Versions”. And like all real-world software, it is natural for systems and applications to go through many releases over the years they are in Production. So this approach seems most natural.

Here’s a diagram of the Interactive Requirements Process:

The RogueLogic Method

  • The RogueLogic Method is to use the Interactive Process to build Software.
  • Software is broken down into Phases or Planned Major Versions, where deferred features and new requirements will be scheduled for future releases.
  • Also, I believe that good Development Teams know how systems “should” work, and therefore some features requested by the users may end up being put into the system in a radically different fashion than originally envisioned by the Business Community or Business Analysts. However, the original need is preserved and perhaps even enhanced to deliver more functionality.
  • The Interactive Process does take a lot of trust building, and a lot of time needs to be spent on getting buy-in from the business to allow for things like deferment and the possible rework of a solution proposed by the Business or Business Analysts.
  • Requirements are “refined” by the Users, Business Analysts, and Developers over time, through an iterative review process of the Requirements by both the Business Analysts and Developers.
  • In the end, it is my belief that the Interactive Approach, which is itself an iterative approach, is the right way to develop software in today’s world. Everyone adds value to the refinement of requirements, and the process of Phased Delivery produces a better product, especially when there is NO end-state and the product will continue to be enhanced as business needs evolve over a period of many years, as is the case with Enterprise Systems.

In conclusion, I believe the best approach, and my preferred approach, is the “Interactive Process” for Software Requirements Gathering and for releasing those Requirements to Production. I believe in Deferring Requirements, not De-scoping Requirements, by the Development Team. There MUST be a Phased Approach to Deliveries, and Refinement of Requirements through an iterative process involving the Users, Business Analysts, and Development Team is the only real sustainable way of creating medium to large scale enterprise systems and applications.

In my next article I would like to discuss the next segment of “How We Develop Software”, focusing on how I believe successful Architects and Software Development Managers start the Development process, going over things like “component-izing” the source code repository at a high level in order to create a sustainable code base for a long running enterprise class project.

I hope this article was helpful and as always would appreciate your comments and feedback!

Just Another Stream of Random Bits…
– Robert C. Ilardi

Hello World!

Hello and Welcome to EnterpriseProgrammer.com! My name is Robert Ilardi; I am a Director of Application Development in the Financial Services Industry in the New York City area. On my blog “Enterprise Programmer” I plan on publishing articles (hopefully on a weekly basis at first, and depending on feedback, perhaps eventually every couple of days) on all topics related to Enterprise Application Development and Architecture, with the aim of helping professional and aspiring software developers create and promote Stable, Scalable, Flexible, Extendible, and easily Maintainable solutions for enterprises of all industries and sizes.

I plan on growing EnterpriseProgrammer.com organically, so please visit back often for new and updated articles. I also appreciate any feedback you may have on both the articles and the site itself. Please feel free to contact me with your questions and feedback…

Hopefully along the way we’ll have some time for fun geeky articles on things like Tesla Coils and Chumby Hacker Boards… Until then, here’s a picture of me next to my first Tesla Coil, named “Thunderbolt”! Enjoy!

Robert and his "Thunderbolt" Tesla Coil

If you would like more information on my Tesla Coil, check out my Project Thunderbolt page on RogueLogic.com.

Just Another Stream of Random Bits…
– Robert C. Ilardi