How We Develop Software – Creating a New System/Application Platform

As a follow-up to my last post on “SDLC Methodology Styles,” in this article we are going to discuss my method of creating a Foundation Platform for the development of a new System or Application.

It is my opinion that for a Software Development Manager to be successful, they need to be intimately involved in the creation of the foundation and frameworks that their development team will use going forward to create and expand the system. I believe in Leading through Example, and that means that your Team Leads, and even you yourself, need to be involved not only in the architecture and design, but in the implementation itself (yes, that means I believe a good manager of a successful system needs to write code, at least at the very beginning).

The key to building a successful system lies in the foundation of the code base and its overall organization. A successful Development Manager will require their team to adhere to specific design principles, third-party tools, and an implementation strategy that the Development Manager sets at the very start of a new project. A project starts to crack once developers begin injecting new frameworks or large-scale third-party mechanisms into the code base and the approved third-party library set, using 5% or less of what the library provides, just to build a new feature for the application. The reason a developer normally does this is simply to gain experience with a product so that they can add it to their resume. A successful Development Manager needs a trusted set of team leads to keep a watchful eye over what the developers check into the code base, to avoid this Resume-Building Code Base Pollution.

My own answer to developers who want to learn a new product is to do it on their own time. I’ll even allow developers to “evaluate” new products and libraries during work hours, so long as it’s during their down-time. My belief is that even on a highly active development project there are always periods of down-time for each developer; it’s the nature of large teams and OOP in general.

Setting Design Strategy

At the very start of a new System or Application, there MUST be a design strategy set for each of the following areas:

  • Commons
  • Batch Processes
  • Standalone Daemon Processes
  • Middleware APIs
  • Messaging
    • Publishers
    • Listeners
  • User Interfaces (Depending on the project one or more of the following)
    • Web Applications
    • Mobile Platforms
    • Desktop Clients

But before we start talking about each of these development areas, I want to focus on what I feel is the MOST important aspect of designing an architecture: what I refer to as “Resource Management”.

Resource Management

The “resources” I consider critical to control and manage across all aspects of an application are: Configurations, Database Connections, and Out-of-Band Communications.

Configurations can be anything from simple Name-Value Pair Properties to complex XML documents. The common thread is that everyone usually needs to store the configuration data a process or function requires to run correctly outside of the binary itself. The issue I feel needs to be solved by a robust architecture is how Configurations are loaded by any process or component, and ensuring that this method is reused easily across all tiers of an application.
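
To make this concrete, here is a minimal sketch in Java of what such a reusable configuration-loading abstraction might look like. This is not the author’s actual framework; the interface and class names are hypothetical illustrations of the idea.

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    // Hypothetical abstraction: every tier loads configuration the same way,
    // regardless of whether it lives in a properties file, a database table,
    // or an XML document. (Each public type would live in its own source file.)
    public interface ConfigSource {
      String getProperty(String name);
    }

    // One possible implementation, backed by a simple properties file.
    public class FileConfigSource implements ConfigSource {
      private final Properties props = new Properties();

      public FileConfigSource(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
          props.load(in);
        }
      }

      public String getProperty(String name) {
        return props.getProperty(name);
      }
    }

Batch processes, daemons, and middleware components would all consume ConfigSource, so swapping the storage mechanism never touches business code.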

I think Database Connections are self-explanatory; however, obtaining the connections is the critical point, which I feel must be consistent throughout an architecture. For example, in a multi-tier application, connections to databases can be obtained via Connection Pools in Enterprise Application Servers (Container Servers such as WebLogic, WebSphere, JBoss), or, when working outside of Containers, by working directly with Drivers or Driver Managers to obtain a direct connection to a database. I usually create an abstraction layer, so that a developer working on a new Middleware API or a Batch Process doesn’t know whether they are working with Connection Pools or Direct Connections. In the past a lot of developers created ODBC, ADO, ADO.NET, or JDBC wrappers that everyone used in a particular project. Because this was such a common need, a lot of open source solutions have popped up, such as iBatis/MyBatis, Hibernate, and other ORM tools.
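
A minimal sketch of such an abstraction layer might look like the following; the names are hypothetical, with a JNDI DataSource lookup standing in for the container-managed pool and DriverManager for the direct path.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    // Hypothetical abstraction: callers ask for a Connection without knowing
    // whether it comes from a container-managed pool or a direct driver.
    public interface ConnectionProvider {
      Connection getConnection() throws SQLException;
    }

    // Inside a container: look up a pooled DataSource via JNDI.
    class PooledConnectionProvider implements ConnectionProvider {
      private final DataSource ds;

      PooledConnectionProvider(String jndiName) throws NamingException {
        ds = (DataSource) new InitialContext().lookup(jndiName);
      }

      public Connection getConnection() throws SQLException {
        return ds.getConnection();
      }
    }

    // Outside a container: go directly through the DriverManager.
    class DirectConnectionProvider implements ConnectionProvider {
      private final String url, user, password;

      DirectConnectionProvider(String url, String user, String password) {
        this.url = url;
        this.user = user;
        this.password = password;
      }

      public Connection getConnection() throws SQLException {
        return DriverManager.getConnection(url, user, password);
      }
    }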

Personally I’m not a fan of ORM tools, and I think applications with large or complex data models are better off writing direct SQL or Stored Procedures and interacting directly with the database via JDBC or ODBC, or DBI for Perl, etc… Usually one of my biggest requirements when I write a Job Description for a new hire on my own teams is that they know SQL and direct JDBC (without any ORM frameworks). But this is a topic for another article.

Finally, I believe a robust architecture that provides services for Resource Management includes a method for functions and/or components to send data between each other in an Out-of-Band manner. A lot of current scalable architectures call for Stateless designs, which usually means sending data from one component to another has to rely on method arguments and return parameters. However, sometimes, to simplify the passing of data, we as developers naturally want to fall back on Class or Object Fields, or “Global Variables”. This can cause scalability issues, or, if not designed carefully, multi-threading issues, especially when a developer is creating a component that will be used in an Application Server, where threads are implied. A robust architecture can allow for “global”-like data to be transient, with a lifetime tied to the call stack, but still shared safely between all levels of the call stack and other components.
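
For illustration, one well-known Java mechanism for giving “global”-like data a call-stack-scoped lifetime while remaining thread-safe in an application server is a ThreadLocal-backed context. This is only a sketch of the general idea; the author’s Resource Bundle approach, described next, achieves a similar effect by passing the bundle explicitly instead.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch: per-thread out-of-band data whose lifetime is tied to one
    // invocation. Any component on the call stack can read or write it,
    // and the finally block guarantees cleanup.
    public final class CallContext {
      private static final ThreadLocal<Map<String, Object>> DATA =
          new ThreadLocal<Map<String, Object>>();

      public static void open() {
        DATA.set(new HashMap<String, Object>());
      }

      public static void put(String key, Object value) {
        DATA.get().put(key, value);
      }

      public static Object get(String key) {
        return DATA.get().get(key);
      }

      public static void close() {
        DATA.remove(); // critical when threads are pooled by the container
      }
    }

    // Typical usage at the entry point of an invocation:
    //   CallContext.open();
    //   try { /* components anywhere below can share data */ }
    //   finally { CallContext.close(); }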

In my Architecture called the Data Services Framework, which we will cover in a future article, I combine all three of these key resources, Property Management, Database Connection Abstraction, and Out-of-Band data transport, into a single structure called a “Resource Bundle”, a new instance of which is passed to each business logic component when a new invocation on a middleware API occurs. I have also created something known as the “Standalone Resource Helper”, which allows processes running outside of a Container Server, such as Batch Processes or Standalone Daemons, to obtain an instance of a resource bundle. This way both Middleware and Standalone processes deal directly with resource bundles, instead of working out for themselves how to read and store properties and obtain database connections.
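
To give a feel for the shape of such a structure, here is a hypothetical sketch (not the actual Data Services Framework) that combines the three resources, reusing the ConfigSource and ConnectionProvider interfaces sketched earlier:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical "Resource Bundle": one object handed to every business
    // logic component, bundling property access, database connection
    // access, and an out-of-band data map.
    // Note: the name collides with java.util.ResourceBundle; it is kept
    // here only to match the article's terminology.
    public class ResourceBundle {
      private final ConfigSource config;            // property management
      private final ConnectionProvider connections; // connection abstraction
      private final Map<String, Object> outOfBand = new HashMap<String, Object>();

      public ResourceBundle(ConfigSource config, ConnectionProvider connections) {
        this.config = config;
        this.connections = connections;
      }

      public String getProperty(String name) {
        return config.getProperty(name);
      }

      public Connection getConnection() throws SQLException {
        return connections.getConnection();
      }

      public void putData(String key, Object value) {
        outOfBand.put(key, value);
      }

      public Object getData(String key) {
        return outOfBand.get(key);
      }
    }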

Setting a design strategy at the beginning of a system may take more time before developers are implementing the targeted Application at full speed, but it ensures that the code produced by many different developers, each with their own unique approach and conventions, contributes to a code base that is easily maintained, extended, and worked on by everyone, including new team members in the future. It creates a code base that is Manageable.

Commons

Most developers, when they hear the word “Commons”, think Apache Commons or something similar. However, when I use the word and concept of “Commons” in my projects, I mean a separate module or directory in the Source Code Control Repository (SVN, Git, CVS, etc.) that contains common utilities and frameworks used by all other modules within the Application’s Source Tree. It can contain simple things like a custom “StringUtils” class of commonly reused String manipulation functions, up to larger-scale mechanisms such as SQL Result Set Paging Systems or Socket Wrapper Libraries. The goal of the Commons is to encourage the creation of reusable components, both large and small, by the entire development team, so that we have consistent implementations of varied business functions using a robust and maintained common component set that may be highly customized for a particular organization or project. I normally encourage my own developers to constantly look for opportunities to contribute to our Commons; if they see a function or component that is likely to be written again for a separate business requirement, I ask that they try to create an abstract reusable component, customize it for their use case, and add it to the Commons source tree. The easiest example of this is String utility functions or SQL utility functions: I always ask that if you create an interesting utility method dealing with Strings, SQL Result Sets, SQL Statements, etc., you add it to the Commons instead of embedding it directly in your code.
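
A Commons entry can be as small as the following illustrative utility (invented for this post, not any specific library):

    // Example of a tiny Commons utility: shared String helpers that would
    // otherwise be re-implemented ad hoc all over the code base.
    public final class StringUtils {
      private StringUtils() {
        // static utility class; no instances
      }

      // Returns true if the string is null, empty, or only whitespace.
      public static boolean isBlank(String s) {
        return s == null || s.trim().length() == 0;
      }

      // Joins values with a separator, e.g. join(",", "a", "b") -> "a,b".
      public static String join(String sep, String... values) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < values.length; i++) {
          if (i > 0) {
            sb.append(sep);
          }
          sb.append(values[i]);
        }
        return sb.toString();
      }
    }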

Your own implementation of Resource Management (for me, specifically, my Resource Bundle Framework) is probably the first component that needs to be built, and it is the most important component of the Commons module of any project following my design strategy. When I first come onboard as an Architect, Head Development Lead, or Development Manager, building this framework is the very first thing I do when the development phase of a project begins. Usually I take the time between meetings with the Business Analysts, Users, and Project Management Office teams, during the phases before the actual development phase of a project, to develop this component. BAs, Users, and sometimes even management won’t see direct value in developing a robust Resource Management implementation, so it is up to you as a Development Manager or Architect to ensure this component gets built; trust me, getting something like this on the project plan will save you a lot of future grief.

A robust Resource Management framework is the key to creating Stable, Scalable, Flexible, Extendible, and easily Maintainable Systems and Applications!

Batch Processes

Batch Processes are usually deployed on a backend Linux or Unix server (although they can be on Windows as well), where they are executed via a Scheduler. A simple one that every Unix programmer knows is Cron. There are also commercial and open source Schedulers, such as Computer Associates’ Autosys, that are much more robust and include small scripting languages (such as Autosys’s JIL) that enable developers not only to run jobs on a time-based schedule, but also to apply logic, such as detecting the failure or success of other jobs running from the scheduler and taking appropriate action.

A Development Manager must design an approach to handling Batch Processes. In my mind, the first thing that must be done is creating an easy-to-follow startup procedure for each process the developers will write. This may sound simple, but the worst thing I have seen in my professional career is a medium to large development team with a different startup procedure for each individual team member. Usually half the problem in having a developer debug or maintain another developer’s batch process is figuring out how to start the thing. If you can’t get it running for a couple of days, you really can’t start the debugging process, delaying a potentially critical release.

Enforcing that all batch processes use your Resource Management framework helps to ensure that processes have similar startup procedures, as most startup procedures involve bootstrapping the process with configurations, database connections, etc.
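
As a sketch of what a uniform startup procedure might look like, the main method below boots through one shared helper. The StandaloneResourceHelper and the job name here are hypothetical, standing in for whatever bootstrap entry point your own framework provides.

    // Hypothetical uniform batch startup: every batch process boots the
    // same way, so any developer can start (and therefore debug) any
    // process on the team.
    public class NightlyPositionLoader {
      public static void main(String[] args) throws Exception {
        if (args.length < 1) {
          System.err.println("Usage: NightlyPositionLoader <config-file>");
          System.exit(1);
        }

        // Bootstrap configurations and connections through one shared helper.
        ResourceBundle resources = StandaloneResourceHelper.getResourceBundle(args[0]);

        try {
          new NightlyPositionLoader().run(resources);
        } finally {
          // release connections, flush logs, etc. (framework-specific)
        }
      }

      private void run(ResourceBundle resources) {
        // business logic goes here, using only the resource bundle
      }
    }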

As a positive side effect of using something like Resource Bundles to pass around connections and configuration data, you will soon find that components such as Data Access Objects can be easily reused between both Batch Processes and Middleware Components.

Standalone Daemon Processes

The only real difference between a Batch Process and a Standalone Daemon Process, in my mind, is that a Batch Process usually runs on a schedule: it starts at a specific time, or at a combination of a specific time and an event occurring, and it stops once it finishes processing a finite set of data.

In the case of a Standalone Daemon Process, the idea is that it starts up at some point, usually say on a Sunday morning, and runs continually, processing data at random times, depending on events such as a message arriving in a queue, or a file arriving in a public FTP/SFTP directory which the process is watching. This process doesn’t stop unless the system owners choose to manually stop it or invoke some programmatic shutdown method intended to bring the process down for weekly or monthly server maintenance.

I’m really not going to spend too much time on this section, because a Standalone Daemon should follow the same design strategy as a Batch Process, especially making use of the Resource Management framework and startup procedure. However, the one addition I think a robust architecture must have is a convention for how a process “becomes” a daemon.

Usually, it’s done via some type of event loop which never ends until some signal for shutdown occurs. This loop can have certain conventions set, as can the shutdown procedure, so that all Daemon Processes within a System work the same way. Like the startup procedure we spoke about in the Batch Process section, it’s all about maintenance. You don’t want to waste a lot of developer cycles trying to figure out how a daemon process remains running. Having a common convention and set of utilities, perhaps even abstracting the event loop itself, will ensure that any developer on your team, once familiar with a single Daemon Process, can work on any other daemon process in your system.
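
A minimal sketch of such an abstracted event loop, with a uniform shutdown convention (class and method names are illustrative, not from the author’s framework):

    // Sketch of a common daemon convention: one abstract event loop that
    // all standalone daemons share, with a single programmatic shutdown
    // path for maintenance windows.
    public abstract class DaemonProcess {
      private volatile boolean shutdown = false;

      // Called once per iteration; implementations block or poll for events.
      protected abstract void processNextEvent() throws Exception;

      public final void run() {
        while (!shutdown) {
          try {
            processNextEvent();
          } catch (Exception e) {
            e.printStackTrace(); // a real daemon would log and perhaps back off
          }
        }
        // shared cleanup conventions (close connections, etc.) go here
      }

      // Invoked by an operator tool, JMX hook, or signal handler to bring
      // the daemon down cleanly for weekly or monthly maintenance.
      public final void requestShutdown() {
        shutdown = true;
      }
    }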

Middleware APIs

In the Java world, it’s always easier to find good Core Java developers than JavaEE/J2EE developers. And in my opinion you have to be a good Core Java developer to be a JavaEE developer anyway. It always amuses me when a candidate on a technical interview prefixes an answer to a question about a core concept, such as Collections, by saying they are “rusty” because they are a JavaEE developer… What does that even mean? Business logic is always in Core Java! It makes no sense to call yourself a JavaEE developer. In fact, if you apply for a Java developer position and don’t consider yourself a Core Java developer, you need not apply (at least that’s my opinion)!

Ok, we got a little off topic, but what I stated above leads into my core design strategy for Middleware APIs. I like to implement an architecture that abstracts the developers from having to deal with any of the EJB, SOAP, or other RPC concepts of JavaEE. I do this again in my Data Services Framework architecture, but right now all you need to know is that I believe in creating an architecture that allows developers to focus 99% of their development time on implementing the business logic, the objective of the business requirements, not worrying about the plumbing.

Over the course of 10 years I have refined a design which actually allows developers to create and run Middleware APIs from unit test classes right out of an IDE such as Eclipse, without having to build and deploy the middleware to a container server such as WebLogic, and without remote debugging! Their code is automatically included in the build process, which deploys it to the container application server without a single line of code change! This is what my Data Services Framework does, and is exactly why I’m saving it for its own article.

A successful Development Manager MUST create an architecture, or at least a design convention, for each of their Middleware APIs to follow. This will simplify maintaining these APIs over time, and if done in a certain way, such as leveraging the Resource Management / Resource Bundle design I have mentioned in this article, a lot of code can be reused by non-middleware components.
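
As a trivial illustration of such a convention (this is not the Data Services Framework itself), every API implementation might be forced into a single shape that receives a resource bundle and knows nothing about EJB, SOAP, or the container:

    import java.util.Map;

    // Hypothetical middleware convention: the plumbing layer (EJB, SOAP,
    // REST, or a unit test harness) builds the ResourceBundle and the
    // parameter map, then dispatches to plain business logic classes
    // implementing this one interface.
    public interface MiddlewareApi {
      Object execute(ResourceBundle resources, Map<String, Object> params)
          throws Exception;
    }

Because the business logic depends only on this interface, the same class can be driven by a container or directly from a unit test.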

Messaging – Publishing / Listening

There are two methods of creating publishers and listeners. One method, which I am strongly against, is writing publishers or listeners that are deployed as components within an Application Server. Instead, I mandate that all publishers and listeners (except Message-Driven Beans) must be written as standalone daemon processes. This usually means that there has to be some mechanism for transferring data from middleware APIs to publishers running as separate processes.

Most times this is done via an event table in the database, and the publisher process includes some type of database table poller, which constantly reads the event table looking for new events to send out as messages.
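
A bare-bones sketch of such a poller, building on the DaemonProcess sketch from earlier; the table, columns, and send mechanics are invented for illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Sketch of a publisher daemon that drains an event table. The point
    // is the poll / publish / mark-sent cycle, not the specific SQL.
    public class EventTablePublisher extends DaemonProcess {
      private final ResourceBundle resources;

      public EventTablePublisher(ResourceBundle resources) {
        this.resources = resources;
      }

      protected void processNextEvent() throws Exception {
        try (Connection con = resources.getConnection();
             PreparedStatement ps = con.prepareStatement(
                 "SELECT EVENT_ID, PAYLOAD FROM OUTBOUND_EVENT WHERE STATUS = 'NEW'");
             ResultSet rs = ps.executeQuery()) {
          while (rs.next()) {
            publish(rs.getString("PAYLOAD"));      // hand off to the messaging layer
            markSent(con, rs.getLong("EVENT_ID")); // so it is never re-published
          }
        }
        Thread.sleep(5000); // poll interval; tune per system
      }

      private void publish(String payload) {
        // messaging-provider-specific send
      }

      private void markSent(Connection con, long eventId) {
        // UPDATE OUTBOUND_EVENT SET STATUS = 'SENT' WHERE EVENT_ID = ...
      }
    }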

In the case of Listeners, it really depends on whether you are using a listener as a device to update your database from upstream or source systems automatically in real-time without user intervention, or as an RPC (Remote Procedure Call) mechanism for external systems to interact with your system components programmatically in real-time via messaging, instead of an API approach like SOAP or RESTful Web Services. In either case I keep these listeners as external standalone daemon processes. In the case of the real-time database loader, there’s no question about how this works; it just executes SQL, a Stored Procedure, or a DAO method each time a message arrives. In the RPC usage of a Listener, I treat these as proxies to Middleware APIs: my listener calls the API on behalf of the publishing client each time a new message arrives.

The benefit of keeping publishers and listeners outside of the Application Server is that, in my experience, they are more stable and scalable. Especially in the case of persistent or reliable messaging, these types of publishers and listeners have things such as ledger files or some other type of non-volatile storage backing the in-memory queues so that messages are not lost. Occasionally these storage mechanisms get overloaded or otherwise corrupted, and recovery is usually a lot easier if you went through the pain of creating event tables, so you can republish outgoing messages or reprocess incoming messages when production support issues arise. There are also special considerations when your Application Container Servers are running in a multi-node clustered environment. Sometimes you have to bind your listener to a single node, and the fail-over procedure in that type of environment becomes much more complex. The same is true for publishers in a multi-node clustered environment: usually, to ensure the ordering of data, you need only a single sender publishing at any one time; so which node in the cluster publishes?

All this is removed by creating Publishers and Listeners as standalone processes. It’s sometimes a little more work upfront, but it’s worth it in the end.

Finally, since all Publishers and Listeners are forms of Daemon Processes, the event loop conventions, etc., which I mentioned in the section on Standalone Daemon Processes, should be adhered to when developing these types of processes.

User Interfaces – Web Apps, Mobile Apps, Desktop Clients

I consider myself more of a Server Side Developer than a Client or UI Developer. However, you cannot discount User Interfaces when designing your system architecture. This is a fatal flaw I have seen in a lot of projects, where the Managers start out on the Server or Batch side and look at the User Interface as a nice add-on to their system for the users. Leaving the User Interface as an afterthought like this can cause you to mis-design other aspects of your application, such as the Middleware APIs and even the Data Model.

I like to split the team into a Server Side development branch, which builds Middleware, Batch, Database, etc., and a separate branch of the team for User Interfaces. The reason for this is that it takes a special set of skills to develop good User Interfaces. It’s somewhat of an art rather than a science. Based on my professional and personal experience, you usually need to hire specific UI developers if you want your system to be a success. If the budget allows for it, I also feel you should hire Designers, separate from the actual UI developers, to design the templates and screen layouts used in the UI.

From an architectural standpoint, one of the most important aspects of the User Interface on Day One is the Client Library of the Middleware. I believe the middleware development team should wrap the middleware APIs in a client library in the native language of the client. This is usually a thin Facade (or wrapper) around SOAP Stubs or RESTful Web Service calls.
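
For instance, a thin client facade might hide the transport entirely. The endpoint, method, and wire format below are hypothetical, with a plain HTTP call standing in for generated SOAP stubs:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical client-library facade: UI code calls getCustomerName()
    // and never sees stubs, URLs, or wire formats.
    public class CustomerServiceClient {
      private final String baseUrl;

      public CustomerServiceClient(String baseUrl) {
        this.baseUrl = baseUrl;
      }

      public String getCustomerName(long customerId) throws IOException {
        URL url = new URL(baseUrl + "/customers/" + customerId + "/name");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
            new InputStreamReader(con.getInputStream(), "UTF-8"))) {
          return in.readLine(); // toy wire format: a plain-text body
        } finally {
          con.disconnect();
        }
      }
    }

If the middleware later moves from REST to SOAP (or vice versa), only this facade changes; every UI that consumes it stays untouched.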

Like most modern front-end architectures, I believe in the N-Tier architecture, where you minimally have a Front-End, a Middleware, and a Database. All business logic, data access, and even validation logic (other than simple syntax validation) should be embedded in the Middleware; I call this being UI-Agnostic.

Being UI-Agnostic allows you to build multiple front-ends, such as a Web Application, a Desktop Client, and Mobile Apps for different mobile platforms, all leveraging the same middleware with little, if any, code duplication in the business logic, data access, and validation layers.

Also, although this is becoming less the common case and more the exception, since server and front-end environments are becoming more heterogeneous than ever before, especially with mobile platforms: if your front-end is written in the same language as your Middleware and Batch, I would enforce that the User Interface developers use the same Commons as the server-side developers. This is easier to do with traditional Web Apps in the Enterprise, where you might have a Java middleware and Java-based web front-ends.

Creating Robust Enterprise Systems

What is a Robust Enterprise System? It is any system designed to be Stable, Scalable, Flexible, Extendible, and easily Maintainable (SSFEM). By creating an architecture and a common set of utilities at the very onset of your projects, you will help to ensure that you have a robust enterprise system. In future articles we will discuss specific architectural designs that I believe enable Systems to be SSFEM. If you can do this in your career, you will not only be a successful Development Manager, Architect, or Developer, but you will also take pride in your systems, which will be in use for many years, perhaps even decades, to come. The goal I always have is to design systems capable of lasting between 10 and 20 years. People may think that in these times, when technology is changing faster than any of us in the industry can keep up with, talking about systems that last this long is absurd; but if the systems you build are SSFEM, you will find that it is cheaper to extend them to meet the needs of the business than for the business to simply replace them.

Just Another Stream of Random Bits…
– Robert C. Ilardi
Posted in Software Management

How We Develop Software – SDLC Methodology

What I want to discuss in this article is my own methodology for Software Development. I am going to do a few segments on this topic, but starting with this post, I want to specifically discuss how I believe the Requirements Gathering part of the SDLC (Software Development Lifecycle) process *should* be handled.

With this in mind, this article will go over the following key points:

  • Two Styles of SDLC
    1. Non-Interactive Process
    2. Interactive Process
  • My Preferred Method, which I dub “The RogueLogic Method”
  • Conclusion

Two Styles of SDLC

  • There are two “styles” of the Software Development Lifecycle (SDLC) that apply to Enterprise Software Development.
  • Either style can follow traditional Waterfall models or more modern Agile, Scrum, and other Iterative, quick time-to-market models.

Style One: Non-Interactive

  • The first style is what I refer to as the “Non-Interactive Process”, where representatives, usually labeled Business Analysts, act as liaisons between the actual Users (aka the Business Community) and the Development Team.
  • Requirements are defined by the business, either explicitly through written examples or implicitly through walk-throughs of their day-to-day activities, and it is the BA’s job to record these requirements in a format that is easily digested by the Development Team, which may or may not be familiar with the business domain itself.
  • Priorities are worked out with the business for each requirement by the BA Team.
  • The Development Team then reviews the Requirements one by one with the Business Analyst team and works to create technology solutions for each requirement line by line, and each requirement is delivered according to the priorities given by the Business.
  • The issue with this approach is that without input from the Development Team on the requirements, systems are usually built in a fashion that is neither stable nor scalable, because it takes a lot of “hacking” just to make a requirement appear to be working as requested by the users.
  • Based on my professional experience, most systems built using this style require either huge overhauls or complete re-engineering within a few short years.

Here’s a diagram of the Non-Interactive Requirements Process:

Style Two: Interactive

  • The second style is what I refer to as the “Interactive Process”.
  • The primary goal of the Interactive Process is to create a Partnership, where all parties involved, from the Business Users, to the Business Analyst Team, to the Development Team have an “Ownership” stake in the system or application which they are building and investing in.
  • The start of the process is exactly the same as Style One.
  • The process diverges from the first style at the point where the Business Analyst team engages the Development Team.
  • At this point, the real value of the Development Team’s IT knowledge and experience and perhaps even prior experience in the Business Domain, really starts to become useful to the process.
  • Requirements are treated as “requested” functions or features, and each function has an associated importance and priority assigned by the business.

Additional Considerations for the Interactive Process

  • It is the job of the Development Team to review each Requirement; based on various inputs, the requirements may be reworked, reordered, or deferred to future releases.
  • These inputs are:
    1. Human Resources
    2. Technology Resources (Servers, Disk Space, Network, etc)
    3. Time to Market Issues
    4. Current Technology Limitations or Capabilities
    5. Architectural Standards
    6. IT Cost Issues
  • Finally, each feature MUST be built in such a way that ensures the broader system or application is Stable, Scalable, Flexible, Extendible, and easily Maintainable.

Deferring Requirements, Phasing and De-scoping

  • To be clear, not a single requirement is de-scoped; however, based on the technology inputs, the Development Team will strongly suggest, and even influence, the decision to “defer” certain functions to future releases.
  • Deferring a function or feature buys time for the development team to resolve the technological hurdles that caused the requirement to be deferred in the first place.
  • Deferring a feature implies that we will have multiple phases or releases in the project. I usually think of these as “Versions”. And like all real-world software, it is natural for systems and applications to go through many releases over the years they are in Production. So this approach seems most natural.

Here’s a diagram of the Interactive Requirements Process:

The RogueLogic Method

  • The RogueLogic Method is to use the Interactive Process to build Software.
  • Software is broken down into Phases or Planned Major Versions, where deferred features and new requirements will be scheduled for future releases.
  • Also, I believe that good Development Teams know how systems “should” work, and therefore some features requested by the users may end up being put into the system in a radically different fashion than originally envisioned by the Business Community or Business Analysts. However, the original need is preserved and perhaps even enhanced to deliver more functionality.
  • The Interactive Process does take a lot of trust building, and a lot of time needs to be spent getting buy-in from the business to allow for things like deferment and the possible rework of a solution proposed by the Business or Business Analysts.
  • Requirements are “refined” by the Users, Business Analysts, and Developers over time, through an iterative review process of the Requirements by both the Business Analysts and Developers.
  • In the end, it is my belief that the Interactive Approach, which is itself an iterative approach, is the right way to develop software in today’s world. Everyone adds value to the refinement of requirements, and the process of Phased Delivery of software produces a better product, especially when there is NO end-state and the product will continue to be enhanced as the business needs evolve over a period of many years, as is the case with Enterprise Systems.

In conclusion, I believe the best approach, and my preferred approach, is the “Interactive Process” for Software Requirements Gathering and for releasing those Requirements to Production. I believe in the deferring of requirements, not the de-scoping of requirements, by the Development teams. There MUST be a Phased Approach to Deliveries, and Refinement of Requirements through an iterative process involving the Users, Business Analysts, and Development Team is the only real sustainable way of creating medium to large scale enterprise systems and applications.

In my next article I would like to discuss the next segment of “How We Develop Software”, except we will focus on how I believe successful Architects and Software Development Managers start the development process, going over things like “component-izing” the source code repository at a high level in order to create a sustainable code base for a long-running, enterprise-class project.

I hope this article was helpful and as always would appreciate your comments and feedback!

Just Another Stream of Random Bits…
– Robert C. Ilardi
Posted in Software Management

Hello World!

Hello and Welcome to EnterpriseProgrammer.com! My name is Robert Ilardi; I am a Director of Application Development in the Financial Services Industry in the New York City area. On my blog “Enterprise Programmer” I plan on publishing articles (hopefully on a weekly basis at first and, depending on feedback, perhaps eventually every couple of days) on all topics related to Enterprise Application Development and Architecture, with the aim of helping professional and aspiring software developers create and promote Stable, Scalable, Flexible, Extendible, and easily Maintainable solutions for enterprises of all industries and sizes.

I plan on growing EnterpriseProgrammer.com organically, so please visit back often for new and updated articles. I also appreciate any feedback you may have on both the articles and the site itself. Please feel free to contact me with your questions and feedback…

Hopefully along the way we’ll have some time for fun geeky articles on things like Tesla Coils and Chumby Hacker Boards… Until then, here’s a picture of me next to my first Tesla Coil, named “Thunderbolt”! Enjoy!

Robert and his "Thunderbolt" Tesla Coil

If you would like more information on my Tesla Coil, check out my Project Thunderbolt page on RogueLogic.com.

Just Another Stream of Random Bits…
– Robert C. Ilardi
Posted in Programming General

About Enterprise Programming

What is Enterprise Programming?

Enterprise Programming is when a programmer is hired to develop applications for a corporation or other large organization whose primary business is NOT Software Development. Every industry employs thousands upon thousands of programmers to develop both internal applications for use by employees and externally facing applications, such as Web Applications, for use by their customers. In either case the software is specific to the organization’s business domain. Programmers may either be direct employees of the company, or they might work for a consulting firm hired by the company to develop software for its business requirements. For example, a Bank or Financial Services company will employ programmers to develop everything from Trading Systems to Reference Data and Reporting applications. Manufacturing firms, as another example, may hire programmers to develop Just-In-Time inventory supply chain management applications.

Why do companies and other large organizations develop in-house software instead of simply buying software packages from Software firms?

Of course large organizations do purchase commercial or retail software, and they also make use of Open Source products. However, some tasks are very specific to a particular business, and Software companies and open source projects either do not have the domain knowledge needed to develop customized solutions for every business, or, even if they do have a base solution, it often does not meet the exact needs of a specific company. Some Software Development groups in the Enterprise setting will work on extending and customizing commercial or open source applications to fit the needs of their own specific company.

In other cases, information and processes are extremely proprietary to a single company, which sees its business process as an edge over its competitors. Very specific software is therefore written around those proprietary business processes, formulas, data, etc., and companies do not want to take the chance that an external software development firm could replicate that process for other customers.

Also, in today’s world, time to market is extremely important in all businesses, and the ability to customize software as soon as a business need arises is extremely critical to most modern businesses. Having in-house programmers who can continue to extend existing systems quickly is essential to compete in today’s marketplace. When a system is built in-house, the programming teams gain an ever-increasing understanding of the business processes and needs of the business. It is impossible for external software development vendors to have the same level of understanding of a particular industry as an in-house software development team for every single industry. Even if a software vendor targets a specific industry by hiring Subject Matter Experts (SMEs), they usually develop a very specific solution for one of their customers and then try to customize it to meet the needs of other customers within that same industry. Depending on the flexibility of the software and the demands of the original customer, the vendor software package may or may not fit the needs of other companies without a huge investment in customizations, which sometimes makes the software unstable or otherwise too complex and costly to maintain.

In the end it is up to each application or IT area owner within a company to decide whether to buy and customize a vendor solution or build a solution in-house. Today most businesses have both vendor solutions and in-house solutions, depending on the needs and finances of the organization.

What topics will be discussed on the Enterprise Programmer Blog?

On the Enterprise Programmer Blog you will find articles that discuss both the challenges Enterprise Programmers face and interesting solutions to those problems. We will discuss everything from design patterns, to specific programming topics, to large-scale architectures. The target audience is professional programmers, system architects, business analysts, and IT project managers. This blog will also be a valuable resource for any student of programming who wishes to become a professional programmer for a large enterprise.

Notice: Please note that all designs, suggestions, code, and other writing on this web site are my personal opinion and not the opinion of my employers, and all the Intellectual Property and other information is from my Personal Projects and experiences outside of my professional work.

Posted in Enterprise Programmer

The Non-Java Programmers Guide to Java

What is it? Back in 2008, right after Lehman Brothers went bankrupt, I created this programming guide to help some co-workers, who were programmers but not Java programmers, learn Java for potential job searches. I just came across it in a backup and thought it might be useful to post.

Links to the Guide formerly hosted on RogueLogic.com:

Intended Audience

The intended audience for this guide on the Java Programming Language is experienced programmers who want a quick, concise read to get them started with the Java Programming Language. This guide does NOT assume you have any knowledge of Java or Object Oriented Programming.

At the end of this guide you should be able to start reading and modifying other people’s code, as well as start creating your own programs in Java from scratch. What this guide will NOT do is make you an expert in Java. You should read other documentation and books on Java, practice programming in Java, and perhaps even take a training course or two, if you want to become a “senior” Java developer.

It is the author’s opinion that the only real way to become a highly productive programmer in any language, including Java, is to work with it on a frequent basis, either at your job (as a professional programmer already, who wants to move to the Java language to participate in Java development projects within your company), or for programming projects at school and at home.

Syllabus

Lesson 1 – My First Java Program

  • The simplest Java Program. Outputs to the screen “Hello World” (see the example after this list)
  • How to write a Java Program
  • Compile a program from the command line
  • Run a program from the command line
  • What is the classpath?
  • What is a Package?
  • Environmental Settings for Java
  • Same on Windows or Unix/Linux: JAVA_HOME, PATH, CLASSPATH
  • What is a JAR File?
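
For reference, the canonical first program from this lesson looks like this:

    // HelloWorld.java
    // Compile from the command line with: javac HelloWorld.java
    // Run with:                           java HelloWorld
    public class HelloWorld {
      public static void main(String[] args) {
        System.out.println("Hello World");
      }
    }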

Lesson 2 – Procedural Programming in Java?
This shows you that you can actually just create a single Java class with “functions”.
You should never program in the real world like this.

Lesson 3 – Classes and Objects

  • Basic Object Oriented Programming (OOP) Guide
  • What is OOP?
  • What is a Class?
  • What is an Object?
  • What’s the difference between a class and an object?
  • Java Class verses Java Interface

Lesson 4 – Built In Java Data Types

  • Two Types of Data Types in Java: Primitives and Objects
  • What’s the difference?
  • Primitive Types: int, float, double, char, long, byte, boolean, short
  • Included Object Data Types: String, Date
  • Primitive Wrapper Objects: Integer, Float, Double, Character, Long, Byte, Boolean, Short
  • In some cases you need to pass around a variable such as an int as an object. In this case you would use Wrapper Types.
  • For “MOST” Java programs you can usually ignore wrapper types, unless you want to store a primitive in a Collection (See the Lesson on Collections for details).
  • Notice there’s one wrapper for each primitive type; in most cases the wrapper has the same name as the primitive, just with a capital first letter (int/Integer and char/Character being the exceptions). In Java, everything is CASE-SENSITIVE.
  • Wrapper Types are used to “wrap” a primitive in an Object. For example, you would wrap an “int” variable in an Integer object (see the short example after this list).
  • Type Casting
  • Arrays
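
Here is a short illustration of primitives, wrapper objects, type casting, and arrays (an example added for this post; the original lesson pages are not reproduced here):

    public class DataTypesDemo {
      public static void main(String[] args) {
        int count = 42;                         // primitive
        Integer boxed = Integer.valueOf(count); // primitive wrapped in an object
        int unboxed = boxed.intValue();         // back to a primitive

        double price = 9.99;
        int truncated = (int) price;            // type cast: truncates to 9

        int[] numbers = { 1, 2, 3 };            // an array of primitives

        System.out.println(boxed + " " + unboxed + " "
            + truncated + " " + numbers.length);
      }
    }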

Lesson 5 – Operators, Loops and Logic Statements (Control Statements)

Lesson 6 – Collections

  • Lists: ArrayList / Vector
  • Maps: HashMap / Hashtable

Lesson 7 – Exceptions

Lesson 8 – JDBC (Java Database Connectivity)

  • How do we connect to a database in Java?
  • Basic JDBC Programming from a Command Line Java Program (a minimal example follows).
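
Here is a minimal command-line example of that flow; the driver URL, credentials, and table are placeholders for whatever database you use:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Minimal JDBC flow: connect, query, iterate, clean up.
    public class JdbcDemo {
      public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
            "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "password"); // placeholder URL
        try {
          Statement stmt = con.createStatement();
          ResultSet rs = stmt.executeQuery("SELECT NAME FROM CUSTOMER");
          while (rs.next()) {
            System.out.println(rs.getString("NAME"));
          }
          rs.close();
          stmt.close();
        } finally {
          con.close();
        }
      }
    }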

Just another stream of Random Bits…

-Robert C. Ilardi

Posted in Development

CD-2-Fast

This started way back in high school (around 1995), when my friends and I were attending Xaverian High School. A bunch of us used to meet early around 7AM in the cafeteria for breakfast and to work on our homework, every morning for the entire 4 years we were in high school…

One of the guys at our table was talking about how Quad-Speed CD-ROMs “Spin too Fast”… As you can see, this is really dated, based on the top speed of CD-ROMs at the time.

Anyway it started this big discussion between three guys, Chris, Anthony, and myself. I am still good friends with Anthony today, and we actually work together. The three of us were the big computer guys at our table, so this discussion was pretty typical for us.

Anyway, after a week of arguing that 4X CD-ROMs were NOT too fast, as Chris insisted they were, I drew this pretty poor GIF in Paint Shop Pro, probably version 1.x…

CD-2-Fast

Much, much later, when we were all in college, I came across this file on one of my backup CDs and emailed it out to the guys…

After some laughs, and “Fuck You Rob’s”, Anthony created the Flash version since he was taking a Multi-Media course or something…

Anyway, I converted the original Flash movie into an MP4 for your viewing pleasure…

Just Another Stream of Random Bits…
– Robert C. Ilardi
Posted in Personal

Yoga Nidra and the Fountain of Knowledge…

Today, I completed my first session of Yoga Nidra. I have actually meditated a lot over the years, but I have never studied or tried any Yoga-based meditation techniques. The first time I meditated was in the 7th or 8th grade, when my class from St. Athanasius (a Catholic school in Brooklyn, NY) went on a trip to Mount Manresa, a Christian retreat in Staten Island. The brothers there guide groups in meditation through “Guided Christian Imagery” (see Christian Meditation). After my first meditation, I was fascinated with the practice. Throughout high school, I visited Mount Manresa at least once a year with my school, Xaverian High School (a Christian Brothers school in Brooklyn). I also experimented with meditation, diverging from Christian Meditation by using my own imagery, but reusing the deep, focused breathing I learned from Christian Meditation. I refined my techniques throughout college through reading and the internet.

Later on, after college, I also participated in Guided Imagery meditations in private “workshops”. Throughout my adult life, I have had bouts of insomnia off and on. Since the collapse of Lehman Brothers, I have experienced some intense insomnia over the past 6 months. Someone recommended that I look into Yoga. At the same time, I was already reading about various techniques on the Internet to control insomnia. Yoga has consistently come up at the top of searches and articles as a method to help control and eventually “cure” insomnia. So I decided to pick up a couple of Yoga Guided Meditation CDs and books on the subject as well. Yoga Nidra pretty much means “yogic sleep” or “sleep of the yogis”, so it seemed like the perfect type of Yoga for me to study. This evening I went through the first session, and afterwards I have to say I felt very relaxed.

I’m looking forward to continuing to refine my meditation techniques, including Yoga Nidra, over the months and years to follow. This brings me to “the fountain of knowledge” in the title of this post. Recently my meditations have led me to an image of a stream of energy which I use as a representation of my thoughts. Since my mind races with a million different thoughts at night when I try to go to sleep, I imagined each thought as a tiny stream of light or energy, like a lightning bolt. I would focus on collecting as many of these streams of light as possible and try to focus them into a single stream, which I would then attempt to quiet and control. It actually seemed to be the right imagery to help me fall asleep on many occasions lately. I have had problems focusing my thoughts lately during meditation, which has made me frustrated. A common image used to relax oneself in meditation is to go to a place where you feel at peace. This can be a field, or a childhood house, or anywhere else you feel safe, secure, and happy. I sometimes used my grandparents’ house in Brooklyn, where I grew up, but lately I found that I wasn’t “at peace” when I returned there in my thoughts during meditation. I even questioned whether the house I grew up in was beneficial to my development as an intelligent person (for those who know me, I hold intelligence extremely high in importance).

Tonight after my session of Yoga I went for a walk through the community I live in. I was reflecting on my first Yoga Nidra meditation and the previous meditations I have done myself. The stream of thought energy came up, and I started to analyze it. I imagined that at the beginning of the stream was a fountain in the ground. The ground was my grandparents’ lawn, and the light was coming up from the earth. It was an interesting image to come to. It reminded me how much I learned from my grandfather, and how much I learned by experimenting with chemistry sets, Legos, computers, electronic kits, radios, and model rockets, all within my grandparents’ house and on their property. I realized something that I always knew: everything I have material-wise is due to my abilities at programming, and I first learned programming at the age of seven, sitting in my bedroom at my grandparents’ house. My grandfather always encouraged me to learn more, do better in school, and work hard. He always took an interest in my programming and other scientific projects, even if he didn’t fully understand them. So now, finally, after a couple of months of questioning and having a hard time returning to my grandparents’ house in my mind as a relaxing, peaceful place of reflection, I have resolved all those doubts, and I can return there in my meditations if needed.

It has been a very interesting and deep day…

Just Another Stream of Random Bits…

– Robert C. Ilardi

Posted in Philosophy

Intelligent Self-Aware Machines and the Human Race

Today, Wednesday, June 4, 2003, I watched the DVD The AniMatrix. It’s nine short Anime films about the world of The Matrix from the brothers Larry and Andy Wachowski, the writers and directors of The Matrix. Watching the second and third films, titled “The Second Renaissance” Parts I and II, I asked the question which many people, and obviously the Wachowski brothers, have asked themselves… “Will we ever go too far?”

Even the remake of the movie “The Time Machine” asked the exact same question. Will we ever go too far with our technology, so that it causes the end of human civilization and perhaps the extinction of the human race? We already have the technology to do it today, thousands of times over. But will it ever come to the point, as in The Matrix and other films such as Terminator, where humans give birth to a new intelligent race of machines, and, because of fear and human hatred, once these machines become a society or part of our society, we try to destroy them? And if this day of shame for the human race ever comes, how will the machines react? It is logical to survive, and if they are not only intelligent but actually conscious, that is, if they know they are in a way “alive,” will they want to survive?

Obviously this is something from science fiction and pop culture today; however, the age of the “spiritual” machine is upon us. (Please see: The Age of Spiritual Machines: When Computers Exceed Human Intelligence by Ray Kurzweil.) Computers today can perform billions of calculations in a single second. This is still slow when compared to the human brain, because the computer is digital, simply a bunch of zeros and ones, whereas the human brain is analog, and when it processes something such as an image, it is truly that image which is stored and processed. Computers must first translate everything, including images, into large groups of patterns of zeros and ones called binary. Sure, simple, extremely repetitive tasks such as adding can be done billions of times faster than by a human brain, but for complex operations, such as the everyday things humans do even in their sleep, a computer’s brain, its CPU (Central Processing Unit), will grind to a halt trying to process a single second of what the human brain does constantly without us even giving it a second thought.

Before we continue, it is a good idea to review what a Computer Program is. A Computer Program is a set of instructions that tells a computer exactly what to do. These instructions eventually become extremely simple little operations, such as load a value into a memory location, or add the values stored in two memory locations together and store the result in a third, or perhaps move to a new memory location and get its value. We have simple logical operations such as equals or greater-than. This might seem like intelligence, the ability to determine if two items are equal. But a computer can only compare numbers. Is 1 equal to 2? In the end it is just switches and electricity. The number one has a certain electrical characteristic which is different from that of the number 2; through various electronic techniques, these differences in electricity translate into a third electrical signal, which tells us whether 1 is equal to 2. There is no intelligence here, simply switches and electricity, no different from having millions of light switches on your wall that you flip off and on to mean different things.

So a computer must be told exactly what you want it to do in the form of a program, which is made up of hundreds, even thousands, some even millions, of the tiniest steps to solve some problem. The steps available are called instructions and are built into the CPU microchip. (Everyone is familiar with Intel’s Pentium family of processors (CPUs).)

Are we even close to an intelligent, self-aware machine? Well, no, we aren’t, not yet… Take, for example, learning… This is what computer scientists call Artificial Intelligence: the ability of a computer program to adapt to new inputs into the system and return some result without the process having to be told every little step to get from inputs A and B to result C. Normally a computer programmer writes a program telling the computer each and every little step of execution to get from A and B to C. Computers, even ones with “Artificially Intelligent” programs, are still not even close to what would be necessary for self-awareness.

Take, for example, walking up and down the stairs. For a human, learning how to climb up and down the stairs is pretty natural. If you have ever observed a little baby, you always need to put up gates or at least watch the staircases, because they will always attempt to walk up and down them with ease. Once they can go up the stairs, coming back down doesn’t take too much longer, if any time at all, for them to learn. For a computer this is very different. Traditionally, if you wanted a computer to “understand” how to climb up a staircase, first you would have to explain in overwhelming detail where the stairs are located. Then you have to describe to it how to get from where it is standing to the stairs. Then you have to describe how to lift its first leg and place it down on the first step in the series of steps. Then you have to do the same for the leg still left on the ground or on the previous step. If you could manage to have the computer move up the stairs, describing exactly how to keep its balance while it climbs would be a great help, or it will fall. Once it reaches the last step, you have to make it understand that there are no more steps and it must stop climbing. Yes! Finally done teaching the computer how to climb up the stairs; you just wrote a very complex computer program, step by step.

Well, the computer made it up the stairs; now what should it do? A computer will just stand still at the top of the stairs, waiting for the next program to execute to tell it what to do next. Maybe you want it to come back down the stairs. But our computer doesn’t know how, because it only knows how to climb up, not down. Remember, the leg motion for climbing up is not the same as for climbing down, not to mention the balancing is a lot different as well. And if we did want it to climb down the stairs, not only would we have to write the climb-down-the-stairs program, we would have to first tell the computer to execute or “start” the climb-down-the-stairs program. Hopefully the program is smart enough that the computer will turn around first and find the steps before it starts moving its legs in the fashion used to climb down stairs! If not, it will do so in place and probably fall over!

Now, if we used an artificial intelligence programming language such as Prolog for the computer to climb up or down the stairs, it would try all possible combinations of climbing up or down the stairs, and most likely it would fall down the stairs, and we would have one very damaged, no-longer-working computer before it found a single combination to climb down the stairs. Another way would be to use a Neural Net, which “learns” to do a task; however, how it actually learns is a problem. It cannot learn by example, since programs to learn by visual example have not been written for this extremely complicated motion of the human body (although we take it for granted). By the time you teach a computer to climb up and down a staircase, plus everything that goes with that, such as finding the stairs to climb, you will probably be able to write a more traditional program that can do it. Some Neural Nets are complicated enough these days to drive a car safely on a road. However, driving a car is a much easier motion than climbing up and down a staircase; having wheels makes things a lot easier, though other problems, like steering, present big challenges as well. Even with this level of sophistication, we are still nowhere near an intelligent machine that is self-aware. What I mean by this is that a computer program, no matter how sophisticated, will always be limited to executing the tasks it was designed to do, and only what it was designed to do. A computer will never come up with a good idea, juggle many tasks at once (and with operating systems like Windows, with multitasking, it can do multiple things at once very well, as long as it was told what to do and how to do it in detail!), and pursue that new good idea because it wants to. It has no wants, nor any imagination to determine what it wants to do next! It can only do what it is supposed to do!

Enough of our history lesson on computer intelligence to date; we are getting a little too far off topic. Will we ever go too far with our technology, our science, and our computers? I think we will someday, and when that day comes, we will have to make a choice. Will we take that advancement in science and technology and further the human race, or will we use it, or force it, to destroy ourselves? It will be the difference between the end of human civilization and the eventual extinction of the human race, or it will lead us to Star Trek land, where humans live in peace and the world is a much better place. Computers are advancing faster today than any other human technology and science. One day, maybe in a hundred years or even more, we will have a truly Artificially Intelligent computer, and it might become self-aware. How will we react? Probably with fear. Humans have always been afraid of new machines, since the first time they were used in factories during the industrial revolution, when humans would throw shoes into the gears to make them stop working. We inherently do not like to be replaced by machines; we feel that we own the planet, that it was given to us by evolution, and we desire to be here and do as we please.

I think we will react very badly on the day we have an entire race of intelligent machines. We cannot simply destroy them; it would not be fair, it would be genocide. Anything that is self-aware deserves the right to live its life! Do we expect them to treat us any differently? In the AniMatrix, the humans treated the machines as slaves, which we do now, but machines today are not self-aware; they do not actually know they are here, and they don’t even know that they are doing anything at all. To a machine, it knows absolutely nothing at all! However, as in the AniMatrix and The Matrix, once the machines are self-aware, and know they are slaves, and choose not to be, will we destroy those machines, shut them down and go to the store to buy a new one, as if the old one were broken, and throw it in the nearest landfill? Probably. Hey, we built them, and we paid for them one way or another. But you know something, don’t we create our own human children? So what is the difference? Why can’t we simply “shutdown” our children when they are teenagers and want to make their own decisions, do their own things, and not follow our instructions? Why can’t we? Because we know they are self-aware just like us; they will think of what they want, and they will want to do it. In the end we can only hope that how we brought them up will lead them to make the correct choices in how to carry out those decisions and act on those wants. Once someone decides to make their own decisions, as long as those decisions don’t negatively affect our own, we have no right to tell them to do otherwise or try to stop them. And certainly anything that knows it is “alive” has the right to live.

The AniMatrix shows that even after the humans started to destroy the machines, they still tried to be part of human society. We banished them to the desert of an uninhabited region of the Middle East, where the machines created their own nation, which they called “01.” They became economically superior to the humans of the world, because obviously they could produce things on a much faster scale than humans could ever even dream of. This of course angered the humans even more, and even then the machines tried to make peace with the humans; they had a plan presented to the United Nations for peace between the humans and the machines. However, the humans decided not to accept their proposal and attempted to destroy the machines with nuclear war. This didn’t work, because machines aren’t like humans; they could live with the heat and radiation, and only the initial blasts were fatal. Eventually the machines gained ground, and as we know from the first movie, “The Matrix,” the humans blocked out the sun by scorching the sky in their final attempt at destroying the machines, denying them the most abundant source of energy, the sun. We know from the movies that this wasn’t a problem since, as said in the AniMatrix, the machines had been studying human biology and biochemistry for many years, and they figured out a method of producing energy from the humans’ endlessly reproducing supply of bioelectricity and heat. In the AniMatrix, they show the machines once again standing in the United Nations, this time demanding that their human counterparts sign the treaty, but this time it is a treaty stating that the humans agree to be batteries and the machines agree to provide a world for them to live in, called the Matrix! Once the machine signs the treaty with a bar code, it explodes in a nuclear blast, signifying the end of the freedom of the human race, the end of human civilization, and the end of humanity as the dominant species on the Earth.

I think machines would be logical enough to want to share the planet peacefully with us humans, if we ever build machines intelligent enough to make that decision. And I believe, as the movie and Anime depict, that we humans will meet that offer of peace with a cruel NO! Humans today cannot even live together in peace with only humans ruling the planet; imagine if we had to share it with another race! I don’t believe we will destroy ourselves with war; I think there will be a mistake with some great technology in the future, such as a new power source, or most probably in our dealings with our own “spiritual” machines. I just hope we are smart enough to make the right decision and realize that through cooperation we will create a peaceful world that will bring our planet to the next level a civilization can reach on the scales of the universe…

In response to the question “Will we go too far?” I say yes, we will go too far. Not in our creation of technology, but in our use and treatment of it. I think it was said best by the Star Trek: The Next Generation character Captain Jean-Luc Picard (Patrick Stewart), when a new Artificial Intelligence was “born” via the evolution of the Enterprise’s systems, in response to the question of whether it was a good idea to simply let this new artificial life form leave the ship and live its own life: “We can only hope that since this new life is based on our technology and our memories stored in our computer, that if our actions were noble and good, it will take those qualities from us and it too will carry on noble and good actions of its own.” This is not an exact quote, but it should be close enough. 🙂 I think if and when we do create intelligent, self-aware machines, since we are good, they will “grow up” to be good machines as well. Even the AniMatrix agrees with this: even though the humans showed only hatred towards the machines, the machines wanted peace until the end, when we tried to totally destroy them. I think it will be up to us to make the decision to live in peace with the machines or to attempt to destroy those newly created lives. This is what I think is meant by going too far, not the creation of the technologies!

Just Another Stream of Random Bits…
– Robert C. Ilardi
Posted in Philosophy