At Data Automation Professionals, we follow a strict set of self-imposed guidelines to ensure our custom application designs are solid and our automation code is optimally structured and thoroughly documented. We rigorously use unit, integration, and regression testing to optimize performance and minimize issues. Our extensive collection of code libraries provides generic, tested utility functionality that minimizes the amount of custom code in any automation solution. We employ powerful code generators that instantly create low-level accessor and maintenance code, further reducing the amount of custom code.

Use Cases

We try to employ use cases wherever possible. A use case is a real usage scenario documenting the input, process, and expected output or result of a bounded situation. When capturing use cases, our goal is to gather enough of them to define the entire system. Overlapping use cases are often created to ensure ample coverage. Use cases can also cover exceptions, such as how missing or incorrect information is handled. The creation of use cases is your responsibility and represents an aspect of the overall solution design. Our responsibility is to challenge the set of use cases with the goal of ensuring ample coverage. We do this by asking you “what if” questions against the existing use cases; your response leads either to an affirmation of completeness or to a request for additional use cases covering the unhandled scenarios. Use cases not only help define the design, they also guide development, aid testing, and help ensure a custom solution that works for you.

Documentation

You may want, at some point, to bring your own Excel hot shots on board and pass maintenance responsibility on to them. We strive to document our code as much as possible, without burying the code itself, so that your folks can easily slip into our shoes and make the changes you need. But we don’t overdo it; some Excel jocks love documenting code so much that you have to look under the bed to find the actual code. Documentation also means smart, intuitive variable names. You will never see variables named “i” and “j” in our code!
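Here is a minimal sketch of the style we mean. The routine, sheet names, and column layout are hypothetical and only illustrate the naming and commenting conventions:

    ' Copies the open invoices from the import sheet to the summary sheet.
    ' (Illustrative only: the sheet names and column layout are hypothetical.)
    Public Sub SummarizeOpenInvoices()
        Dim importSheet As Worksheet        ' raw data as received
        Dim summarySheet As Worksheet       ' destination for open invoices only
        Dim invoiceRow As Long              ' current row in the import sheet
        Dim openInvoiceCount As Long        ' number of open invoices copied so far

        Set importSheet = ThisWorkbook.Worksheets("InvoiceImport")
        Set summarySheet = ThisWorkbook.Worksheets("InvoiceSummary")

        For invoiceRow = 2 To importSheet.Cells(importSheet.Rows.Count, 1).End(xlUp).Row
            If importSheet.Cells(invoiceRow, 3).Value = "Open" Then
                openInvoiceCount = openInvoiceCount + 1
                importSheet.Rows(invoiceRow).Copy summarySheet.Rows(openInvoiceCount + 1)
            End If
        Next invoiceRow
    End Sub

Descriptive names like invoiceRow and openInvoiceCount make the loop read like the business rule it implements.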

Structure

Structure, also known as structured or modular programming, is a classic technique for ensuring that code is organized into narrowly defined routines and that as many routines as possible are made generic so they can be reused. We are relentless structured programming hounds!
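As a small sketch of what we mean, here is a generic helper paired with a narrow, single-purpose routine that uses it (the sheet name and layout are hypothetical):

    ' Generic, reusable helper: returns the last used row in a given column.
    ' It knows nothing about any particular workbook, so it can live in a shared library.
    Public Function LastUsedRow(ByVal targetSheet As Worksheet, ByVal columnNumber As Long) As Long
        LastUsedRow = targetSheet.Cells(targetSheet.Rows.Count, columnNumber).End(xlUp).Row
    End Function

    ' Narrowly defined routine: clears the order entry area and nothing else.
    Public Sub ClearOrderEntries()
        Dim orderSheet As Worksheet
        Dim lastRow As Long

        Set orderSheet = ThisWorkbook.Worksheets("Orders")
        lastRow = LastUsedRow(orderSheet, 1)
        If lastRow >= 2 Then
            orderSheet.Range(orderSheet.Cells(2, 1), orderSheet.Cells(lastRow, 5)).ClearContents
        End If
    End Sub

The helper can be reused by every other routine that needs a last row, while the calling routine stays short and single-purpose.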

Testing

We employ all the classic testing strategies: unit, integration, and regression. Unit testing is employed as the code is being written and involves applying as many scenarios as we can think of to the routine being constructed. This is a good time to ensure that the routine has enough pre-flight code (normalization and validation of parameters) so that it doesn’t blow up when something bizarre is thrown at it.
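A minimal sketch, assuming a hypothetical part-number routine, shows both the pre-flight code and the kinds of scenarios a unit test throws at it:

    ' Routine under construction, with pre-flight normalization and validation of its parameter.
    Public Function NormalizePartNumber(ByVal rawPartNumber As String) As String
        Dim cleanedPartNumber As String
        cleanedPartNumber = UCase$(Trim$(rawPartNumber))    ' normalize whitespace and case
        If Len(cleanedPartNumber) = 0 Then Err.Raise 5, , "Part number is empty"
        If Len(cleanedPartNumber) > 20 Then Err.Raise 5, , "Part number is too long"
        NormalizePartNumber = cleanedPartNumber
    End Function

    ' Unit test run while the routine is being written: expected input plus something bizarre.
    Public Sub TestNormalizePartNumber()
        Debug.Assert NormalizePartNumber("  ab-123 ") = "AB-123"

        On Error Resume Next
        NormalizePartNumber ""                              ' should raise a clean error, not blow up later
        Debug.Assert Err.Number <> 0
        On Error GoTo 0
    End Sub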

Integration testing is a higher-level test process in which we take use cases and run end-to-end tests, ensuring that everything from user input and imported data, through the database reads and writes, to the final reports and exported files is solid and per specification.

Regression testing is used to handle situations where there is a reasonable probability of “whack-a-mole” and/or the inputs are complex. The “whack-a-mole” conundrum arises when we change one low-level routine to handle a high-level change and one or more other medium- or high-level routines break. Regression tests reveal these breaks early, while we are still buried in the code, rather than later. Of course, if we’ve been doing a good job with the structure, this should never be a problem. When we have complex inputs, regression testing helps ensure that adding functionality to handle a few new data use cases doesn’t break the processing of the thousands of other data use cases already implemented and tested.
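As a rough sketch, a regression run can be as simple as replaying every stored data use case after each change. The worksheet layout here is hypothetical, and it reuses the NormalizePartNumber routine from the unit testing sketch above:

    ' Replays every stored data use case so a fix in one routine cannot silently break the others.
    ' Assumed layout on the "RegressionCases" sheet: column A = raw input, column B = expected result.
    Public Sub RunRegressionCases()
        Dim caseSheet As Worksheet
        Dim caseRow As Long
        Dim lastCaseRow As Long
        Dim failureCount As Long

        Set caseSheet = ThisWorkbook.Worksheets("RegressionCases")
        lastCaseRow = caseSheet.Cells(caseSheet.Rows.Count, 1).End(xlUp).Row

        For caseRow = 2 To lastCaseRow
            If NormalizePartNumber(caseSheet.Cells(caseRow, 1).Value) <> caseSheet.Cells(caseRow, 2).Value Then
                failureCount = failureCount + 1
                Debug.Print "Regression case on row " & caseRow & " failed"
            End If
        Next caseRow

        Debug.Print failureCount & " of " & (lastCaseRow - 1) & " regression cases failed"
    End Sub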

Code Libraries

We’ve been at this forever and have been adding to and reworking our massive collection of routines in our code libraries. Having built hundreds of world-class solutions that run the gamut of tasks our clients have dreamed up, we have just about everything covered, from importing crazy data from the IT department’s decades-old database, to scraping web pages, to punching out PDF reports and PowerPoint presentations. These libraries not only save us time, they provide us with a stable base of code that has minimal risk of breaking under pressure.

Code Generators

Yes, we employ code generators that punch out the code we need to access various Excel objects without having to write a routine for each and every one by hand. One example is our worksheet accessor generator, which produces a complete set of accessors for the locally defined range names and all Excel Tables defined on a worksheet. Speaking of range names, we also employ a powerful tool for generating and maintaining range names that makes keeping all of them in order a breeze.
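For a rough idea of what the generated code looks like, here is a hand-written sketch of the kind of accessors such a generator emits for one worksheet; the sheet, range, and table names are hypothetical:

    ' Generated-style accessor for a locally defined range name.
    Public Function InvoiceDateCell() As Range
        Set InvoiceDateCell = ThisWorkbook.Worksheets("Invoices").Range("InvoiceDate")
    End Function

    ' Generated-style accessor for an Excel Table (ListObject) on the same worksheet.
    Public Function InvoicesTable() As ListObject
        Set InvoicesTable = ThisWorkbook.Worksheets("Invoices").ListObjects("tblInvoices")
    End Function

    ' Custom code then reads naturally and never hard-codes a cell address.
    Public Sub StampInvoiceDate()
        InvoiceDateCell.Value = Date
        Debug.Print InvoicesTable.ListRows.Count & " invoices currently in the table"
    End Sub

Because the accessors are regenerated whenever the worksheet changes, the hand-written custom code on top of them stays small and stable.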