PDoc-Software Development Procedure

Created by: Lester Caine, Last modification: 12 Oct 2010 (13:07 BST)

Outline of current status

All software is developed using a modular approach, based broadly on an Object Oriented approach. Actual code development is supported by two strands: C++ for high level code, and Assembly Code for all firmware. The use of Assembly Code is justified for providing code on low level processor elements of the system for which good C and C++ compilers are not available. In addition, driver level modules for PC based systems also benefit from the tight coding provided by Assembly Code, built on top of the structure provided by Microsoft. All C++ programming is carried out in the Borland Development Environment, and is therefore structured by the facilities provided in that environment.

Due to the presence of a large amount of legacy code, older development environments have been maintained, along with links to external contractors. Documentation exists, or is being created, for all external code, and where the functionality of that code is required in current projects, it is being ported to the Borland environment.

With the rapidly changing equipment base, the maintenance and development of legacy systems has become difficult. To that end, some systems are only available on older versions of operating systems and equipment, and copies of these are maintained as part of the production release of a version of a product. Where the product is supported by an ongoing development cycle and on-site maintenance, previous issues of the software are archived once all users have been updated to later issues. Where a user opts to cease maintenance, support for that site becomes unavailable, and the user is removed from the upgrade cycle. If they restore maintenance, the site will be brought up to date as part of the agreement. These developments may involve upgrades of hardware as well as software, and are covered by the maintenance agreements.

In order to reduce the overhead of ongoing support, archived versions are ignored once an update of the development environment is undertaken. Only currently supported products are reviewed to ascertain the need to retain a particular state of the development environment. Maintenance releases of the development software will be reflected in all software being handled with the current state of the development system, on the basis that any bug fixes that are implemented may improve the performance of the target systems.

Major changes to the development environment will be assessed as required. The Borland Delphi environment was ignored as not part of the main path, but the current release of Borland C++ Builder will need more careful investigation. At present it is believed that it does not support 16 bit code and is not compatible with Windows 3.11, but its tools for network development are more powerful than those currently available.

Overall Design Procedure for New Product

New products can arise as a result of an evolutionary process as well as the result of top down design. In most cases relating to software operation, the use of the system is easier to assess by actually playing with it than by developing a complex design brief. The modular nature of C++, along with the 'design rules' laid down by the Windows operating system, means that shells of a design can be constructed to test out particular ideas, and the best ones put together to build the product. The word 'final' was deliberately omitted from the previous sentence: most of the current products, while fully operational, are continually being expanded and extended as required, so there may never be a 'final' product.

Whilst the existence of bugs in the software products of today is not appreciated, they are inevitable. Most of the 'outstanding' bugs in Microsoft products can be avoided, but are usually not found until they occur. A lot of these 'bugs' are actually simple quirks in the way things are done, and do not affect the reliability of the system. That being the case, the 'bug' reporting system is an essential part of the maintenance of the system, and must result in information accurate enough to recreate and eliminate a bug. A number of bug fixes implemented in recent years correct for problems in the operating system, rather than real faults in the software, and some occasional problems still occur that are known to be caused by the operating system, but which cannot be recreated. Because of these cases the systems have been designed so that a complete restart of the system can be undertaken quickly, hopefully without loss of information. The Codebase Library is still in use, despite the existence of more powerful replacements, as these replacements are still proving less than reliable. Even the database manager included with Borland C++ is quite prone to problems if power is removed from the system. The recommended solution is a UPS on the computer, but this is neither practical nor acceptable for small systems in the field. Software needs to be engineered to be safe as its first priority, rather than blaming someone else.
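A quick restart without loss of information depends on never leaving a file half-written when power is removed. One common technique for this (sketched here in portable C++ rather than the Borland/Codebase code the text actually describes; safe_write is a hypothetical helper, not a function from the real system) is to write the new contents to a temporary file and only then rename it over the original:

```cpp
#include <cstdio>
#include <fstream>
#include <string>

// Write the new contents to a temporary file first, then replace the
// original. If power is lost mid-write, the original file is still
// intact; at worst a stale temporary file is left behind.
bool safe_write(const std::string& path, const std::string& contents) {
    const std::string tmp = path + ".tmp";
    {
        std::ofstream out(tmp.c_str(), std::ios::binary | std::ios::trunc);
        if (!out) return false;
        out << contents;
        out.flush();
        if (!out) return false;
    } // stream closed (data handed to the OS) before the rename
    std::remove(path.c_str()); // some platforms refuse to rename over a file
    return std::rename(tmp.c_str(), path.c_str()) == 0;
}
```

The window of vulnerability shrinks to the rename itself, which the operating system can normally complete or abandon as a unit, rather than spanning the whole write.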

The above explanations are intended as background to a design methodology that is not normally acceptable, but which fits our method of working. The top down design sets goal posts that must be fitted within, but these may be moved (by agreement) if they are restrictive. The operator may not actually see the structure that is hidden behind the system, so that operational changes can be made without changing the operator's view. Similarly, the operator's view may change without affecting the actual output of the system. Experience has been built up by developing systems that provide the basic core operations, and then expanding as feedback results from real use of the systems. The core operations are defined by the specification, but actual operational requirements need the input of real users. Our method of working allows changes to be built into the system as a result of the feedback from users.

Rules for Design Review

Most projects are undertaken on a custom basis, and therefore each one requires discussion with the customer as to how it will be undertaken. The specification supplied will probably not contain all of the information necessary for a full top down design, and so each stage will require customer input. The aim of the design review is to provide an overall shell within which the project may evolve.

The first step is to build a structure which, while flexible, will form the basis on which the project can be built. Modules (Objects in C++ talk) provide management of functions in a manner that can be changed without a complete rebuild of the system. This is more normally referred to as Functional Decomposition. Complex systems do not always lend themselves to simple functional decomposition. The cultural difference between the users and the engineers is often the cause of misunderstanding, and this can be further complicated when the client's system specifiers are not the actual end users. Diplomacy in the project review stage can be quite difficult, but the most successful results are achieved when actual users are involved in it.
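The idea of a module that can be changed without a complete rebuild of the system can be sketched as follows (a minimal illustration in portable C++; the class names are hypothetical, not taken from the actual projects): the rest of the system is written against an abstract interface, so a concrete implementation can be replaced behind it without touching the callers.

```cpp
#include <string>

// The module's specification is captured as an abstract interface.
// Callers depend only on this class, never on a concrete implementation.
class Logger {
public:
    virtual ~Logger() {}
    virtual std::string format(const std::string& msg) const = 0;
};

// One implementation of the module...
class PlainLogger : public Logger {
public:
    std::string format(const std::string& msg) const override {
        return msg;
    }
};

// ...which can be swapped for another without rebuilding the callers.
class TaggedLogger : public Logger {
public:
    std::string format(const std::string& msg) const override {
        return "[LOG] " + msg;
    }
};

// The rest of the system is written against Logger, not a concrete type.
std::string report(const Logger& log, const std::string& msg) {
    return log.format(msg);
}
```

Only the translation unit containing the chosen implementation needs recompiling when the module changes; everything written against the interface is untouched.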

The next step is to sort out the hardware onto which the system is to be fitted; in fact it will be necessary to do this as part of the Functional Decomposition, but the final system achieved may be compromised if it is shoehorned into the wrong hardware. While it is expected that the main processing power of the system will be PC based, that may not be the best solution to the problem, and so flexibility in selecting the base must be available. The method of working may then change depending on the target system identified, as the development path will depend on the tools available.

In all cases the project will be broken down into identifiable modules and a detailed specification will be built for each module. This would need to include details of interface standards, and links to other modules in the system. The sub-specifications may be passed to external contractors to carry out the work required, using the best options for each part of a project.

At this stage most formal design procedures start referring to Data Flow Analysis, Control and Data Structures, or Object-Orientation Methodology. This is fine if the customer has a formal structure, but it will probably not match an in-house selection. The actual approach should be selected on the basis of the module under discussion, in a manner that the customer can follow, in order to prevent misunderstandings. The formal requirements for Control and Data Structures do need to be kept in mind, as we must identify all of the source information and the structure of the output information, along with the operations that are required. It should not, however, be coloured by jargon that the customer does not fully understand. The different design methodologies all claim that their method is best, while in reality they are just different languages.

RULE NUMBER ONE - "You get what you want" not "You asked for it, you got it"

(If the customer does not understand the question, change it. )

By this stage of the design procedure, most of the problems will have been solved, and the real work can begin. The target system will introduce jargon and rules that will disrupt the design flow, and apply restrictions that may not be immediately apparent, but this will have to be accepted by both sides as each module is refined. Too much flexibility can lead to 'thrashing', where there are a number of equally acceptable paths. Changes in the path selected in other modules can result in the selection in the first module becoming less preferred. Cost, time or performance constraints can help to stabilise the design, but may involve the construction of that section of the project in order to establish the correct solution. While some modules can be well contained, others will be dependent on one another for input and output.

If the target system is to be PC based, then the currently preferred method of working is to build the application code under Borland C++. The compilers can produce code for DOS, Windows and OS/2 systems in both 16 and 32 bit formats. The use of Windows 95 is currently being avoided, with 32 bit platforms being supported by Windows NT. The reason for this is that we are having to develop interface code for custom hardware, and since different drivers are needed for each, we are ignoring the probable 'dead end'. (If a proper design review had been done on Windows 95 we would now have a much better Windows NT, rather than yet another bodge job, so even the big boys can get it wrong.)

Most systems will require an operator interface, and users are now expecting a Windows look and feel, so it is expected that this will be the preferred path. The Borland Environment allows both 3.11 (16 bit) and NT (32 bit) versions of a system to be coded at the same time, and provides the ideal interface between legacy 3.1 and 3.11 systems and users, and new 'power' users with NT based systems. OWL (Borland's Object Window Library) provides a well defined structure on which to build programs, and although it does have some problems, which are accentuated by the restrictions on access to hardware from Windows, it has proven a stable base on which to build. For that reason, it is used as the 'standard' for all Windows code.

Creating the screen layouts in the Resource Workshop, and including them in the design review, allows programs for the operator interface to be built using the exact information that was used to expand the requirement. As changes are made to the functionality of the system, they can be copied back to the master specification.

The Borland IDE provides a fairly strict regime for the structure of code sections of a program, and a library of additional modules (Objects) is being constructed to provide standard functions across projects. All new projects are built to some extent on existing modules, and the methods currently used on the operators' displays can be used to expand on those areas of the specification. 'Dirty' mods to a current software package are useful to provide answers on a discussion point in a design review, and that code can then be ported to the new project directory.

Managing The Master Specification

Fundamental to this method of working is a master reference document. This may well be electronic, with links to each of the necessary sub-systems, specifications and standards. It will, naturally, cover both hardware and software, and updates to this document will control the ongoing development of the project. A version control mechanism which maintains a library of change notes will be essential to formalising this aspect of the design process.
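The version control mechanism described above could be captured along the following lines (a sketch only; the record layout and names are assumptions, not the actual system): each update to the master document is recorded as a change note, and the document's current issue number falls out of the history.

```cpp
#include <string>
#include <vector>

// Hypothetical change-note record: one entry per update to the
// master reference document.
struct ChangeNote {
    int         issue;    // document issue this change produced
    std::string date;     // when the change was made
    std::string summary;  // what changed, and why
};

// The master document carries its full change history, so any issue
// can be accounted for from the library of change notes.
struct MasterDocument {
    std::string             title;
    std::vector<ChangeNote> history;

    int current_issue() const {
        return history.empty() ? 0 : history.back().issue;
    }
    void record(const std::string& date, const std::string& summary) {
        history.push_back(ChangeNote{current_issue() + 1, date, summary});
    }
};
```

The point is that the issue number is never edited by hand: it is derived from the change-note library, which is what formalises this aspect of the design process.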

The next step is to start building a library of online specifications that can be used as required in the development process. This WILL involve links to the internet, where the current copies of most PC related specifications can be found, along with details of a large amount of current hardware and software. Correlation of information in an electronic form is now becoming essential in order to manage even the simplest of projects, and since all of the data can be held in an electronic form, the re-use and regurgitation of large volumes of documentation can be avoided. What is currently lacking is a formal system for managing change notes on this vast archive of information. Links to historic issues of specifications are lost when a new one is posted, so an off-line library of information will have to be maintained as part of the Change Note system.

As all drawings are currently available in electronic form, control of all the hardware aspects of a project can be maintained, and links included to this information. In addition, the project costs and planning can also be maintained on the system. Parts lists in an Excel spreadsheet format with a set of procedures would form the basis of a hierarchic structure, with hardware, software, documentation and time included in the master 'Parts List' for a project.
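The hierarchic structure behind the master 'Parts List' can be sketched as a simple tree (a hypothetical illustration in C++, standing in for the Excel spreadsheet and procedures the text describes): each node carries its own cost, and the cost of an assembly is rolled up from its parts, whether those parts are hardware, software, documentation or time.

```cpp
#include <string>
#include <vector>

// Hypothetical parts-list node: an item with its own cost plus any
// sub-assemblies and components beneath it.
struct Part {
    std::string       name;
    double            own_cost;  // cost of this item itself
    std::vector<Part> parts;     // children in the hierarchy

    // An assembly's total is its own cost plus the rolled-up totals
    // of everything below it in the tree.
    double total_cost() const {
        double total = own_cost;
        for (const Part& p : parts)
            total += p.total_cost();
        return total;
    }
};
```

A project then becomes the root node, with hardware, software, documentation and time as its top-level parts, and the project cost is simply the root's rolled-up total.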