This is Part 2 of Software Engineering Design – Concepts and Principles; you can read Part 1 here. If you have been redirected from Part 1, you can continue reading from here.
In Part 2 of Software Engineering Design – Concepts and Principles, I’ll be discussing software architecture design, data design, and requirement mapping.
Design has been described as a multistep process in which representations of data and program structure, interface characteristics, and procedural detail are synthesized from information requirements.
Architectural design represents the structure of data and program components that are required to build a computer-based system. Architectural design begins with data design and then proceeds to the derivation of one or more representations of the architectural structure of the system.
The software architecture of a program or computing system is the structure or structures of the system, which comprise software components, the externally visible properties of those components, and the relationships among them.
The architecture is not the operational software. Rather, it is a representation that enables a software engineer to:
1. Analyze the effectiveness of the design in meeting its stated requirements.
2. Consider architectural alternatives at a stage when making design changes is still relatively easy.
3. Reduce the risks associated with the construction of the software.
Software architecture considers two levels of the design Pyramid:
1. Data design: data design enables us to represent the data component of the architecture.
2. Architectural design: Architectural design focuses on the representation of the structure of software components, their properties, and interactions.
Three key reasons why software architecture is important:
1. Representations of software architecture enable communication between all parties (stakeholders) interested in the development of a computer-based system.
2. The architecture highlights early design decisions that will have a profound impact on all software engineering work that follows and, as important, on the ultimate success of the system as an operational entity.
3. Architecture “constitutes a relatively small, intellectually graspable model of how the system is structured and how its components work together”
Data design creates a model of data and/or information that is represented at a high level of abstraction (the customer/user’s view of data).
The structure of data has always been an important part of software design. Data design plays an important role in the following cases:
1. At the program component level, the design of data structures and the associated algorithms required to manipulate them is essential to the creation of high-quality applications.
2. At the application level, the translation of a data model into a database is pivotal to achieving the business objectives of a system.
3. At the business level, the collection of information stored in disparate databases and reorganized into a “data warehouse” enables data mining or knowledge discovery that can have an impact on the success of the business itself.
Data Modeling, Data Structures, Databases, and the Data Warehouse:
The data objects defined during software requirements analysis are modeled using entity/relationship diagrams and the data dictionary.
The data design activity translates these elements of the requirements model into data structures at the software component level and, when necessary, a database architecture at the application level.
Data mining techniques, also called knowledge discovery in databases (KDD), navigate through existing databases in an attempt to extract appropriate business-level information.
A data warehouse encompasses all data that are used by a business. The intent is to enable access to “knowledge” that might not otherwise be available.
Characteristics that differentiate a data warehouse from a typical database:
1. Subject orientation: A data warehouse is organized by major business subjects, rather than by business process or function.
2. Integration: Regardless of the source, the data exhibit consistent naming conventions, units and measures, encoding structures, and physical attributes, even when inconsistency exists across different application-oriented databases.
3. Time variance: In a data warehouse, data can be accessed at a specific moment in time (e.g., customers contacted on the date that a new product was announced to the trade press). The typical time horizon for a data warehouse is five to ten years.
4. Non-volatility: Unlike typical business application databases that undergo a continuing stream of changes (inserts, deletes, and updates), data are loaded into the warehouse, but after the original transfer, the data do not change.
Data Design at the Component Level:
Data design at the component level focuses on the representation of data structures that are directly accessed by one or more software components.
Principles for data specification:
1. The systematic analysis principles applied to function and behavior should also be applied to data.
2. All data structures and the operations to be performed on each should be identified.
3. A data dictionary should be established and used to define both data and program design.
4. Low-level data design decisions should be deferred until late in the design process.
5. The representation of data structure should be known only to those modules that must make direct use of the data contained within the structure.
6. A library of useful data structures and the operations that may be applied to them should be developed.
7. A software design and programming language should support the specification and realization of abstract data types.
These principles form a basis for a component-level data design approach that can be integrated into both the analysis and design activities.
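Principles 5 and 7 above can be illustrated with a small abstract data type. This is a minimal sketch (the `Stack` class and its operations are my own illustration, not from the original text): clients use only the published operations, and the internal representation stays hidden.

```python
# A minimal abstract data type: the representation (a Python list) is known
# only to this class; clients interact solely through push/pop/is_empty.
class Stack:
    def __init__(self):
        self._items = []  # internal representation, not exposed to clients

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items


s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2 (last in, first out)
```

Because callers never touch `_items` directly, the representation could later change (say, to a linked list) without affecting any client module, which is exactly the point of principle 5.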
The software that is built for computer-based systems also exhibits one of many architectural styles. Each style describes a system category that encompasses:
1. A set of components (e.g., a database, computational modules) that perform a function required by a system;
2. A set of connectors that enable “communication, coordination, and cooperation” among components;
3. Constraints that define how components can be integrated to form the system; and
4. Semantic models that enable a designer to understand the overall properties of a system by analyzing the known properties of its constituent parts.
Commonly used architectural patterns:
1. Data-Centered Architecture:
A data store (e.g., a file or database) resides at the center of this architecture and is accessed frequently by other components that update, add, delete, or otherwise modify data within the store.
In some cases the data repository is passive. That is, client software accesses the data independent of any changes to the data or the actions of other client software.
Alternatively, data can be passed among clients using a blackboard mechanism, which sends notifications to client software when data of interest to a client changes.
Data-centered architectures promote integrability. That is, existing components can be changed and new client components can be added to the architecture without concern about other clients.
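The blackboard variant can be sketched in a few lines. This is a hypothetical illustration (the `Blackboard` class, keys, and callbacks are invented for the example): a central store notifies subscribed clients when data they care about changes, so clients can be added or removed without affecting one another.

```python
# Sketch of a data-centered (blackboard) architecture: a central store that
# notifies subscribed client callbacks whenever a watched key is updated.
class Blackboard:
    def __init__(self):
        self._data = {}
        self._subscribers = {}  # key -> list of callbacks to notify

    def subscribe(self, key, callback):
        self._subscribers.setdefault(key, []).append(callback)

    def write(self, key, value):
        self._data[key] = value
        for cb in self._subscribers.get(key, []):
            cb(value)  # notify every client interested in this key

    def read(self, key):
        return self._data.get(key)


board = Blackboard()
seen = []
board.subscribe("temperature", seen.append)  # one client registers interest
board.write("temperature", 21.5)             # another client updates the store
print(seen)  # [21.5]
```

Note how the writing client knows nothing about the subscriber; both depend only on the store, which is what gives the style its integrability.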
2. Data-flow architectures:
This architecture is applied when input data are to be transformed through a series of computational or manipulative components into output data.
A pipe-and-filter pattern has a set of components, called filters, connected by pipes that transmit data from one component to the next. Each filter works independently and does not require knowledge of the workings of its neighboring filters.
If the data flow degenerates into a single line of transforms, it is termed batch sequential.
This pattern accepts a batch of data and then applies a series of sequential components (filters) to transform it.
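A pipe-and-filter pipeline maps naturally onto Python generators. The sketch below is illustrative (the filter names and the text-processing task are my own choices): each filter consumes an input stream and yields a transformed stream, with no knowledge of its neighbors.

```python
# Pipe-and-filter sketch: each filter is an independent generator that
# transforms its input stream without knowing what comes before or after it.
def read_source(lines):
    for line in lines:
        yield line

def strip_blanks(stream):
    for line in stream:
        if line.strip():
            yield line  # drop empty lines

def to_upper(stream):
    for line in stream:
        yield line.upper()

# Composing the filters forms the pipeline; run over one batch of data,
# this single line of transforms is the "batch sequential" degenerate case.
pipeline = to_upper(strip_blanks(read_source(["hello", "", "world"])))
print(list(pipeline))  # ['HELLO', 'WORLD']
```

Swapping a filter, or inserting a new one between existing stages, requires no change to the other filters.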
3. Call and return architectures: This architectural style enables a software designer (system architect) to achieve a program structure that is relatively easy to modify and scale.
Substyles of this type:
- Main program/subprogram architectures: This classic program structure decomposes function into a control hierarchy where a “main” program invokes a number of program components, which in turn may invoke still other components.
- Remote procedure call architectures: The components of a main program/subprogram architecture are distributed across multiple computers on a network.
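The main program/subprogram substyle can be sketched as a small control hierarchy. The function names and the sorting task here are illustrative only: a top-level "main" module makes decisions and delegates, while subordinate modules do the actual input, computation, and output work.

```python
# Main program/subprogram sketch: "main" is pure control and invokes
# subordinate components, which could in turn invoke still other components.
def read_input():
    return [3, 1, 2]           # input worker (stands in for real I/O)

def transform(data):
    return sorted(data)        # computation worker

def write_output(data):
    return ",".join(str(x) for x in data)  # output worker

def main():
    # top-level module: decision making and sequencing, no real work
    data = read_input()
    result = transform(data)
    return write_output(result)


print(main())  # 1,2,3
```

In a remote procedure call architecture the structure is the same, except that `transform` (say) would execute on another machine behind a network call.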
4. Object-oriented architectures: The components of a system encapsulate data and the operations that must be applied to manipulate the data. Communication and coordination between components is accomplished via message passing.
5. Layered architectures: In this architectural style, numbers of different layers are defined, each accomplishing operations that progressively become closer to the machine instruction set. At the outer layer, components service user interface operations.
At the inner layer, components perform operating system interfacing. Intermediate layers provide utility services and application software functions.
Fig: Layered Architecture
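A layered structure can be sketched with three functions, each calling only the layer directly beneath it. This is a hypothetical example (the configuration-file task, function names, and file contents are invented): the outer layer serves the user, the intermediate layer provides a utility service, and the inner layer stands in for operating system interfacing.

```python
# Layered-architecture sketch: each layer depends only on the layer below.
def os_layer(path):
    # inner layer: operating system interfacing (simulated file read)
    return {"config.txt": "debug=true"}[path]

def utility_layer(path):
    # intermediate layer: utility service that parses raw settings text
    raw = os_layer(path)
    return dict(pair.split("=") for pair in raw.split(";"))

def ui_layer(path, key):
    # outer layer: services a user interface operation
    settings = utility_layer(path)
    return f"{key} is {settings[key]}"


print(ui_layer("config.txt", "debug"))  # debug is true
```

Because `ui_layer` never touches `os_layer` directly, the inner layer could be replaced (e.g., with real file I/O) without changing the outer layers.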
Mapping Requirements into a Software Architecture
Software requirements can be mapped into various representations of the design model.
The mapping technique, sometimes called structured design, has its origins in earlier design concepts that stressed modularity, top-down design, and structured programming.
Structured design is often characterized as a data flow-oriented design method because it provides a convenient transition from a data flow diagram to software architecture. The transition from information flow (represented as a DFD) to program structure is accomplished as part of a six-step process:
- The type of information flow is established;
- Flow boundaries are indicated;
- The DFD is mapped into program structure;
- Control hierarchy is defined;
- Resultant structure is refined using design measures and heuristics; and
- The architectural description is refined and elaborated.
Recalling the fundamental system model (level 0 data flow diagram), information must enter and exit software in an “external world” form.
For example, data typed on a keyboard, tones on a telephone line, and video images in a multimedia application are all forms of external world information.
Such externalized data must be converted into an internal form for processing. Information enters the system along paths that transform external data into an internal form. These paths are identified as incoming flow. At the kernel of the software, a transition occurs. Incoming data are passed through a transform center and begin to move along paths that now lead “out” of the software. Data moving along these paths are called outgoing flow. The overall flow of data occurs in a sequential manner and follows one, or only a few, “straight line” paths.
When a segment of a data flow diagram exhibits these characteristics, transform flow is present.
The fundamental system model implies transform flow; therefore, it is possible to characterize all data flow in this category. However, information flow is often characterized by a single data item, called a transaction, that triggers other data flow along one of many paths. When a DFD takes this form, transaction flow is present.
Transaction flow is characterized by data moving along an incoming path that converts external-world information into a transaction. The transaction is evaluated and, based on its value, flow along one of many action paths is initiated. The hub of information flow from which many action paths emanate is called a transaction center.
It should be noted that, within a DFD for a large system, both transform and transaction flow may be present. For example, in a transaction-oriented flow, information flow along an action path may have transform flow characteristics.
Transform mapping is a set of design steps that allows a DFD with transform flow characteristics to be mapped into a specific architectural style.
Review the fundamental system model. The fundamental system model encompasses the level 0 DFD and supporting information. In actuality, the design step begins with an evaluation of both the System Specification and the Software Requirements Specification.
Review and refine data flow diagrams for the software. Information obtained from analysis models contained in the Software Requirements Specification is refined to produce greater detail.
Determine whether the DFD has transform or transaction flow characteristics. In this step, the designer selects global (software wide) flow characteristics based on the prevailing nature of the DFD. In addition, local regions of transform or transaction flow are isolated. These subflows can be used to refine program architecture derived from a global characteristic described.
Isolate the transform center by specifying incoming and outgoing flow boundaries.
The transforms (bubbles) that constitute the transform center lie within the two shaded boundaries that run from top to bottom in the figure. An argument can be made to readjust a boundary (e.g., an incoming flow boundary separating read sensors and acquire response info could be proposed). The emphasis in this design step should be on selecting reasonable boundaries, rather than lengthy iteration on placement of divisions.
Perform “first-level factoring.” Program structure represents a top-down distribution of control. Factoring results in a program structure in which top-level modules perform decision making and low-level modules perform most input, computation, and output work. Middle-level modules perform some control and do moderate amounts of work. When transform flow is encountered, a DFD is mapped to a specific structure (a call and return architecture) that provides control for incoming, transform, and outgoing information processing.
Perform “second-level factoring.” Second-level factoring is accomplished by mapping individual transforms (bubbles) of a DFD into appropriate modules within the architecture.
Beginning at the transform center boundary and moving outward along incoming and then outgoing paths, transforms are mapped into subordinate levels of the software structure. Two or even three bubbles can be combined and represented as one module (recalling potential problems with cohesion) or a single bubble may be expanded to two or more modules. Practical considerations and measures of design quality dictate the outcome of second level factoring. Review and refinement may lead to changes in this structure, but it can serve as a “first-iteration” design.
Refine the first-iteration architecture using design heuristics for improved software quality. First-iteration architecture can always be refined by applying concepts of module independence. Modules are exploded or imploded to produce sensible factoring, good cohesion, minimal coupling, and most important, a structure that can be implemented without difficulty, tested without confusion, and maintained without grief.
In many software applications, a single data item triggers one or a number of information flows that affect a function implied by the triggering data item. In this section we consider design steps used to treat transaction flow.
1. Review the fundamental system model.
2. Review and refine data flow diagrams for the software.
3. Determine whether the DFD has transform or transaction flow characteristics.
(Steps 1, 2, and 3 are identical to corresponding steps in transform mapping.)
Identify the transaction center and the flow characteristics along each of the action paths. The location of the transaction center can be immediately discerned from the DFD. The transaction center lies at the origin of a number of action paths that flow radially from it.
Map the DFD into a program structure amenable to transaction processing.
Transaction flow is mapped into an architecture that contains an incoming branch and a dispatch branch. The structure of the incoming branch is developed in much the same way as transform mapping. Starting at the transaction center, bubbles along the incoming path are mapped into modules. The structure of the dispatch branch contains a dispatcher module that controls all subordinate action modules. Each action flow path of the DFD is mapped to a structure that corresponds to its specific flow characteristics.
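The incoming branch and dispatch branch can be sketched as code. This is an illustrative example only (the transaction format, the banking-style handlers, and all names are invented): an incoming function converts raw external input into a transaction, and a dispatcher module routes it to exactly one subordinate action module.

```python
# Transaction-mapping sketch: an incoming branch parses raw input into a
# transaction, and a dispatcher routes it to one of several action paths.
def parse_transaction(raw):
    # incoming branch: external-world form -> internal (kind, payload) pair
    kind, _, payload = raw.partition(":")
    return kind, payload

def handle_deposit(payload):    # action path (names purely illustrative)
    return f"deposited {payload}"

def handle_withdraw(payload):   # another action path
    return f"withdrew {payload}"

# the dispatcher's table of subordinate action modules
DISPATCH = {"deposit": handle_deposit, "withdraw": handle_withdraw}

def process(raw):
    kind, payload = parse_transaction(raw)
    handler = DISPATCH.get(kind)
    if handler is None:
        raise ValueError(f"unknown transaction: {kind}")
    return handler(payload)  # dispatcher selects exactly one action path


print(process("deposit:100"))  # deposited 100
```

Adding a new action path means adding one handler and one table entry; the incoming branch and the other handlers are untouched.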
Factor and refine the transaction structure and the structure of each action path.
Each action path of the data flow diagram has its own information flow characteristics. We have already noted that transform or transaction flow may be encountered. The action path-related “substructure” is developed using the design steps discussed in this and the preceding section.
Refine the first-iteration architecture using design heuristics for improved software quality. This step for transaction mapping is identical to the corresponding step for transform mapping. In both design approaches, criteria such as module independence, practicality (efficacy of implementation and test), and maintainability must be carefully considered as structural modifications are proposed.
This completes the tutorial for computer students on Software Engineering Design Concepts and Principles – Part 2; I hope it has added to your understanding. Subscribe for more guides on Software Engineering.