Implementation Approach Philosophies
The waterfall approach to implementing an application requires that a designer confer with one or more representatives of the end user organization and write down all the specifications of the application. Usually, the specifications come in a set of functional documents or use cases, written so that the end user can easily read and understand the documents. The end user signs off on these documents, and the documents are then picked up by the technical design team that designs the application, creating a number of artifacts such as class model diagrams, state diagrams, activity diagrams, and data models. The aim of this phase is to write everything down in such detail that a developer will have no problem creating the necessary code.

There is a formal handover of the design to both the development team and the test team. After handover, the development team starts coding, and the test team uses the technical design in combination with the use cases to create test cases and test scenarios. Once the development team is finished coding, the code is delivered to the test team. The test team performs the tests it designed based on the requirements and detailed design. Any problems are fixed by the development team.

Once the process of testing and fixing is completed, the application is given to the end user for an acceptance test. The end user performs a final check to see whether the application conforms to the initial requirements. If approved, he or she signs off on the finished product, and the project is done.
A project can have more or fewer phases when using the waterfall approach, but the main characteristic is a very formal start and end of every phase, with very formal deliverables.
The advantage of the waterfall approach is the high accountability of the team responsible for each phase. It is clear what they need to deliver, when they need to deliver it, and to whom. Often, the development team will not need to interact with the user at all, which can be very useful when outsourcing development to a different country.
The main disadvantage of the waterfall approach is that in an environment in which everything is organized so formally, flexibility in responding to change is decreased. Even change needs to be organized. Very few companies seem to do this effectively, often resulting in a significant increase in overhead costs. To manage the costs of a project, some companies even go as far as to delay any change in requirements until after the initial delivery of the application, effectively delivering an application that does not match end user needs.
Many long-running software development projects have run over their budgets and do not deliver the product on time. The premise of the agile software development philosophy is to minimize risk by developing software in short time boxes, called iterations, which typically last one to four weeks. Each iteration is like a miniature software project of its own and includes all of the tasks necessary to release the increment of new functionality: planning, requirements analysis, design, coding, testing, and documentation. Although an iteration might not add enough functionality to warrant releasing the product, an agile software project intends to be capable of releasing new software at the end of every iteration. At the end of each iteration, the team re-evaluates project priorities.
The goals of agile software development are to achieve customer satisfaction through rapid, continuous delivery of useful software; to always build what the customer needs; to welcome, rather than oppose, late changes in requirements; to adapt regularly to changing circumstances; and to maintain close, daily cooperation between business people and developers, with face-to-face conversation as the best form of communication.
The main advantage of agile software development is flexibility in dealing with change, always aiming to deliver according to business needs. The drawback, of course, is an increase in complexity in managing scope, planning, and budget. Another common risk is limited attention to (technical) documentation.
Incremental software development is a mix of agile and waterfall development. An application is designed, implemented, and tested incrementally so that each increment can be delivered to the end user. The project is not finished until the last increment is finished. It aims to shorten the waterfall by defining intermediate increments and by using some of the advantages of agile development. Based on feedback received on a previous increment, adjustments can be made when delivering the next increment. The next increment can then consist of new code as well as modifications of code delivered earlier.
The advantage is that the formalities remain in place while change management becomes easier. The disadvantage is that the cost of testing and deploying an application a number of times is higher than doing so just once.
Program Flow Control
Choosing an approach for program flow control is very much an architectural task. The aim is to create a blueprint of your application in which, once you start adding functionality and code, everything just seems to have its own place. If you’ve ever reviewed or written a high-quality piece of code, you understand this principle.
The first step in designing your program flow is to organize the code, laying down a set of rules to help create a blueprint, or outline, of the application. Maintenance, debugging, and fixing errors will go more smoothly because code is located in a logical place. After doing the groundwork, you can choose an approach for implementing your application logic.
Design patterns should play an important part in designing your program flow control. Over the years, a lot of code has been written and a lot of solutions have been designed for what turn out to be repeating problems. These solutions are laid down in design patterns. Applying a design pattern to a common software design issue will help you create solutions that are easily recognizable and can be implemented by your peers. Unique problems will still require unique solutions, but you can use design patterns to guide you in solving them.
Creating the Blueprint
The first step is to consider logical layers. Note that layers are not the same as tiers; the two are often confused or even assumed to be identical.
Layers versus tiers
Layers concern creating boundaries in your code. The top layer might have references to code in lower layers, but a layer can never have a reference to code in a higher layer. Tiers concern physically distributing layers across multiple computers. For example, in a three-tier application, the user interface is designed to run on a desktop computer, the application logic is designed to run on an application server, and the database runs on a dedicated database server, and the code on each tier can consist of multiple layers.
Figure 8-1: Basic three-layer organization
Layering refers to levels of abstraction. The layering shown in Figure 8-1 applies to most applications. These levels are also referred to as the three principal layers and might go by various other names. As a rule, code in the presentation layer may call on services in the application logic layer, but the application logic layer should not be calling methods in the presentation layer. The presentation layer should never call the data access layer directly, because doing so would bypass the responsibilities implemented by the application logic layer. The data access layer should never call the application logic layer.
Layers are just an abstraction, and probably the easiest way to implement the layering is to create folders in your project and add code to the appropriate folder. A more useful approach would be to place each layer in a separate project, thus creating separate assemblies. The advantage of placing the application logic in a library assembly is that this will enable you to create unit tests, using Microsoft Visual Studio or NUnit, to test the logic. It also creates flexibility in choosing where to deploy each layer.
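To sketch why placing the application logic in its own library assembly pays off, consider a small, self-contained example. The class and the discount rule here are hypothetical illustrations, not taken from the chapter; the point is only that logic isolated from the presentation layer can be exercised by a test project.

```csharp
using System;

// Hypothetical application-logic class that would live in its own library
// assembly, referenced by both the presentation layer and a test project.
public class OrderCalculator
{
    // Illustrative business rule: orders over 100.00 receive a 10% discount.
    public decimal Total(decimal subtotal)
        => subtotal > 100m ? subtotal * 0.9m : subtotal;
}
```

A Visual Studio or NUnit test project can then reference this assembly and assert, for example, that `new OrderCalculator().Total(200m)` equals `180m`, without ever starting the user interface.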
In an enterprise application, you should expect to have multiple clients for the same logic. In fact, the very thing that makes an application an enterprise application is that it will be deployed to three tiers: client, application server, and database server. The Microsoft Office Access application created by the sales department in your enterprise, although very important to the sales department, does not constitute an enterprise application.
Note that the application logic and data access layers are usually deployed together to the application server. Part of drawing up the blueprint is choosing whether to access the application server by using .NET remoting or Web services. Regardless of your choice, you will add some code to the presentation layer to easily access the remote services. If you’re using Web services to access the services on your application server, Visual Studio .NET will do the work for you and generate proxy code, automatically providing an implementation of the remote proxy pattern.
Adding Patterns to Layers
The three basic layers provide a high-level overview. Let’s add a couple of structural patterns to create a robust enterprise architecture. The result is shown in Figure 8-2.
Focus on the application logic layer. Figure 8-2 shows that access to the application logic goes through the façade pattern. A façade is an object that provides a simplified interface to a larger body of code, such as a class library. A façade can reduce the dependencies of outside code on the inner workings of a library, because most code uses the façade, thus allowing more flexibility in developing the system. To do so, the façade provides a coarse-grained interface to a collection of fine-grained objects.
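A minimal sketch of such a façade follows. The fine-grained classes and the `PlaceOrder` operation are hypothetical names chosen for illustration; the pattern, not the bookstore logic, is the point.

```csharp
using System;

// Fine-grained objects behind the façade (hypothetical names).
class Inventory { public bool Reserve(string isbn) => true; }
class Billing   { public void Charge(decimal amount) { /* charge account */ } }
class Shipping  { public void Schedule(string isbn) { /* plan delivery */ } }

// The façade exposes one coarse-grained entry point; callers in the
// presentation layer never touch Inventory, Billing, or Shipping directly.
class BookStoreFacade
{
    private readonly Inventory inventory = new Inventory();
    private readonly Billing billing = new Billing();
    private readonly Shipping shipping = new Shipping();

    public bool PlaceOrder(string isbn, decimal price)
    {
        if (!inventory.Reserve(isbn)) return false;
        billing.Charge(price);
        shipping.Schedule(isbn);
        return true;
    }
}
```

Because the presentation layer depends only on `BookStoreFacade`, the fine-grained classes behind it can be refactored freely.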
Program flow control, also referred to as decision flow, concerns how you design the services on your application logic layer or, as you’ve seen in the previous paragraph, how you design the methods on your façade.
There are two approaches to organizing your services:
When organizing services based on the actions of the user, you will be implementing application logic by offering services, each of which handles a specific request from the presentation layer. This is also known as the transaction script pattern. This approach is popular because it is simple and feels very natural. Examples of methods that follow this approach are BookStoreService.AddNewOrder(Order order) and BookStoreService.CancelOrder(int orderId).
The logic needed to perform the action is implemented quite sequentially within the method, making it very readable but harder to reuse. Using additional design patterns, such as the table module pattern, can help increase reusability.
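A transaction script service might look like the following sketch. The in-memory dictionary stands in for real persistence, and the class members are hypothetical; what matters is that each user action gets its own method with the steps written out inline.

```csharp
using System;
using System.Collections.Generic;

class Order
{
    public int Id { get; set; }
    public List<string> Isbns { get; } = new List<string>();
}

// Transaction script style: one service method per user action,
// each implemented as a readable, sequential script (hypothetical sketch).
class BookStoreService
{
    // Stand-in for the data access layer.
    private readonly Dictionary<int, Order> store = new Dictionary<int, Order>();

    public void AddNewOrder(Order order)
    {
        // Validate, then persist -- each step inline and easy to follow,
        // but not easily reused from other scripts.
        if (order.Isbns.Count == 0) throw new ArgumentException("empty order");
        store[order.Id] = order;
    }

    public void CancelOrder(int orderId) => store.Remove(orderId);

    public bool HasOrder(int orderId) => store.ContainsKey(orderId);
}
```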
It is also possible to implement the decision flow of the application in a much more state-driven fashion. The services offered by the application server are more generic in nature, for example, BookStoreService.SaveOrder(Order order). This method will look at the state of the order and decide whether to add a new order or cancel an existing order.
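The state-driven alternative can be sketched as follows. The `OrderState` values and the string results are illustrative assumptions; the essential idea is that a single generic `SaveOrder` method inspects the order and chooses the action itself.

```csharp
using System;
using System.Collections.Generic;

enum OrderState { New, Changed, Cancelled }

class Order
{
    public int Id { get; set; }
    public OrderState State { get; set; } = OrderState.New;
}

// State-driven style: one generic service method decides what to do
// based on the state of the order it receives (hypothetical sketch).
class BookStoreService
{
    private readonly Dictionary<int, Order> store = new Dictionary<int, Order>();

    public string SaveOrder(Order order)
    {
        switch (order.State)
        {
            case OrderState.New:
                store[order.Id] = order;   // would generate an INSERT
                return "inserted";
            case OrderState.Cancelled:
                store.Remove(order.Id);    // would generate a DELETE
                return "deleted";
            default:
                store[order.Id] = order;   // would generate an UPDATE
                return "updated";
        }
    }
}
```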
Data Structure Designs
You must make a number of choices while designing your data structures. The first choice is the data storage mechanism, the second is the intended use of the data, and the third is the versioning requirements. There are three ways of looking at data structure designs:
- Services offer data; data is a reflection of the relational database.
- Data should be mapped to objects, and services offer access to objects.
- Data offered by services should be schema based.
Choose one of the three as the basis for your data flow structure early in the design process. Many companies have a guideline that mandates one of the three choices for all projects, but when possible, you should re-evaluate the options for each project and choose the optimal approach for the project at hand.
Choosing a Data Storage Mechanism
When designing your application, you will undoubtedly have to design some sort of data store. The following stores and forms of data storage are available:
- app.config file
- XML files
- Plaintext files
- Message Queuing
Each store has its own unique characteristics and might be suitable to specific requirements.
Designing the Data Flow
Data Flow Using ADO.NET
When implementing data-centric services in the application logic layer, you’ll design your data flow by using ADO.NET. The .NET Framework Class Library offers an extensive application programming interface (API) for handling data in managed code. Referred to as ADO.NET, the API can be found in the System.Data namespace. The complete separation of data carriers and data stores is an important design feature of ADO.NET. Classes such as the DataSet, DataTable, and DataRow are designed to hold data but retain no knowledge of where the data came from; they are data-source agnostic. A separate set of classes, such as SqlConnection, SqlDataAdapter, and SqlCommand, takes care of connecting to a data source, retrieving data, and populating the DataSet, DataTable, and DataRow. These classes are located in provider-specific namespaces such as System.Data.SqlClient, System.Data.OleDb, and System.Data.OracleClient. Depending on the data source you wish to connect to, you use the classes in the corresponding namespace, and depending on the completeness of the provider, these classes offer more or less functionality.
Because the DataSet is not connected to the data source, it can be quite successfully used for managing the data flow in an application. Figure 8-5 shows the flow of data when doing so.
Let’s do a walkthrough of this design and imagine that someone has logged on to your bookstore and has ordered three books. The presentation layer has managed the state of the shopping cart. The customer is ready to order and has provided all necessary data. He chooses to submit the order. The Web page transforms all data into a DataSet holding two DataTables, one for the order and one for the order lines; inserts one DataRow for the order; and inserts three DataRows for the order lines. The Web page then displays this data back to the user one more time, data binding controls against the DataSet, and asks, “Are you sure?” The user confirms the order, and it is submitted to the application logic layer. The application logic layer checks the DataSet to see that all mandatory fields have a value and performs a check to see whether the user has more than $1,000.00 in outstanding bills. If all is okay, the DataSet is passed on to the data access layer, which connects to the database and generates insert statements from the information in the DataSet.
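The data carrier in this walkthrough can be sketched directly with the System.Data classes. The table and column names are illustrative assumptions; note that the DataSet is built entirely in memory, with no knowledge of any database.

```csharp
using System;
using System.Data;

class CartToDataSet
{
    // Build the walkthrough's carrier: one Order row, three OrderLines rows.
    public static DataSet BuildOrder()
    {
        var ds = new DataSet("BookOrder");

        DataTable order = ds.Tables.Add("Order");
        order.Columns.Add("OrderId", typeof(int));
        order.Columns.Add("Customer", typeof(string));
        order.Rows.Add(1, "jdoe");

        DataTable lines = ds.Tables.Add("OrderLines");
        lines.Columns.Add("OrderId", typeof(int));
        lines.Columns.Add("Isbn", typeof(string));
        lines.Rows.Add(1, "111");
        lines.Rows.Add(1, "222");
        lines.Rows.Add(1, "333");

        // The data access layer would later generate INSERT statements from
        // these rows; the DataSet itself is data-source agnostic.
        return ds;
    }
}
```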
Using the DataSet in this manner is a fast and efficient way of building an application, drawing on the power of the Framework Class Library and the ability of ASP.NET to data bind controls such as the GridView against a DataSet. Instead of plain DataSet objects, you can use typed DataSet objects to improve the coding experience in both the presentation layer and the application logic layer. The advantage of this approach is also its disadvantage. Small changes in the data model do not necessarily force many methods to change their signatures, so in terms of maintenance, this works quite well. But remember that the presentation layer is not necessarily a user interface; it can just as well be a Web service. If you modify the definition of the DataSet, perhaps because you’re renaming a field in the database, you’re modifying the contract that underwrites the Web service. As you can imagine, this can lead to significant problems. The scenario works well if the presentation layer is just a user interface, but for interfaces to external systems or components, you will want to hide the inner workings of your application, transform the data into something other than a direct clone of your data model, and create Data Transfer Objects (DTOs).
Data Flow Using Object Relational Mapping
Data flow using ADO.NET is a very data-centric approach to managing the data flow. Data and logic are discrete. The other side of the spectrum is to take a more object-oriented approach. Here, classes are created to bundle data and behavior. The aim is to define classes that mimic data and behavior found in the business domain that the application is created for. The result is often referred to as a business object. The collection of business objects that make up the application is called the domain model. Some developers claim that a rich domain model is better for designing more-complex logic. It’s hard to prove or disprove any such claim. Just know that you have a choice, and it is up to you to make it.
Now do the same walkthrough as you did before; imagine that someone has logged on to your bookstore and has ordered three books. The presentation layer has managed the state of the shopping cart. The customer is ready to order and has provided all necessary data. He chooses to submit the order. The Web page transforms all data into a DTO holding data for one order with three order lines, creating the objects as needed. The Web page then displays this data back to the user one more time, data binding controls against the DTO using the ObjectDataSource in ASP.NET 2.0, and asks, “Are you sure?” The user confirms the choice, and the DTO is submitted to the application logic layer. The application logic layer transforms the DTO into a business object of type Order with a property holding three OrderLine objects. The method Order.Validate() is called to validate the order and check that all mandatory fields have a value, and a check is performed to identify whether the user has more than $1,000.00 in outstanding bills. To do this, the order will call Order.Customer.GetOutstandingBills(). If all is well, the Order.Save() method is called. The order will submit itself to the object relational mapping layer, where the order and order lines are mapped to a DataTable in a DataSet, and the DataSet is passed to the data access layer, which connects to the database and generates insert statements from the information in the DataSet. There are, of course, many ways in which object relational mapping can take place, but not all will include transformation to a DataSet. Some will create the insert statements directly but still use the data access layer to execute them.
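The DTO-to-business-object step of this walkthrough can be sketched as follows. The members shown (`LinePrices`, `FromDto`, the validation rule) are hypothetical simplifications of the chapter's Order and OrderLine objects: the DTO carries only data, while the business object adds behavior such as validation.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical DTO: data only, safe to expose to the presentation layer
// or a Web service without leaking behavior.
class OrderDto
{
    public int Id { get; set; }
    public List<decimal> LinePrices { get; set; } = new List<decimal>();
}

// Business object: data plus behavior, living in the application logic layer.
class Order
{
    public int Id { get; set; }
    public List<decimal> LinePrices { get; } = new List<decimal>();

    public static Order FromDto(OrderDto dto)
    {
        var order = new Order { Id = dto.Id };
        order.LinePrices.AddRange(dto.LinePrices);
        return order;
    }

    // Illustrative rule: an order needs at least one line, all priced above zero.
    public bool Validate() => LinePrices.Count > 0 && LinePrices.All(p => p > 0m);

    public decimal Total() => LinePrices.Sum();
}
```

The reverse transformation, from business object back to DTO, would be applied before returning results to the presentation layer.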
As you can see, quite a few transformations take place. The use of DTOs is needed because a business object implements behavior, and behavior is subject to change. To minimize the impact of these changes on the presentation layer, you transform the data: take it out of the business object and put it in a data transfer object. In Java, the data transfer object is normally referred to as a value object.
A big advantage of working with business objects is that it really helps organize your code. If you look back at a piece of complex logic, it is usually very readable because there is very little plumbing code. The disadvantage is that the majority of data stores are still relational, and the mapping of business objects to relational data can become quite complex.
You have just seen two opposite approaches to managing the data flow. Many variations are possible. A common one is the variant in which a DataSet is used as the basic data carrier from user interface to data store, but separate schemas (DTOs) are used for Web services that are called from other systems. The application layer transforms the relational data to a predefined schema. The main advantage of this is that any application that references the service does not depend on any internal implementation of the component. This allows more flexibility in versioning and backward compatibility of interfaces, and it lets you change the implementation of the component without changing the interface of the service.
Of course, you can use business objects in the Web application and skip the DTO transformation, but this usually works well only if the application logic is deployed together with the Web application. Remember that to call Order.Save(), you’ll need a database connection. Whether this is desirable is up to you as well as, probably, to your chief security officer.