Chapter 6: Software Testing with Example Process

What is Testing?

Testing is the process of evaluating a system or its component(s) with the intent of determining whether it satisfies the specified requirements. In simple words, testing is executing a system in order to identify any gaps, errors, or missing requirements relative to the actual requirements.

According to ANSI/IEEE 1059 standard, Testing can be defined as – A process of analyzing a software item to detect the differences between existing and required conditions (that is defects/errors/bugs) and to evaluate the features of the software item.

Who does Testing?

It depends on the process and the associated stakeholders of the project(s). In the IT industry, large companies have a team with responsibilities to evaluate the developed software in context of the given requirements. Moreover, developers also conduct testing which is called Unit Testing. In most cases, the following professionals are involved in testing a system within their respective capacities −

  • Software Tester
  • Software Developer
  • Project Lead/Manager
  • End User

Different companies have different designations for people who test the software on the basis of their experience and knowledge such as Software Tester, Software Quality Assurance Engineer, QA Analyst, etc.

Testing cannot happen at an arbitrary time during the development cycle. The next two sections describe when testing should be started and when it should end during the SDLC.

When to Start Testing?

An early start to testing reduces the cost and time needed to rework and produce error-free software that is delivered to the client. However, in the Software Development Life Cycle (SDLC), testing can be started from the Requirements Gathering phase and continued until the deployment of the software.

It also depends on the development model that is being used. For example, in the Waterfall model, formal testing is conducted in the testing phase; but in the incremental model, testing is performed at the end of every increment/iteration and the whole application is tested at the end.

Testing is done in different forms at every phase of SDLC −

  • During the requirement gathering phase, the analysis and verification of requirements are also considered as testing.
  • Reviewing the design in the design phase with the intent to improve the design is also considered as testing.
  • Testing performed by a developer on completion of the code is also categorized as testing.

When to Stop Testing?

It is difficult to determine when to stop testing, as testing is a never-ending process and no one can claim that a software is 100% tested. The following aspects are to be considered for stopping the testing process −

  • Testing Deadlines
  • Completion of test case execution
  • Completion of functional and code coverage to a certain point
  • Bug rate falls below a certain level and no high-priority bugs are identified
  • Management decision

Verification & Validation

These two terms are very confusing for most people, who use them interchangeably. The following table highlights the differences between verification and validation.

Sr.No. | Verification | Validation
1 | Verification addresses the concern: "Are you building it right?" | Validation addresses the concern: "Are you building the right thing?"
2 | Ensures that the software system meets all the specified functionality. | Ensures that the functionality meets the intended behavior.
3 | Takes place first and includes checking of documentation, code, etc. | Occurs after verification and mainly involves checking of the overall product.
4 | Done by developers. | Done by testers.
5 | Involves static activities: collecting reviews, walkthroughs, and inspections to verify a software. | Involves dynamic activities: executing the software against the requirements.
6 | An objective process; no subjective decision should be needed to verify a software. | A subjective process that involves subjective decisions on how well a software works.

Software Testing – Myths

Given below are some of the most common myths about software testing.

Myth 1: Testing is Too Expensive

Reality − There is a saying: pay less for testing during software development, or pay more for maintenance or correction later. Early testing saves both time and cost in many respects; however, reducing cost by cutting out testing may result in the improper design of a software application, rendering the product useless.

Myth 2: Testing is Time-Consuming

Reality − During the SDLC phases, testing is never a time-consuming process. However, diagnosing and fixing the errors identified during proper testing is a time-consuming but productive activity.

Myth 3: Only Fully Developed Products are Tested

Reality − No doubt, testing depends on the source code, but reviewing requirements and developing test cases is independent of the developed code. However, an iterative or incremental development life cycle model may reduce the dependency of testing on fully developed software.

Myth 4: Complete Testing is Possible

Reality − It becomes an issue when a client or tester thinks that complete testing is possible. Even if all identified paths have been tested by the team, complete testing is never achieved: there might be scenarios that are never executed by the test team or the client during the software development life cycle but that are executed once the project has been deployed.

Myth 5: A Tested Software is Bug-Free

Reality − This is a very common myth that clients, project managers, and the management team believe in. No one can claim with absolute certainty that a software application is 100% bug-free, even if a tester with superb testing skills has tested the application.

Myth 6: Missed Defects are due to Testers

Reality − It is not a correct approach to blame testers for bugs that remain in the application even after testing has been performed. This myth ignores the constraints of time, cost, and changing requirements. However, the test strategy may also result in bugs being missed by the testing team.

Myth 7: Testers are Responsible for Quality of Product

Reality − It is a very common misinterpretation that only testers or the testing team should be responsible for product quality. Testers' responsibility is to identify bugs and report them to the stakeholders; it is then the stakeholders' decision whether to fix the bugs or release the software. Releasing the software anyway puts more pressure on the testers, as they will be blamed for any error.

Myth 8: Test Automation should be used wherever possible to Reduce Time

Reality − Yes, it is true that test automation reduces testing time, but it is not possible to start test automation at any arbitrary time during software development. Test automation should be started once the software has been manually tested and is stable to some extent. Moreover, test automation can never be used effectively if requirements keep changing.

Myth 9: Anyone can Test a Software Application

Reality − People outside the IT industry think, and even believe, that anyone can test a software and that testing is not a creative job. However, testers know very well that this is a myth. Thinking up alternative scenarios and trying to crash a software with the intent of exploring potential bugs is not possible for the person who developed it.

Myth 10: A Tester’s only Task is to Find Bugs

Reality − Finding bugs in a software is the task of the testers, but at the same time, they are domain experts of the particular software. Developers are only responsible for the specific component or area that is assigned to them but testers understand the overall workings of the software, what the dependencies are, and the impacts of one module on another module.

Testing and Debugging

Testing − It involves identifying bug/error/defect in a software without correcting it. Normally professionals with a quality assurance background are involved in bugs identification. Testing is performed in the testing phase.

Debugging − It involves identifying, isolating, and fixing the problems/bugs. Developers who code the software conduct debugging upon encountering an error in the code. Debugging is a part of White Box Testing or Unit Testing. Debugging can be performed in the development phase while conducting Unit Testing or in phases while fixing the reported bugs.

Software Testing – QA, QC & Testing

Testing, Quality Assurance, and Quality Control

Most people get confused when it comes to pinning down the differences among Quality Assurance, Quality Control, and Testing. Although they are interrelated and, to some extent, can be considered the same activities, there are distinguishing points that set them apart. The following table lists the points that differentiate QA, QC, and Testing.

Quality Assurance | Quality Control | Testing
QA includes activities that ensure the implementation of processes, procedures, and standards in the context of verifying developed software against the intended requirements. | Includes activities that ensure the verification of developed software against documented (or, in some cases, undocumented) requirements. | Includes activities that ensure the identification of bugs/errors/defects in a software.
Focuses on processes and procedures rather than on conducting actual testing of the system. | Focuses on actual testing by executing the software, with the aim of identifying bugs/defects through the implementation of procedures and processes. | Focuses on actual testing.
Process-oriented activities. | Product-oriented activities. | Product-oriented activities.
Preventive activities. | A corrective process. | A preventive process.
A subset of the Software Test Life Cycle (STLC). | QC can be considered a subset of Quality Assurance. | Testing is a subset of Quality Control.

Software Testing – Types of Testing

This section describes the different types of testing that may be used to test a software during SDLC.

Manual Testing

Manual testing includes testing a software manually, i.e., without using any automated tool or any script. In this type, the tester takes over the role of an end-user and tests the software to identify any unexpected behavior or bug. There are different stages for manual testing such as unit testing, integration testing, system testing, and user acceptance testing.

Testers use test plans, test cases, or test scenarios to test a software to ensure the completeness of testing. Manual testing also includes exploratory testing, as testers explore the software to identify errors in it.

Automation Testing

Automation testing, also known as Test Automation, is when the tester writes scripts and uses another software to test the product. This process involves the automation of a manual process. Automation testing is used to re-run, quickly and repeatedly, test scenarios that were originally performed manually.

Apart from regression testing, automation testing is also used to test the application from load, performance, and stress point of view. It increases the test coverage, improves accuracy, and saves time and money in comparison to manual testing.
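
For example, the minimal sketch below automates a simple login check with Selenium WebDriver in Python; once written, it can be re-run on every build. The URL and element IDs are hypothetical placeholders, not part of any real application.

    # Automated re-run of a manual login check using Selenium WebDriver.
    # The URL and element IDs below are hypothetical placeholders.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()      # start a browser session
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        # This check can now be repeated quickly on every build.
        assert "Dashboard" in driver.title, "login did not reach the dashboard"
    finally:
        driver.quit()                # always close the browser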

What to Automate?

It is not possible to automate everything in a software. Areas where a user can make transactions, such as login or registration forms, and any area where a large number of users can access the software simultaneously, should be automated.

Furthermore, all GUI items, connections with databases, field validations, etc. can be efficiently tested by automating the manual process.

When to Automate?

Test Automation should be used by considering the following aspects of a software −

  • Large and critical projects
  • Projects that require testing the same areas frequently
  • Requirements not changing frequently
  • Accessing the application for load and performance with many virtual users
  • Stable software with respect to manual testing
  • Availability of time

How to Automate?

Automation is done by using a supporting scripting language like VBScript and an automated software application. There are many tools available that can be used to write automation scripts. Before mentioning the tools, let us identify the process that can be used to automate the testing process −

  • Identifying areas within a software for automation
  • Selection of appropriate tool for test automation
  • Writing test scripts
  • Development of test suites
  • Execution of scripts
  • Create result reports
  • Identify any potential bug or performance issues

Software Testing Tools

The following tools can be used for automation testing −

  • HP Quick Test Professional
  • Selenium
  • IBM Rational Functional Tester
  • SilkTest
  • TestComplete
  • Testing Anywhere
  • WinRunner
  • LoadRunner
  • Visual Studio Test Professional
  • WATIR

Software Testing – Methods

There are different methods that can be used for software testing. This chapter briefly describes the methods available.

Black-Box Testing

The technique of testing without having any knowledge of the interior workings of the application is called black-box testing. The tester is oblivious to the system architecture and does not have access to the source code. Typically, while performing a black-box test, a tester will interact with the system’s user interface by providing inputs and examining outputs without knowing how and where the inputs are worked upon.
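
As a minimal illustration, the checks below exercise a hypothetical discount function purely through its specification (10% off orders of 100 or more), with no reference to how it is implemented; the function itself is shown only so the sketch is runnable.

    # Implementation under test -- hidden from a black-box tester.
    def discount(total):
        return total * 0.9 if total >= 100 else total

    # Black-box checks derived from the specification alone:
    # "orders of 100 or more get 10% off".
    assert discount(50) == 50      # below the threshold: no discount
    assert discount(100) == 90     # boundary value: discount applies
    assert discount(200) == 180    # typical value above the threshold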

The following table lists the advantages and disadvantages of black-box testing.

Advantages | Disadvantages
Well suited and efficient for large code segments. | Limited coverage, since only a selected number of test scenarios is actually performed.
Code access is not required. | Inefficient testing, since the tester has only limited knowledge of the application.
Clearly separates the user's perspective from the developer's perspective through visibly defined roles. | Blind coverage, since the tester cannot target specific code segments or error-prone areas.
Large numbers of moderately skilled testers can test the application with no knowledge of implementation, programming language, or operating system. | The test cases are difficult to design.

White-Box Testing

White-box testing is the detailed investigation of the internal logic and structure of the code. White-box testing is also called glass testing or open-box testing. In order to perform white-box testing on an application, a tester needs to know the internal workings of the code.

The tester needs to have a look inside the source code and find out which unit/chunk of the code is behaving inappropriately.

The following table lists the advantages and disadvantages of white-box testing.

Advantages | Disadvantages
As the tester has knowledge of the source code, it becomes very easy to find out which types of data can help in testing the application effectively. | Since a skilled tester is needed to perform white-box testing, the costs are higher.
It helps in optimizing the code. | Sometimes it is impossible to look into every nook and corner to find hidden errors, as many paths will go untested.
Extra lines of code that could bring in hidden defects can be removed. | White-box testing is difficult to maintain, as it requires specialized tools like code analyzers and debugging tools.
Due to the tester's knowledge of the code, maximum coverage is attained during test scenario writing. |

Grey-Box Testing

Grey-box testing is a technique to test the application with limited knowledge of its internal workings. In software testing, the phrase "the more you know, the better" carries a lot of weight while testing an application.

Mastering the domain of a system always gives the tester an edge over someone with limited domain knowledge. Unlike black-box testing, where the tester only tests the application’s user interface; in grey-box testing, the tester has access to design documents and the database.

Having this knowledge, a tester can prepare better test data and test scenarios while making a test plan.

Advantages | Disadvantages
Offers the combined benefits of black-box and white-box testing wherever possible. | Since access to the source code is not available, the ability to go over the code and test coverage is limited.
Grey-box testers don't rely on the source code; instead they rely on interface definitions and functional specifications. | The tests can be redundant if the software designer has already run a test case.
Based on the limited information available, a grey-box tester can design excellent test scenarios, especially around communication protocols and data type handling. | Testing every possible input stream is unrealistic because it would take an unreasonable amount of time; therefore, many program paths will go untested.
The test is done from the point of view of the user and not the designer. |

A Comparison of Testing Methods

The following table lists the points that differentiate black-box testing, grey-box testing, and white-box testing.

Black-Box Testing | Grey-Box Testing | White-Box Testing
The internal workings of an application need not be known. | The tester has limited knowledge of the internal workings of the application. | The tester has full knowledge of the internal workings of the application.
Also known as closed-box testing, data-driven testing, or functional testing. | Also known as translucent testing, as the tester has limited knowledge of the insides of the application. | Also known as clear-box testing, structural testing, or code-based testing.
Performed by end-users and also by testers and developers. | Performed by end-users and also by testers and developers. | Normally done by testers and developers.
Testing is based on external expectations; the internal behavior of the application is unknown. | Testing is done on the basis of high-level database diagrams and data flow diagrams. | Internal workings are fully known, and the tester can design test data accordingly.
The least exhaustive and the least time-consuming. | Partly exhaustive and time-consuming. | The most exhaustive and time-consuming type of testing.
Not suited for algorithm testing. | Not suited for algorithm testing. | Suited for algorithm testing.
Can only be done by trial and error. | Data domains and internal boundaries can be tested, if known. | Data domains and internal boundaries can be better tested.

Software Testing – Levels

There are different levels during the process of testing. In this chapter, a brief description is provided about these levels.

Levels of testing include different methodologies that can be used while conducting software testing. The main levels of software testing are −

  • Functional Testing
  • Non-functional Testing

Functional Testing

This is a type of black-box testing that is based on the specifications of the software to be tested. The application is tested by providing input, and the results are examined for conformance to the functionality the software was intended to provide. Functional testing of a software is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.

There are five steps that are involved while testing an application for functionality.

Steps | Description
I | The determination of the functionality that the intended application is meant to perform.
II | The creation of test data based on the specifications of the application.
III | The determination of the expected output based on the test data and the specifications of the application.
IV | The writing of test scenarios and the execution of test cases.
V | The comparison of actual and expected results based on the executed test cases.

An effective testing practice will see the above steps applied to the testing policies of every organization and hence it will make sure that the organization maintains the strictest of standards when it comes to software quality.

Unit Testing

This type of testing is performed by developers before the setup is handed over to the testing team to formally execute the test cases. Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. The developers use test data that is different from the test data of the quality assurance team.

The goal of unit testing is to isolate each part of the program and show that individual parts are correct in terms of requirements and functionality.
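
A minimal sketch of a unit test using Python's built-in unittest framework is shown below; the add_to_cart function is a hypothetical unit under test, not taken from any real system.

    import unittest

    def add_to_cart(cart, item, quantity):
        """Hypothetical unit under test: add an item to a cart dictionary."""
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        cart[item] = cart.get(item, 0) + quantity
        return cart

    class AddToCartTest(unittest.TestCase):
        def test_adds_new_item(self):
            self.assertEqual(add_to_cart({}, "pen", 2), {"pen": 2})

        def test_increments_existing_item(self):
            self.assertEqual(add_to_cart({"pen": 1}, "pen", 2), {"pen": 3})

        def test_rejects_non_positive_quantity(self):
            with self.assertRaises(ValueError):
                add_to_cart({}, "pen", 0)

    if __name__ == "__main__":
        unittest.main()

Each test isolates one behavior of the unit, so a failure points directly at the part of the program that is incorrect.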

Limitations of Unit Testing

Testing cannot catch each and every bug in an application. It is impossible to evaluate every execution path in every software application. The same is the case with unit testing.

There is a limit to the number of scenarios and test data that a developer can use to verify a source code. After having exhausted all the options, there is no choice but to stop unit testing and merge the code segment with other units.

Integration Testing

Integration testing is defined as the testing of combined parts of an application to determine if they function correctly. Integration testing can be done in two ways: Bottom-up integration testing and Top-down integration testing.

Sr.No. | Integration Testing Method
1 | Bottom-up integration: testing begins with unit testing, followed by tests of progressively higher-level combinations of units called modules or builds.
2 | Top-down integration: the highest-level modules are tested first, and progressively lower-level modules are tested thereafter.

In a comprehensive software development environment, bottom-up testing is usually done first, followed by top-down testing. The process concludes with multiple tests of the complete application, preferably in scenarios designed to mimic actual situations.

System Testing

System testing tests the system as a whole. Once all the components are integrated, the application as a whole is tested rigorously to see that it meets the specified Quality Standards. This type of testing is performed by a specialized testing team.

System testing is important because of the following reasons −

  • System testing is the first step in the Software Development Life Cycle, where the application is tested as a whole.
  • The application is tested thoroughly to verify that it meets the functional and technical specifications.
  • The application is tested in an environment that is very close to the production environment where the application will be deployed.
  • System testing enables us to test, verify, and validate both the business requirements as well as the application architecture.

Regression Testing

Whenever a change in a software application is made, it is quite possible that other areas within the application have been affected by this change. Regression testing is performed to verify that a fixed bug has not resulted in a violation of other functionality or business rules. The intent of regression testing is to ensure that a change, such as a bug fix, does not result in another fault being uncovered in the application.

Regression testing is important because of the following reasons −

  • Minimize gaps in testing when an application with changes has to be tested.
  • Test the new changes to verify that they did not affect any other area of the application.
  • Mitigate risks by performing regression testing on the application.
  • Increase test coverage without compromising timelines.
  • Increase speed to market the product.

Acceptance Testing

This is arguably the most important type of testing, as it is conducted by the Quality Assurance Team, who will gauge whether the application meets the intended specifications and satisfies the client's requirements. The QA team will have a set of pre-written scenarios and test cases that will be used to test the application.

More ideas will be shared about the application and more tests can be performed on it to gauge its accuracy and the reasons why the project was initiated. Acceptance tests are not only intended to point out simple spelling mistakes, cosmetic errors, or interface gaps, but also to point out any bugs in the application that will result in system crashes or major errors in the application.

By performing acceptance tests on an application, the testing team will deduce how the application will perform in production. There are also legal and contractual requirements for acceptance of the system.

Alpha Testing

This test is the first stage of testing and will be performed amongst the teams (developer and QA teams). Unit testing, integration testing, and system testing, when combined together, are known as alpha testing. During this phase, the following aspects will be tested in the application −

  • Spelling Mistakes
  • Broken Links
  • Unclear Directions
  • The Application will be tested on machines with the lowest specification to test loading times and any latency problems.

Beta Testing

This test is performed after alpha testing has been successfully performed. In beta testing, a sample of the intended audience tests the application. Beta testing is also known as pre-release testing. Beta test versions of software are ideally distributed to a wide audience on the Web, partly to give the program a “real-world” test and partly to provide a preview of the next release.

In this phase, the audience will be testing the following −

  • Users will install, run the application, and send their feedback to the project team.
  • Typographical errors, confusing application flow, and even crashes are reported.
  • Using this feedback, the project team can fix the problems before releasing the software to the actual users.
  • The more issues you fix that solve real user problems, the higher the quality of the application will be.
  • Having a higher-quality application when you release it to the general public will increase customer satisfaction.

Non-Functional Testing

This section covers testing an application for its non-functional attributes. Non-functional testing involves testing a software against requirements that are non-functional in nature but equally important, such as performance, security, user interface, etc.

Some of the important and commonly used non-functional testing types are discussed below.

Performance Testing

It is mostly used to identify bottlenecks or performance issues, rather than to find bugs in a software. There are different causes that contribute to lowering the performance of a software −

  • Network delay
  • Client-side processing
  • Database transaction processing
  • Load balancing between servers
  • Data rendering

Performance testing is considered one of the important and mandatory testing types, in terms of the following aspects −

  • Speed (i.e. Response Time, data rendering and accessing)
  • Capacity
  • Stability
  • Scalability

Performance testing can be either qualitative or quantitative and can be divided into different sub-types such as Load testing and Stress testing.

Load Testing

It is the process of testing the behavior of a software by applying maximum load, in terms of accessing the software and manipulating large input data. It can be done at both normal and peak load conditions. This type of testing identifies the maximum capacity of the software and its behavior at peak time.

Most of the time, load testing is performed with the help of automated tools such as LoadRunner, AppLoader, IBM Rational Performance Tester, Apache JMeter, Silk Performer, Visual Studio Load Test, etc.

Virtual users (VUsers) are defined in the automated testing tool and the script is executed to verify the load testing for the software. The number of users can be increased or decreased concurrently or incrementally based upon the requirements.
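
As a rough illustration of the virtual-user idea, the sketch below uses only the Python standard library to fire concurrent requests and record response times. Real load tests would use one of the dedicated tools named above; the URL here is a placeholder.

    # Toy load-test sketch using only the standard library.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8000/"   # hypothetical system under test
    VIRTUAL_USERS = 50               # number of concurrent virtual users

    def one_request(_):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as response:
            response.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        timings = list(pool.map(one_request, range(VIRTUAL_USERS)))

    print(f"mean response: {sum(timings) / len(timings):.3f}s, "
          f"worst: {max(timings):.3f}s")

Increasing or decreasing VIRTUAL_USERS corresponds to ramping the number of concurrent users up or down, as described above.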

Stress Testing

Stress testing includes testing the behavior of a software under abnormal conditions. For example, it may include taking away some resources or applying a load beyond the actual load limit.

The aim of stress testing is to test the software by applying the load to the system and taking over the resources used by the software to identify the breaking point. This testing can be performed by testing different scenarios such as −

  • Shutdown or restart of network ports randomly
  • Turning the database on or off
  • Running different processes that consume resources such as CPU, memory, server, etc.

Usability Testing

Usability testing is a black-box technique used to identify errors and improvements in the software by observing users as they use and operate it.

According to Nielsen, usability can be defined in terms of five factors: efficiency of use, learnability, memorability, errors/safety, and satisfaction. According to him, the usability of a product is good and the system is usable if it possesses these factors.

Nigel Bevan and Macleod considered that usability is the quality requirement that can be measured as the outcome of interactions with a computer system. This requirement can be fulfilled and the end-user will be satisfied if the intended goals are achieved effectively with the use of proper resources.

Molich in 2000 stated that a user-friendly system should fulfill the following five goals: easy to learn, easy to remember, efficient to use, satisfactory to use, and easy to understand.

In addition to the different definitions of usability, there are standards, quality models, and methods that define usability in the form of attributes and sub-attributes, such as ISO 9126, ISO 9241-11, ISO 13407, and IEEE Std 610.12.

UI vs Usability Testing

UI testing involves testing the Graphical User Interface of the software. UI testing ensures that the GUI functions according to the requirements and is tested in terms of color, alignment, size, and other properties.

On the other hand, usability testing ensures a good and user-friendly GUI that can be easily handled. UI testing can be considered as a sub-part of usability testing.

Security Testing

Security testing involves testing a software in order to identify any flaws and gaps from security and vulnerability point of view. Listed below are the main aspects that security testing should ensure −

  • Confidentiality
  • Integrity
  • Authentication
  • Availability
  • Authorization
  • Non-repudiation
  • Software is secure against known and unknown vulnerabilities
  • Software data is secure
  • Software is according to all security regulations
  • Input checking and validation
  • SQL insertion attacks (see the sketch after this list)
  • Injection flaws
  • Session management issues
  • Cross-site scripting attacks
  • Buffer overflows vulnerabilities
  • Directory traversal attacks
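
To make the "SQL insertion attacks" item concrete, the minimal sketch below contrasts a vulnerable string-built query with a safe parameterized one, using Python's built-in sqlite3 module and an in-memory database; the table and payload are illustrative only.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "' OR '1'='1"   # a classic injection payload

    # VULNERABLE: user input is pasted into the SQL text, so the payload
    # rewrites the WHERE clause and matches every row in the table.
    bad = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()

    # SAFE: a parameterized query treats the payload as an ordinary string.
    good = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

    print(len(bad), len(good))   # prints "1 0": only the first query is fooled

Security testing deliberately feeds such hostile inputs to the application and verifies that they are handled as plain data, not executable query text.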

Portability Testing         

Portability testing includes testing a software with the aim of ensuring its reusability and that it can be moved from one environment to another. The following strategies can be used for portability testing −

  • Transferring an installed software from one computer to another.
  • Building executable (.exe) to run the software on different platforms.

Portability testing can be considered as one of the sub-parts of system testing, as this testing type includes overall testing of a software with respect to its usage over different environments. Computer hardware, operating systems, and browsers are the major focus of portability testing. Some of the pre-conditions for portability testing are as follows −

  • Software should be designed and coded keeping in mind the portability requirements.
  • Unit testing has been performed on the associated components.
  • Integration testing has been performed.
  • The test environment has been established.

Software Testing – Documentation

Testing documentation involves the documentation of artifacts that should be developed before or during the testing of Software.

Documentation for software testing helps in estimating the testing effort required, test coverage, requirement tracking/tracing, etc. This section describes some of the commonly used documented artifacts related to software testing such as −

  • Test Plan
  • Test Scenario
  • Test Case
  • Traceability Matrix

Test Plan

A test plan outlines the strategy that will be used to test an application, the resources that will be used, the test environment in which testing will be performed, and the limitations of the testing and the schedule of testing activities. Typically the Quality Assurance Team Lead will be responsible for writing a Test Plan.

A test plan includes the following −

  • Introduction to the Test Plan document
  • Assumptions while testing the application
  • List of test cases included in testing the application
  • List of features to be tested
  • What sort of approach to use while testing the software
  • List of deliverables that need to be tested
  • The resources allocated for testing the application
  • Any risks involved during the testing process
  • A schedule of tasks and milestones to be achieved

Test Scenario

A test scenario is a one-line statement describing what area of the application will be tested. Test scenarios are used to ensure that all process flows are tested from end to end. A particular area of an application can have as little as one test scenario or a few hundred scenarios, depending on the magnitude and complexity of the application.

The terms 'test scenario' and 'test case' are sometimes used interchangeably; however, a test scenario has several steps, whereas a test case has a single step. Viewed from this perspective, test scenarios are test cases, but they include several test cases and the sequence in which they should be executed. Apart from this, each test is dependent on the output from the previous test.

Test Case        

Test cases involve a set of steps, conditions, and inputs that can be used while performing testing tasks. The main intent of this activity is to ensure whether a software passes or fails in terms of its functionality and other aspects. There are many types of test cases such as functional, negative, error, logical test cases, physical test cases, UI test cases, etc.

Furthermore, test cases are written to keep track of the testing coverage of a software. Generally, there are no formal templates that can be used during test case writing. However, the following components are always available and included in every test case −

  • Test case ID
  • Product module
  • Product version
  • Revision history
  • Purpose
  • Assumptions
  • Pre-conditions
  • Steps
  • Expected outcome
  • Actual outcome
  • Post-conditions

Many test cases can be derived from a single test scenario. In addition, multiple test cases are sometimes written for a single software; collectively, these are known as a test suite.
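
As an illustration, the components listed above can be captured in a simple structured record; the sketch below is one possible layout in Python, with hypothetical field values, not a standard template.

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        test_case_id: str
        product_module: str
        product_version: str
        purpose: str
        preconditions: list = field(default_factory=list)
        steps: list = field(default_factory=list)
        expected_outcome: str = ""
        actual_outcome: str = ""    # filled in during execution

    tc = TestCase(
        test_case_id="TC-LOGIN-001",
        product_module="Login",
        product_version="1.2.0",
        purpose="Valid credentials open the dashboard",
        preconditions=["User 'alice' exists"],
        steps=["Open login page", "Enter credentials", "Click Submit"],
        expected_outcome="Dashboard page is displayed",
    )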

Traceability Matrix

Traceability Matrix (also known as Requirement Traceability Matrix – RTM) is a table that is used to trace the requirements during the Software Development Life Cycle. It can be used for forward tracing (i.e. from Requirements to Design or Coding) or backward (i.e. from Coding to Requirements). There are many user-defined templates for RTM.

Each requirement in the RTM document is linked with its associated test case so that testing can be done as per the mentioned requirements. Furthermore, Bug ID is also included and linked with its associated requirements and test case. The main goals for this matrix are −

  • Make sure the software is developed as per the mentioned requirements.
  • Helps in finding the root cause of any bug.
  • Helps in tracing the developed documents during different phases of SDLC.
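
A minimal sketch of such a matrix as a Python mapping is shown below; all requirement, test case, and bug IDs are hypothetical.

    # Requirement IDs mapped to the test cases and bugs that trace to them.
    rtm = {
        "REQ-001": {"test_cases": ["TC-001", "TC-002"], "bugs": []},
        "REQ-002": {"test_cases": ["TC-003"], "bugs": ["BUG-017"]},
        "REQ-003": {"test_cases": [], "bugs": []},
    }

    # Forward tracing: which tests cover a given requirement?
    print(rtm["REQ-002"]["test_cases"])

    # A requirement with no linked test case is a coverage gap.
    gaps = [req for req, links in rtm.items() if not links["test_cases"]]
    print("untested requirements:", gaps)   # ['REQ-003']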

What is Basis Path Testing?

  • Basis path testing is a white-box testing technique first proposed by Tom McCabe [MCC76].
  • The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and to use this measure as a guide for defining a basis set of execution paths.
  • Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least once during testing.

Example:

    Function fn_delete_element(int value, int array_size, int array[])
    {
        int i;
        location = array_size + 1;              // node 1
        for i = 1 to array_size                 // node 2
            if (array[i] == value)              // node 3
                location = i;                   // node 4
            end if;
        end for;
        for i = location to array_size          // node 5
            array[i] = array[i + 1];            // node 6
        end for;
        array_size--;                           // node 7
    }

Steps to Calculate the independent paths

Step 1: Draw the flow graph of the function/program under consideration; the node numbers annotated in the code above label the nodes of this graph.

Step 2: Determine the independent paths.

Path 1: 1 – 2 – 5 – 7

Path 2: 1 – 2 – 5 – 6 – 5 – 7

Path 3: 1 – 2 – 3 – 2 – 5 – 6 – 5 – 7

Path 4: 1 – 2 – 3 – 4 – 2 – 5 – 6 – 5 – 7
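To connect the paths to runnable tests, the sketch below transcribes the routine into Python (using 0-based lists) and adds one assertion per distinct control-flow route through the two loops; this is an illustrative transcription, not the original pseudocode.

    def delete_element(value, array):
        """Python transcription of the routine above (0-based lists)."""
        location = len(array)                       # "not found" marker
        for i, item in enumerate(array):            # nodes 2-4: search loop
            if item == value:
                location = i
        for i in range(location, len(array) - 1):   # nodes 5-6: shift loop
            array[i] = array[i + 1]
        if location < len(array):                   # node 7: array_size--
            array.pop()
        return array

    # One assertion per distinct control-flow route through the two loops:
    assert delete_element(9, []) == []                    # neither loop entered
    assert delete_element(9, [1, 2]) == [1, 2]            # search loop only, no match
    assert delete_element(3, [1, 2, 3, 4]) == [1, 2, 4]   # match found, then shift
    assert delete_element(2, [2, 5, 2]) == [2, 5]         # duplicates: last match wins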

What is Cyclomatic Complexity?

Cyclomatic complexity is a source code complexity measurement that correlates with the number of coding errors. It is calculated by developing a control flow graph of the code and measures the number of linearly independent paths through a program module.

The lower a program's cyclomatic complexity, the lower the risk in modifying it and the easier it is to understand. It can be represented using the formula:

Cyclomatic complexity V(G) = E – N + 2, where

  E = number of edges in the flow graph, and

  N = number of nodes in the flow graph.

Example:

    IF A = 10 THEN
        IF B > C THEN
            A = B
        ELSE
            A = C
        ENDIF
    ENDIF
    Print A
    Print B
    Print C

Flow Graph:

The cyclomatic complexity is calculated using the above control flow diagram, which has seven nodes (shapes) and eight edges (lines); hence the cyclomatic complexity is 8 – 7 + 2 = 3.
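
The same arithmetic can be scripted; the sketch below counts E and N from an edge list and applies V(G) = E – N + 2. The edge list given is one plausible reading of the example's flow graph, since the figure itself is not reproduced here.

    # Counting E and N from an edge list of the control flow graph.
    # This edge list is one plausible reading of the example's graph:
    # 1 = IF A = 10, 2 = IF B > C, 3 = A = B, 4 = A = C,
    # 5 = ENDIF join, 6 = the Print statements, 7 = exit.
    flow_graph = [(1, 2), (2, 3), (2, 4), (3, 5),
                  (4, 5), (5, 6), (1, 6), (6, 7)]

    E = len(flow_graph)                                      # 8 edges
    N = len({node for edge in flow_graph for node in edge})  # 7 nodes
    print("V(G) =", E - N + 2)                               # 8 - 7 + 2 = 3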


Chapter 5: Cost Estimation Tutorial in Software Engineering

Cost Estimation Tutorial

Cost is a strategic concept in software development for the following reasons:

  1. Project management: Estimating cost is extremely crucial in carrying out project management activities such as scheduling, planning, and control.
  2. Feasibility study: Making investment decisions regarding software projects requires a full cost breakdown and analysis. Consequently, the identified recurring and one-time costs are incorporated into a financial feasibility study in terms of cost-benefit analysis.
  3. Cost reduction: Since software engineering aims to provide cost-effective software solutions to business problems, many process- and project-related activities are designed or re-engineered to achieve the goal of cost minimization.
  4. Evaluating business performance: Cost is an essential ingredient in calculating many of the financial ratios used to evaluate the financial performance of a business firm.
  5. Leverage: Cost plays a significant role in both the operating and the financial leverage with respect to risk and return. Relying on fixed costs as opposed to variable costs can boost the operating leverage, while financing with a high percentage of debt-based costs may boost the financial leverage.

Cost Estimation

Every year, more projects are doomed by poor cost and schedule estimates than by technical, political, or organizational problems. It's no wonder that so few companies realize that software cost estimating can be a science, not just an art. It has been shown that development life cycle costs and schedules can be accurately and consistently predicted for a broad array of software projects.

Though a vast body of knowledge exists today with respect to cost estimation techniques, most of these techniques view cost as a function of complexity, whether explicitly or implicitly. In early models, complexity means the project size or program volume, which can be estimated simply in kilo source lines of code (KSLOC). In later models, complexity is determined first by the inputs, outputs, interfaces, files, and queries that the software system needs. This complexity is then further adjusted via up to 14 different added-complexity factors. Eventually, the final result is converted, through a standard conversion table, to KSLOC.

In the basic cost estimation model, the calculation is straightforward: by determining the value of only two variables, the total effort in person-months can be calculated. These two variables are:

  • How many thousands of lines of code (KSLOC) must your programmers develop?
  • The effort required per KSLOC (i.e., the linear productivity factor)

Accordingly, multiplying these two variables together gives the person-months of effort required for the project, provided that the project is relatively small. Otherwise, an additional exponential size penalty factor has to be incorporated for larger project sizes. Person-months means the number of months required to complete the entire project if only one person were to carry out the mission. This underlying concept is the foundation of all software cost estimating models, especially those originating from Barry Boehm's famous COCOMO models.

COCOMO Sample Example

Suppose it is required to build a Web Development system consisting of 25,000 lines of code. How many person-months of effort would this take using just this equation if:

  1. The project size was relatively small
  2. The project size was considered large

Answer:

  1. For a relatively small project:

Effort = Productivity × KSLOC = 3.3 × 25 = 82.5 person-months

  2. For a large project:

Effort = Productivity × KSLOC^Penalty = 3.3 × 25^1.030 ≈ 90.86 person-months

It should be noted, however, that COCOMO formulas also have different modes, models, and versions, up to COCOMO II and the newer COCOTS.
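
The basic equation is easy to put into code. The minimal sketch below reproduces both answers above, using the Web Development factors from Tables 1 and 2 (productivity 3.3, size penalty 1.030); it is an illustration of the basic model only, not the full COCOMO II.

    def cocomo_effort(ksloc, productivity, penalty=1.0):
        # Effort (person-months) = productivity x KSLOC^penalty
        return productivity * ksloc ** penalty

    # Web Development factors from Tables 1 and 2: 3.3 and 1.030.
    print(cocomo_effort(25, 3.3))          # small project: 82.5
    print(cocomo_effort(25, 3.3, 1.030))   # large project: ~90.86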

Estimating software costs typically involves the following drivers:

  1. Complexity of the software project
  2. Size of the software project
  3. Effort needed to complete the project
  4. Time needed to complete the project
  5. Risks and uncertainties involved

Yet the risk driver is still not clearly incorporated in the majority of cost estimation models for software systems.

Despite several differences, most cost estimation models are more or less based on the following chain:

Complexity → Size

(Complexity determines software size in terms of KSLOC.)

Size → Effort

(KSLOC determines effort in person-months, given a linear productivity factor and an exponential size penalty factor.)

Effort → Time

(Effort determines time for a given mode and/or model.)

Time → Number of people required

(Time determines the number of people, i.e., a well-trained, full-time software development team.)

Standard conversion tables are widely adopted in cost estimation. Some of these tables are shown below.

Table 1. Common Values for the Linear Productivity Factor

Project Type | Linear Productivity Factor
COCOMO II Default | 2.94
Embedded Development | 2.58
E-commerce Development | 3.60
Web Development | 3.30
Military Development | 2.77

Table 2. Typical Size Penalty Factors for Various Project Types

Project Type | Exponential Size Penalty Factor
COCOMO II Default | 1.052
Embedded Development | 1.110
E-commerce Development | 1.030
Web Development | 1.030
Military Development | 1.072

Table 4. Lines of Code Per Function Point by Programming Language

Programming Language | LOC per Function Point
C++ | 53
Java | 46

(Only the two values used in the example below are reproduced here.)

Function Points Estimations

An alternative to estimating KSLOC directly is to estimate function points first, then use the standard table above, "Lines of Code Per Function Point by Programming Language", to convert them to KSLOC. Function points were first used by IBM to capture the complexity of a software system in terms of its SRS functionality and the way it interacts with its users.

How Do Function Points Work?

  1. Estimate the number of external inputs, external interface files, external outputs, external queries, and logical internal tables (files).
  2. Use the function point conversion factor table to get the total initial function points.
  3. Adjust the initial function points via the 14 complexity factors to obtain the final (adjusted) function points.
  4. Use the adjusted function points to obtain KSLOC.
  5. Use KSLOC to estimate effort, as explained in the COCOMO examples above.

FP Sample Example

Suppose the requirement specification for the Blood Bank Website Development project has been carefully analyzed and the following estimates have been obtained: there is a need for 11 inputs, 11 outputs, 7 inquiries, 22 files, and 6 external interfaces. Also assume that the outputs, queries, and files function point attributes are of low complexity, and all other function point attributes are of medium complexity.

The total complexity adjustment value is 4.

Make the following calculations, showing the full procedure in detail:

  1. What is the function point (FP) count for the blood bank project?
  2. What is the adjusted function point (AFP) count for the blood bank project?
  3. What is the approximate number of LOC in the following languages:
    a. the "C++" programming language
    b. the "Java" programming language
  4. Calculate the estimated effort in person-months, assuming that Java is used as the implementation language.

Answer

1. Calculating Function Points

Description | Low | Medium | High | Total
Inputs | ×3 | 11 × 4 = 44 | ×6 | 44
Outputs | 11 × 4 = 44 | ×5 | ×7 | 44
Queries | 7 × 3 = 21 | ×4 | ×6 | 21
Files | 22 × 7 = 154 | ×10 | ×15 | 154
Program Interfaces | ×5 | 6 × 7 = 42 | ×10 | 42
Total Unadjusted Function Points | | | | 305

2. Calculating Adjusted Function Points

Processing Complexity (PC) | 4
Adjusted Processing Complexity (PCA) | 0.65 + (0.01 × 4) = 0.69
Total Adjusted Function Points | 305 × 0.69 = 210.45
3. Approximate number of LOC:

a. "C++" programming language: LOC = 210.45 × 53 = 11,153.85 ≈ 11.15 KSLOC

b. "Java" programming language: LOC = 210.45 × 46 = 9,680.7 ≈ 9.68 KSLOC

4. Estimated effort calculation (using the Web Development factors):

Effort = Productivity × KSLOC^Penalty = 3.3 × 9.68^1.030 ≈ 34.2 person-months
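
The whole chain can be reproduced in a few lines; the sketch below uses the standard function point weights, the LOC-per-FP values of 53 (C++) and 46 (Java), and the Web Development COCOMO factors, and the corrected effort figure of about 34.2 person-months falls out of the same arithmetic.

    # Unadjusted function points, using the standard weights:
    ufp = (11 * 4     # inputs, medium complexity
           + 11 * 4   # outputs, low complexity
           + 7 * 3    # queries, low complexity
           + 22 * 7   # files, low complexity
           + 6 * 7)   # external interfaces, medium complexity

    pca = 0.65 + 0.01 * 4                # adjusted processing complexity
    afp = ufp * pca                      # 305 * 0.69 = 210.45
    java_ksloc = afp * 46 / 1000         # Java: 46 LOC per function point
    effort = 3.3 * java_ksloc ** 1.030   # Web Development COCOMO factors

    print(ufp, round(afp, 2), round(java_ksloc, 2), round(effort, 1))
    # prints: 305 210.45 9.68 34.2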

Chapter 4: Project Management with Example Procedures

Project management

Software project management is an essential part of software engineering. Projects need to be managed because professional software engineering is always subject to organizational budget and schedule constraints. The project manager's job is to ensure that the software project meets and overcomes these constraints as well as delivering high-quality software. Good management cannot guarantee project success. However, bad management usually results in project failure: the software may be delivered late, cost more than originally estimated, or fail to meet the expectations of customers.

The success criteria for project management obviously vary from project to project but, for most projects, important goals are:

  1. Deliver the software to the customer at the agreed time.
  2. Keep overall costs within budget.
  3. Deliver software that meets the customer's expectations.
  4. Maintain a happy and well-functioning development team.

These goals are not unique to software engineering but are the goals of all engineering projects. However, software engineering is different from other types of engineering in a number of ways that make software management particularly challenging.

Some of these differences are:

  1. The product is intangible. A manager of a shipbuilding or a civil engineering project can see the product being developed. If a schedule slips, the effect on the product is visible: parts of the structure are obviously unfinished. Software is intangible. It cannot be seen or touched. Software project managers cannot see progress by simply looking at the artifact that is being constructed. Rather, they rely on others to produce evidence that they can use to review the progress of the work.
  2. Large software projects are often 'one-off' projects. Large software projects are usually different in some ways from previous projects. Therefore, even managers who have a large body of previous experience may find it difficult to anticipate problems. Furthermore, rapid technological changes in computers and communications can make a manager's experience obsolete. Lessons learned from previous projects may not be transferable to new projects.
  3. Software processes are variable and organization-specific. The engineering process for some types of system, such as bridges and buildings, is well understood. However, software processes vary quite significantly from one organization to another. Although there has been significant progress in process standardization and improvement, we still cannot reliably predict when a particular software process is likely to lead to development problems. This is especially true when the software project is part of a wider systems engineering project.

It is impossible to write a standard job description for a software project manager. The job varies tremendously depending on the organization and the software product being developed. However, most managers take responsibility at some stage for some or all of the following activities:

  1. Project planning Project managers are responsible for planning, estimating and scheduling project development, and assigning people to tasks. They supervise the work to ensure that it is carried out to the required standards and monitor progress to check that the development is on time and within budget.
  2. Reporting Project managers are usually responsible for reporting on the progress of a project to customers and to the managers of the company developing the software. They have to be able to communicate at a range of levels, from detailed technical information to management summaries. They have to write concise, coherent documents that abstract critical information from detailed project reports. They must be able to present this information during progress reviews.
  3. Risk management Project managers have to assess the risks that may affect a project, monitor these risks, and take action when problems arise.
  4. People management Project managers are responsible for managing a team of people. They have to choose people for their team and establish ways of working that lead to effective team performance.
  5. Proposal writing The first stage in a software project may involve writing a proposal to win a contract to carry out an item of work. The proposal describes the objectives of the project and how it will be carried out. It usually includes cost and schedule estimates and justifies why the project contract should be awarded to a particular organization or team. Proposal writing is a critical task as the survival of many software companies depends on having enough proposals accepted and contracts awarded. There can be no set guidelines for this task; proposal writing is a skill that you acquire through practice and experience.

Risk management

Risk management is one of the most important jobs for a project manager. Risk management involves anticipating risks that might affect the project schedule or the quality of the software being developed, and then taking action to avoid these risks (Hall, 1998; Ould, 1999). You can think of a risk as something that you'd prefer not to have happen. Risks may threaten the project, the software that is being developed, or the organization. There are, therefore, three related categories of risk:

  1. Project risks Risks that affect the project schedule or resources. An example of a project risk is the loss of an experienced designer. Finding a replacement designer with appropriate skills and experience may take a long time and, consequently, the software design will take longer to complete.
  2. Product risks Risks that affect the quality or performance of the software being developed. An example of a product risk is the failure of a purchased component to perform as expected. This may affect the overall performance of the system so that it is slower than expected.
  3. Business risks Risks that affect the organization developing or procuring the software. For example, a competitor introducing a new product is a business risk. The introduction of a competitive product may mean that the assumptions made about sales of existing software products may be unduly optimistic.
Risk | Affects | Description
Staff turnover | Project | Experienced staff will leave the project before it is finished.
Management change | Project | There will be a change of organizational management with different priorities.
Hardware unavailability | Project | Hardware that is essential for the project will not be delivered on schedule.
Requirements change | Project and product | There will be a larger number of changes to the requirements than anticipated.
Specification delays | Project and product | Specifications of essential interfaces are not available on schedule.
Size underestimate | Project and product | The size of the system has been underestimated.
CASE tool underperformance | Product | CASE tools, which support the project, do not perform as anticipated.
Technology change | Business | The underlying technology on which the system is built is superseded by new technology.
Product competition | Business | A competitive product is marketed before the system is completed.
Figure 22.1 Examples of common project, product, and business risks

An outline of the process of risk management is illustrated in Figure 22.2. It involves several stages:

  1. Risk identification You should identify possible project, product, and business risks.
  2. Risk analysis You should assess the likelihood and consequences of these risks.
  3. Risk planning You should make plans to address the risk, either by avoiding it or minimizing its effects on the project.
  4. Risk monitoring You should regularly assess the risk and your plans for risk mitigation and revise these when you learn more about the risk. 

You should document the outcomes of the risk management process in a risk management plan. This should include a discussion of the risks faced by the project, an analysis of these risks, and information on how you propose to manage the risk if it seems likely to be a problem. The risk management process is an iterative process that continues throughout the project. Once you have drawn up an initial risk management plan, you monitor the situation to detect emerging risks. 

              Figure 22.2 The risk management process

Software pricing

In principle, the price of a software product to a customer is simply the cost of development plus profit for the developer. In practice, however, the relationship between the project cost and the price quoted to the customer is not usually so simple. When calculating a price, you should take broader organizational, economic, political, and business considerations into account, such as those shown in Figure 23.1.

                   Figure 23.1 Factors affecting software pricing

Project plans

In a plan-driven development project, a project plan sets out the resources available to the project, the work breakdown, and a schedule for carrying out the work. The plan should identify risks to the project and the software under development, and the approach that is taken to risk management. Although the specific details of project plans vary depending on the type of project and organization, plans normally include the following sections:

  1. Introduction This briefly describes the objectives of the project and sets out the constraints (e.g., budget, time, etc.) that affect the management of the project.
  2. Project organization This describes the way in which the development team is organized, the people involved, and their roles in the team.
  3. Risk analysis This describes possible project risks, the likelihood of these risks arising, and the risk reduction strategies that are proposed. 
  4. Hardware and software resource requirements This specifies the hardware and support software required to carry out the development. If hardware has to be bought, estimates of the prices and the delivery schedule may be included.
  5. Work breakdown This sets out the breakdown of the project into activities and identifies the milestones and deliverables associated with each activity. Milestones are key stages in the project where progress can be assessed; deliverables are work products that are delivered to the customer.
  6. Project schedule This shows the dependencies between activities, the estimated time required to reach each milestone, and the allocation of people to activities. 
  7. Monitoring and reporting mechanisms This defines the management reports that should be produced, when these should be produced, and the project monitoring mechanisms to be used.

As well as the principal project plan, which should focus on the risks to the projects and the project schedule, you may develop a number of supplementary plans to support other process activities such as testing and configuration management. Examples of possible supplementary plans are shown in Figure 23.2.

                     Figure 23.2 Project plan supplements

Project scheduling

Project scheduling is the process of deciding how the work in a project will be organized as separate tasks, and when and how these tasks will be executed. You estimate the calendar time needed to complete each task, the effort required, and who will work on the tasks that have been identified. You also have to estimate the resources needed to complete each task, such as the disk space required on a server, the time required on specialized hardware, such as a simulator, and what the travel budget will be. In terms of the planning stages that I discussed in the introduction of this chapter, an initial project schedule is usually created during the project startup phase. This schedule is then refined and modified during development planning.

Schedule representation

Project schedules may simply be represented in a table or spreadsheet showing the tasks, effort, expected duration, and task dependencies (Figure 23.5). However, this style of representation makes it difficult to see the relationships and dependencies between the different activities. For this reason, alternative graphical representations of project schedules have been developed that are often easier to read and understand.

Figure 23.4 The project scheduling process

There are two types of representation that are commonly used:

  1. Bar charts, which are calendar-based, show who is responsible for each activity, the expected elapsed time, and when the activity is scheduled to begin and end. Bar charts are sometimes called 'Gantt charts', after their inventor, Henry Gantt.
  2. Activity networks, which are network diagrams, show the dependencies between the different activities making up a project.

Normally, a project planning tool is used to manage project schedule information. These tools usually expect you to input project information into a table and will then create a database of project information. Bar charts and activity charts can then be generated automatically from this database.

Project activities are the basic planning element. Each activity has:

  1. A duration in calendar days or months.
  2. An effort estimate, which reflects the number of person-days or person-months to complete the work.
  3. A deadline by which the activity should be completed.
  4. A defined endpoint. This represents the tangible result of completing the activity. This could be a document, the holding of a review meeting, the successful execution of all tests, etc.
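The listing below is a minimal sketch of how such an activity might be represented in a planning tool; the field names and the example values are illustrative rather than taken from any particular tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Activity:
    """One basic planning element in a project schedule."""
    name: str
    duration_days: int         # elapsed calendar time
    effort_person_days: int    # total work; exceeds duration when people work in parallel
    deadline: date             # date by which the activity should be completed
    endpoint: str              # the tangible result of completing the activity
    depends_on: tuple = ()     # activities that must finish before this one starts

design = Activity(
    name="Architectural design",
    duration_days=15,
    effort_person_days=30,
    deadline=date(2024, 3, 1),
    endpoint="Architecture document reviewed",
    depends_on=("Requirements specification",),
)
print(design.endpoint)
```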

Figure 23.5 Tasks, durations, and dependencies

Figure 23.6 Activity bar chart

Problem 1:

  1. Draw the activity network diagram for the following tasks.
  2. Find the critical path and the estimated completion time.
  3. To shorten the project by three weeks, which tasks should be shortened, and what will be the estimated project cost?
| Activity | Preceding Activity | Normal Time | Crash Time | Normal Cost | Crash Cost | Weeks available for crashing | Cost for crashing per week |
| -------- | ------------------ | ----------- | ---------- | ----------- | ---------- | ---------------------------- | -------------------------- |
| A        | –                  | 4           | 2          | 10,000      | 11,000     |                              |                            |
| B        | A                  | 3           | 2          | 6,000       | 9,000      |                              |                            |
| C        | A                  | 2           | 1          | 4,000       | 6,000      |                              |                            |
| D        | B                  | 5           | 3          | 14,000      | 18,000     |                              |                            |
| E        | B, C               | 1           | 1          | 9,000       | 9,000      |                              |                            |
| F        | C                  | 3           | 2          | 7,000       | 8,000      |                              |                            |
| G        | E, F               | 4           | 2          | 13,000      | 25,000     |                              |                            |
| H        | D, E               | 4           | 1          | 11,000      | 18,000     |                              |                            |
| I        | H, G               | 6           | 5          | 20,000      | 24,000     |                              |                            |

Step 1: For each activity, compute the weeks available for crashing (normal time − crash time) and the cost of crashing per week, which is (crash cost − normal cost) ÷ (weeks available for crashing).

| Activity | Preceding Activity | Normal Time | Crash Time | Normal Cost | Crash Cost | Weeks available for crashing | Cost for crashing per week |
| -------- | ------------------ | ----------- | ---------- | ----------- | ---------- | ---------------------------- | -------------------------- |
| A        | –                  | 4           | 2          | 10,000      | 11,000     | 2                            | 500                        |
| B        | A                  | 3           | 2          | 6,000       | 9,000      | 1                            | 3,000                      |
| C        | A                  | 2           | 1          | 4,000       | 6,000      | 1                            | 2,000                      |
| D        | B                  | 5           | 3          | 14,000      | 18,000     | 2                            | 2,000                      |
| E        | B, C               | 1           | 1          | 9,000       | 9,000      | 0                            | 0                          |
| F        | C                  | 3           | 2          | 7,000       | 8,000      | 1                            | 1,000                      |
| G        | E, F               | 4           | 2          | 13,000      | 25,000     | 2                            | 6,000                      |
| H        | D, E               | 4           | 1          | 11,000      | 18,000     | 3                            | 2,333                      |
| I        | H, G               | 6           | 5          | 20,000      | 24,000     | 1                            | 4,000                      |
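The derived columns can be checked with a few lines of code. A minimal sketch follows; the function name is ours, and the figures are taken from activities A, D, and H in the table:

```python
def crash_columns(normal_time, crash_time, normal_cost, crash_cost):
    """Return (weeks available for crashing, cost for crashing per week)."""
    weeks = normal_time - crash_time
    if weeks == 0:
        return 0, 0   # the activity cannot be crashed at all (activity E)
    return weeks, (crash_cost - normal_cost) / weeks

print(crash_columns(4, 2, 10_000, 11_000))   # activity A -> (2, 500.0)
print(crash_columns(5, 3, 14_000, 18_000))   # activity D -> (2, 2000.0)
print(crash_columns(4, 1, 11_000, 18_000))   # activity H -> (3, ~2333.3)
```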

A – B – D – H – I = 22

A – B – E – H – I = 18

A – B – E – G – I = 18

A – C – E – H – I = 17

A – C – E – G – I = 17

A – C – F – G – I = 19

Here the critical path is A – B – D – H – I = 22 weeks, the longest path through the network.


1st week: crash activity A ($500). A is reduced to 3 weeks, so A – B – D – H – I = 21 weeks.

2nd week: crash activity A again ($500). A is reduced to 2 weeks, so A – B – D – H – I = 20 weeks.

3rd week: crash activity D ($2,000). D is reduced to 4 weeks, so A – B – D – H – I = 19 weeks.

A is crashed first because, at $500 per week, it is the cheapest activity on the critical path; once A has been fully crashed, D ($2,000 per week) is the cheapest remaining choice.

Normal project cost = 10,000 + 6,000 + 4,000 + 14,000 + 9,000 + 7,000 + 13,000 + 11,000 + 20,000

                           = $94,000

New project cost = 11,000 + 6,000 + 4,000 + 16,000 + 9,000 + 7,000 + 13,000 + 11,000 + 20,000

                           = $97,000

The $3,000 increase is the total crashing cost: $500 + $500 for the two weeks saved on A, plus $2,000 for the week saved on D.
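The arithmetic above can be verified mechanically. The following is a minimal sketch, with the dependency, duration, and crashing data transcribed from the tables into illustrative data structures, that enumerates every path through the network, finds the critical path, and picks the cheapest activity to crash first:

```python
# Dependency and duration data transcribed from the table (normal times, weeks).
successors = {"A": ["B", "C"], "B": ["D", "E"], "C": ["E", "F"],
              "D": ["H"], "E": ["H", "G"], "F": ["G"],
              "G": ["I"], "H": ["I"], "I": []}
duration = {"A": 4, "B": 3, "C": 2, "D": 5, "E": 1,
            "F": 3, "G": 4, "H": 4, "I": 6}
crash_cost_per_week = {"A": 500, "B": 3000, "C": 2000, "D": 2000,
                       "E": 0, "F": 1000, "G": 6000, "H": 2333, "I": 4000}
weeks_available = {"A": 2, "B": 1, "C": 1, "D": 2,
                   "E": 0, "F": 1, "G": 2, "H": 3, "I": 1}

def all_paths(node="A", prefix=()):
    """Yield every activity path from the start activity A to the end."""
    prefix = prefix + (node,)
    if not successors[node]:
        yield prefix
    for nxt in successors[node]:
        yield from all_paths(nxt, prefix)

lengths = {path: sum(duration[a] for a in path) for path in all_paths()}
critical = max(lengths, key=lengths.get)
print(critical, lengths[critical])   # ('A', 'B', 'D', 'H', 'I') 22

# Cheapest way to save one week: crash the lowest-cost activity on the
# critical path that still has crash weeks available (here, A at $500).
crashable = [a for a in critical if weeks_available[a] > 0]
print(min(crashable, key=lambda a: crash_cost_per_week[a]))   # A
```

Repeating the selection after each crash, and recomputing the path lengths, reproduces the week-by-week sequence above (A, A, then D).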

Problem 2: 


Chapter 3: Agile Software Development Method Process

Agile Software Development

Although there are many approaches to rapid software development, they share some fundamental characteristics:

  1. The processes of specification, design, and implementation are interleaved. There is no detailed system specification, and design documentation is minimized or generated automatically by the programming environment used to implement the system. The user requirements document only defines the most important characteristics of the system.
  2. The system is developed in a series of versions. End-users and other system stakeholders are involved in specifying and evaluating each version. They may propose changes to the software and new requirements that should be implemented in a later version of the system.
  3. System user interfaces are often developed using an interactive development system that allows the interface design to be quickly created by drawing and placing icons on the interface. The system may then generate a web-based interface for a browser or an interface for a specific platform such as Microsoft Windows. 

Agile methods are incremental development methods in which the increments are small and, typically, new releases of the system are created and made available to customers every two or three weeks. They involve customers in the development process to get rapid feedback on changing requirements. They minimize documentation by using informal communications rather than formal meetings with written documents.

Agile methods:

Agile methods have been very successful for some types of system development:

  1. Product development where a software company is developing a small or medium-sized product for sale.
  2. Custom system development within an organization, where there is a clear commitment from the customer to become involved in the development process and where there are not a lot of external rules and regulations that affect the software. 
| Principle | Description |
| --------- | ----------- |
| Customer involvement | Customers should be closely involved throughout the development process. Their role is to provide and prioritize new system requirements and to evaluate the iterations of the system. |
| Incremental delivery | The software is developed in increments, with the customer specifying the requirements to be included in each increment. |
| People not process | The skills of the development team should be recognized and exploited. Team members should be left to develop their own ways of working without prescriptive processes. |
| Embrace change | Expect the system requirements to change, and design the system so that it can accommodate these changes. |
| Maintain simplicity | Focus on simplicity in both the software being developed and in the development process. Wherever possible, actively work to eliminate complexity from the system. |

The principles underlying agile methods are sometimes difficult to realize:

  1. Although the idea of customer involvement in the development process is an attractive one, its success depends on having a customer who is willing and able to spend time with the development team and who can represent all system stakeholders. Frequently, the customer representatives are subject to other pressures and cannot take full part in the software development.
  2. Individual team members may not have suitable personalities for the intense involvement that is typical of agile methods, and therefore not interact well with other team members.
  3. Prioritizing changes can be extremely difficult, especially in systems for which there are many stakeholders. Typically, each stakeholder gives different priorities to different changes.
  4. Maintaining simplicity requires extra work. Under pressure from delivery schedules, the team members may not have time to carry out desirable system simplifications.
  5. Many organizations, especially large companies, have spent years changing their culture so that processes are defined and followed. It is difficult for them to move to a working model in which processes are informal and defined by development teams.

Another non-technical problem—that is a general problem with incremental development and delivery—occurs when the system customer uses an outside organization for system development. The software requirements document is usually part of the contract between the customer and the supplier. Because incremental specification is inherent in agile methods, writing contracts for this type of development may be difficult.

There are two questions that should be considered when thinking about agile methods and maintenance:

  1. Are systems that are developed using an agile approach maintainable, given the emphasis in the development process on minimizing formal documentation?
  2. Can agile methods be used effectively for evolving a system in response to customer change requests?

Plan-Driven and Agile Development

  • Agile approaches to software development consider design and implementation to be the central activities in the software process. They incorporate other activities, such as requirements elicitation and testing, into design and implementation. By contrast, a plan-driven approach to software engineering identifies separate stages in the software process with outputs associated with each stage. The outputs from one stage are used as a basis for planning the following process activity. Figure 3.2 shows the distinctions between plan-driven and agile approaches to system specification.

  • In a plan-driven approach, iteration occurs within activities with formal documents used to communicate between stages of the process. For example, the requirements will evolve and, ultimately, a requirements specification will be produced. This is then an input to the design and implementation process. In an agile approach, iteration occurs across activities. Therefore, the requirements and the design are developed together, rather than separately.

A plan-driven software process can support incremental development and delivery. It is perfectly feasible to allocate requirements and plan the design and development phase as a series of increments. An agile process is not inevitably code-focused and it may produce some design documentation. As I discuss in the following section, the agile development team may decide to include a documentation 'spike', where, instead of producing a new version of a system, the team produce system documentation.

In fact, most software projects include practices from plan-driven and agile approaches. To decide on the balance between a plan-based and an agile approach, you have to answer a range of technical, human, and organizational questions:

  1. Is it important to have a very detailed specification and design before moving to implementation? If so, you probably need to use a plan-driven approach.
  2. Is an incremental delivery strategy, where you deliver the software to customers and get rapid feedback from them, realistic? If so, consider using agile methods.
  3. How large is the system that is being developed? Agile methods are most effective when the system can be developed with a small co-located team who can communicate informally. This may not be possible for large systems that require larger development teams, so a plan-driven approach may have to be used.
  4. What type of system is being developed? Systems that require a lot of analysis before implementation (e.g., real-time systems with complex timing requirements) usually need a fairly detailed design to carry out this analysis. A plan-driven approach may be best in those circumstances.
  5. What is the expected system lifetime? Long-lifetime systems may require more design documentation to communicate the original intentions of the system developers to the support team. However, supporters of agile methods rightly argue that documentation is frequently not kept up to date and it is not of much use for long-term system maintenance.
  6. What technologies are available to support system development? Agile methods often rely on good tools to keep track of an evolving design. If you are developing a system using an IDE that does not have good tools for program visualization and analysis, then more design documentation may be required.
  7. How is the development team organized? If the development team is distributed or if part of the development is being outsourced, then you may need to develop design documents to communicate across the development teams. You may need to plan in advance what these are.
  8. Are there cultural issues that may affect the system development? Traditional engineering organizations have a culture of plan-based development, as this is the norm in engineering. This usually requires extensive design documentation, rather than the informal knowledge used in agile processes.
  9. How good are the designers and programmers in the development team? It is sometimes argued that agile methods require higher skill levels than plan-based approaches in which programmers simply translate a detailed design into code. If you have a team with relatively low skill levels, you may need to use the best people to develop the design, with others responsible for programming.
  10. Is the system subject to external regulation? If a system has to be approved by an external regulator (e.g., the Federal Aviation Authority [FAA] approve software that is critical to the operation of an aircraft) then you will probably be required to produce detailed documentation as part of the system safety case. 

In reality, the issue of whether a project can be labeled as plan-driven or agile is not very important. Ultimately, the primary concern of buyers of a software system is whether or not they have an executable software system that meets their needs and does useful things for the individual user or the organization. In practice, many companies who claim to have used agile methods have adopted some agile practices and have integrated these with their plan-driven processes.

Extreme programming

Extreme programming (XP) is perhaps the best known and most widely used of the agile methods. The name was coined by Beck (2000) because the approach was developed by pushing recognized good practice, such as iterative development, to 'extreme' levels. For example, in XP, several new versions of a system may be developed by different programmers, integrated, and tested in a day.

In extreme programming, requirements are expressed as scenarios (called user stories), which are implemented directly as a series of tasks. Programmers work in pairs and develop tests for each task before writing the code. All tests must be successfully executed when new code is integrated into the system. There is a short time gap between releases of the system. Figure 3.3 illustrates the XP process to produce an increment of the system that is being developed.

Extreme programming involves a number of practices, summarized in Figure 3.4, which reflect the principles of agile methods:

  1. Incremental development is supported through small, frequent releases of the system. Requirements are based on simple customer stories or scenarios that are used as a basis for deciding what functionality should be included in a system increment.
  2. Customer involvement is supported through the continuous engagement of the customer in the development team. The customer representative takes part in the development and is responsible for defining acceptance tests for the system.
  3. People, not process, are supported through pair programming, collective ownership of the system code, and a sustainable development process that does not involve excessively long working hours.
  4. Change is embraced through regular system releases to customers, test-first development, refactoring to avoid code degeneration, and continuous integration of new functionality.
  5. Maintaining simplicity is supported by constant refactoring that improves code quality and by using simple designs that do not unnecessarily anticipate future changes to the system.
| Principle or practice | Description |
| --------------------- | ----------- |
| Incremental planning | Requirements are recorded on story cards, and the stories to be included in a release are determined by the time available and their relative priority. The developers break these stories into development 'tasks'. See Figures 3.5 and 3.6. |
| Small releases | The minimal useful set of functionality that provides business value is developed first. Releases of the system are frequent and incrementally add functionality to the first release. |
| Simple design | Enough design is carried out to meet the current requirements and no more. |
| Test-first development | An automated unit test framework is used to write tests for a new piece of functionality before that functionality itself is implemented. |
| Refactoring | All developers are expected to refactor the code continuously as soon as possible code improvements are found. This keeps the code simple and maintainable. |
| Pair programming | Developers work in pairs, checking each other's work and providing the support to always do a good job. |
| Collective ownership | The pairs of developers work on all areas of the system, so that no islands of expertise develop and all the developers take responsibility for all of the code. Anyone can change anything. |
| Continuous integration | As soon as the work on a task is complete, it is integrated into the whole system. After any such integration, all the unit tests in the system must pass. |
| Sustainable pace | Large amounts of overtime are not considered acceptable, as the net effect is often to reduce code quality and medium-term productivity. |
| On-site customer | A representative of the end-user of the system (the Customer) should be available full time for the use of the XP team. In an extreme programming process, the customer is a member of the development team and is responsible for bringing system requirements to the team for implementation. |

Figure 3.4 Extreme programming practices
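Test-first development is easiest to see in code. The following is a minimal sketch using Python's built-in unittest module (standing in here for JUnit-style frameworks); the dose-checking task and its limits are invented for illustration, echoing the prescribing-medication story discussed below. The tests are written first and define when the task is done:

```python
import unittest

# Written first: these tests define what 'done' means for the task.
class DoseCheckTests(unittest.TestCase):
    def test_dose_within_range_is_accepted(self):
        self.assertTrue(dose_is_safe(prescribed=20, minimum=10, maximum=30))

    def test_dose_outside_range_is_rejected(self):
        self.assertFalse(dose_is_safe(prescribed=40, minimum=10, maximum=30))

# Written second: just enough code to make the tests above pass.
def dose_is_safe(prescribed, minimum, maximum):
    return minimum <= prescribed <= maximum

if __name__ == "__main__":
    unittest.main()
```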

In an XP process, customers are intimately involved in specifying and prioritizing system requirements. The requirements are not specified as lists of required system functions. Rather, the system customer is part of the development team and discusses scenarios with other team members. Together, they develop a 'story card' that encapsulates the customer needs. The development team then aims to implement that scenario in a future release of the software. An example of a story card for the mental health care patient management system is shown in Figure 3.5. This is a short description of a scenario for prescribing medication for a patient.

The story cards are the main inputs to the XP planning process or the 'planning game'. Once the story cards have been developed, the development team breaks these down into tasks (Figure 3.6) and estimates the effort and resources required for implementing each task. This usually involves discussions with the customer to refine the requirements. The customer then prioritizes the stories for implementation, choosing those stories that can be used immediately to deliver useful business support. The intention is to identify useful functionality that can be implemented in about two weeks, when the next release of the system is made available to the customer.

Of course, as requirements change, the unimplemented stories change or may be discarded. If changes are required for a system that has already been delivered, new story cards are developed and, again, the customer decides whether these changes should have priority over new functionality.

Figure 3.5 A 'prescribing medication' story.
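The planning game can be pictured as a simple selection problem: given story cards with customer priorities and team effort estimates, choose the highest-priority stories that fit into a roughly two-week release. The story names, estimates, and capacity below are invented for illustration; real teams negotiate rather than apply a fixed rule.

```python
# Hypothetical story cards: customer priority (1 = highest) and the team's
# effort estimates in ideal working days.
stories = [
    {"name": "Prescribe medication", "priority": 1, "effort_days": 5},
    {"name": "View patient history", "priority": 2, "effort_days": 4},
    {"name": "Print drug warnings",  "priority": 3, "effort_days": 6},
    {"name": "Export audit log",     "priority": 4, "effort_days": 3},
]

def plan_release(stories, capacity_days=10):
    """Fill a roughly two-week release with the highest-priority stories."""
    selected, used = [], 0
    for story in sorted(stories, key=lambda s: s["priority"]):
        if used + story["effort_days"] <= capacity_days:
            selected.append(story["name"])
            used += story["effort_days"]
    return selected

print(plan_release(stories))   # ['Prescribe medication', 'View patient history']
```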

Sometimes, during the planning game, questions that cannot be easily answered come to light and additional work is required to explore possible solutions. The team may carry out some prototyping or trial development to understand the problem and solution. In XP terms, this is a 'spike', an increment where no programming is done. There may also be 'spikes' to design the system architecture or to develop system documentation.

Extreme programming takes an 'extreme' approach to incremental development. New versions of the software may be built several times per day and releases are delivered to customers roughly every two weeks. Release deadlines are never slipped; if there are development problems, the customer is consulted and functionality is removed from the planned release. When a programmer builds the system to create a new version, he or she must run all existing automated tests as well as the tests for the new functionality. The new build of the software is accepted only if all tests execute successfully. This then becomes the basis for the next iteration of the system.

A fundamental precept of traditional software engineering is that you should design for change. That is, you should anticipate future changes to the software and design it so that these changes can be easily implemented. Extreme programming, however, has discarded this principle on the basis that designing for change is often wasted effort. It isn't worth taking time to add generality to a program to cope with change. The changes anticipated often never materialize, and completely different change requests may actually be made. Therefore, the XP approach accepts that changes will happen and reorganizes the software when these changes actually occur.

A general problem with incremental development is that it tends to degrade the software structure, so changes to the software become harder and harder to implement. Essentially, the development proceeds by finding workarounds to problems, with the result that code is often duplicated, parts of the software are reused in inappropriate ways, and the overall structure degrades as code is added to the system. Extreme programming tackles this problem by suggesting that the software should be constantly refactored. This means that the programming team look for possible improvements to the software and implement them immediately. When a team member sees code that can be improved, they make these improvements even in situations where there is no immediate need for them. Examples of refactoring include the reorganization of a class hierarchy to remove duplicate code, the tidying up and renaming of attributes and methods, and the replacement of code with calls to methods defined in a program library. Program development environments, such as Eclipse (Carlson, 2005), include tools for refactoring which simplify the process of finding dependencies between code sections and making global code modifications.
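As a concrete illustration of the first kind of refactoring mentioned above (removing duplicate code), here is a hypothetical before-and-after sketch; the discount rule and function names are invented for illustration:

```python
# Before refactoring: the same discount rule is duplicated in two functions.
def invoice_total(prices):
    total = sum(prices)
    if total > 100:
        total = total * 0.9   # 10% discount on large orders
    return total

def quote_total(prices):
    total = sum(prices)
    if total > 100:
        total = total * 0.9   # duplicate of the rule above
    return total

# After refactoring: the duplicated logic is pulled into a single helper,
# so the discount rule now lives in exactly one place.
def apply_discount(total):
    return total * 0.9 if total > 100 else total

def invoice_total_refactored(prices):
    return apply_discount(sum(prices))

def quote_total_refactored(prices):
    return apply_discount(sum(prices))
```

A change to the discount rule now requires editing one function rather than hunting down every copy, which is exactly the maintainability benefit the text describes.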

In principle then, the software should always be easy to understand and change as new stories are implemented. In practice, this is not always the case. Sometimes development pressure means that refactoring is delayed because the time is devoted to the implementation of new functionality. Some new features and changes cannot readily be accommodated by code-level refactoring and require the architecture of the system to be modified. In practice, many companies that have adopted XP do not use all of the extreme programming practices listed in Figure 3.4. They pick and choose according to their local ways of working. For example, some companies find pair programming helpful; others prefer to use individual programming and reviews. To accommodate different levels of skill, some programmers don't do refactoring in parts of the system they did not develop, and conventional requirements may be used rather than user stories. However, most companies who have adopted an XP variant use small releases, test-first development, and continuous integration.

Agile project management

The principal responsibility of software project managers is to manage the project so that the software is delivered on time and within the planned budget for the project. They supervise the work of software engineers and monitor how well the software development is progressing.

The standard approach to project management is plan-driven. A plan-based approach really requires a manager to have a stable view of everything that has to be developed and the development processes. However, it does not work well with agile methods where the requirements are developed incrementally; where the software is delivered in short, rapid increments; and where changes to the requirements and the software are the norm. Like every other professional software development process, agile development has to be managed so that the best use is made of the time and resources available to the team. This requires a different approach to project management, which is adapted to incremental development and the particular strengths of agile methods.

Scrum approach

The Scrum approach (Schwaber, 2004; Schwaber and Beedle, 2001) is a general agile method but its focus is on managing iterative development rather than specific technical approaches to agile software engineering. Figure 3.8 is a diagram of the Scrum management process. Scrum does not prescribe the use of programming practices such as pair programming and test-first development. It can therefore be used with more technical agile approaches, such as XP, to provide a management framework for the project.

There are three phases in Scrum. The first is an outline planning phase where you establish the general objectives for the project and design the software architecture.

This is followed by a series of sprint cycles, where each cycle develops an increment of the system. Finally, the project closure phase wraps up the project, completes required documentation such as system help frames and user manuals, and assesses the lessons learned from the project.

The innovative feature of Scrum is its central phase, namely the sprint cycles. A Scrum sprint is a planning unit in which the work to be done is assessed, features are selected for development, and the software is implemented. At the end of a sprint, the completed functionality is delivered to stakeholders. Key characteristics of this process are as follows:

  1. Sprints are fixed length, normally 2–4 weeks. They correspond to the development of a release of the system in XP.
  2. The starting point for planning is the product backlog, which is the list of work to be done on the project. During the assessment phase of the sprint, this is reviewed, and priorities and risks are assigned. The customer is closely involved in this process and can introduce new requirements or tasks at the beginning of each sprint.
  3. The selection phase involves all of the project team who work with the customer to select the features and functionality to be developed during the sprint.
  4. Once these are agreed, the team organizes themselves to develop the software. Short daily meetings involving all team members are held to review progress and, if necessary, reprioritize work. During this stage the team is isolated from the customer and the organization, with all communications channelled through the so-called 'Scrum master'. The role of the Scrum master is to protect the development team from external distractions. The way in which the work is done depends on the problem and the team. Unlike XP, Scrum does not make specific suggestions on how to write requirements, test-first development, etc. However, these XP practices can be used if the team thinks they are appropriate.
  5. At the end of the sprint, the work done is reviewed and presented to stakeholders. The next sprint cycle then begins.

The idea behind Scrum is that the whole team should be empowered to make decisions, so the term 'project manager' has been deliberately avoided. Rather, the 'Scrum master' is a facilitator who arranges daily meetings, tracks the backlog of work to be done, records decisions, measures progress against the backlog, and communicates with customers and management outside of the team.

The whole team attends the daily meetings, which are sometimes 'stand-up' meetings to keep them short and focused. During the meeting, all team members share information, describe their progress since the last meeting, the problems that have arisen, and what is planned for the following day. This means that everyone on the team knows what is going on and, if problems arise, can replan short-term work to cope with them. Everyone participates in this short-term planning; there is no top-down direction from the Scrum master.
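A minimal sketch of the central Scrum artifact, the product backlog, and of the sprint selection phase may help make this concrete. The item names, estimates, capacity figure, and greedy selection rule below are all invented for illustration; real teams select work by discussion with the customer rather than by a fixed algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    description: str
    priority: int          # assigned during the sprint's assessment phase
    estimate_days: float   # the team's effort estimate

@dataclass
class Sprint:
    length_weeks: int = 2                      # sprints are fixed length, 2-4 weeks
    items: list = field(default_factory=list)

def select_sprint(product_backlog, team_capacity_days):
    """Selection phase: take the highest-priority items that fit the sprint."""
    sprint = Sprint()
    remaining = team_capacity_days
    for item in sorted(product_backlog, key=lambda i: i.priority):
        if item.estimate_days <= remaining:
            sprint.items.append(item)
            remaining -= item.estimate_days
    return sprint

backlog = [
    BacklogItem("User login", priority=1, estimate_days=4),
    BacklogItem("Audit trail", priority=3, estimate_days=6),
    BacklogItem("Search patient records", priority=2, estimate_days=5),
]
sprint = select_sprint(backlog, team_capacity_days=10)
print([item.description for item in sprint.items])
# ['User login', 'Search patient records']
```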

There are many anecdotal reports of the successful use of Scrum available on the Web. Rising and Janoff (2000) discuss its successful use in a telecommunication software development environment, and they list its advantages as follows:

  1. The product is broken down into a set of manageable and understandable chunks.
  2. Unstable requirements do not hold up progress.
  3. The whole team has visibility of everything and consequently team communication is improved.
  4. Customers see on-time delivery of increments and gain feedback on how the product works.
  5. Trust between customers and developers is established and a positive culture is created in which everyone expects the project to succeed.

Scrum, as originally designed, was intended for use with co-located teams where all team members could get together every day in stand-up meetings. However, much software development now involves distributed teams with team members located in different places around the world. Consequently, there are various experiments going on to develop Scrum for distributed development environments (Smits and Pshigoda, 2007; Sutherland et al., 2007).


Chapter 2: Software processes with various models

Objectives:

  • understand the concepts of software processes and software process models;
  • have been introduced to three generic software process models and when they might be used;
  • know about the fundamental process activities of software requirements engineering, software development, testing, and evolution;
  • understand why processes should be organized to cope with changes in the software requirements and design;
  • understand how the Rational Unified Process integrates good software engineering practice to create adaptable software processes.

Software processes

A software process is a set of related activities that leads to the production of a software product. These activities may involve the development of software from scratch in a standard programming language like Java or C. However, business applications are not necessarily developed in this way. New business software is now often developed by extending and modifying existing systems or by configuring and integrating off-the-shelf software or system components. There are many different software processes but all must include four activities that are fundamental to software engineering:

  1. Software specification The functionality of the software and constraints on its operation must be defined.
  2. Software design and implementation The software to meet the specification must be produced.
  3. Software validation The software must be validated to ensure that it does what the customer wants.
  4. Software evolution The software must evolve to meet changing customer needs.

In some form, these activities are part of all software processes. In practice, of course, they are complex activities in themselves and include sub-activities such as requirements validation, architectural design, unit testing, etc. There are also supporting process activities such as documentation and software configuration management. When we describe and discuss processes, we usually talk about the activities in these processes such as specifying a data model, designing a user interface, etc., and the ordering of these activities. However, as well as activities, process descriptions may also include:

  1. Products, which are the outcomes of a process activity. For example, the outcome of the activity of architectural design may be a model of the software architecture.
  2. Roles, which reflect the responsibilities of the people involved in the process. Examples of roles are project manager, configuration manager, programmer, etc.
  3. Pre- and post-conditions, which are statements that are true before and after a process activity has been enacted or a product produced. For example, before architectural design begins, a precondition may be that all requirements have been approved by the customer; after this activity is finished, a post-condition might be that the UML models describing the architecture have been reviewed.
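As an illustration of pre- and post-conditions attached to a process activity, here is a small sketch using the architectural-design example above; the function, its checks, and the model representation are hypothetical, not part of any process standard.

```python
def architectural_design(requirements_approved, produce_model):
    """Sketch of one process activity with explicit pre- and post-conditions."""
    # Pre-condition: all requirements have been approved by the customer.
    assert requirements_approved, "pre-condition: requirements not yet approved"

    architecture = produce_model()   # the product (outcome) of this activity

    # Post-condition: the UML models describing the architecture were reviewed.
    assert architecture["reviewed"], "post-condition: architecture model not reviewed"
    return architecture

# The architect role enacts the activity; here a stub stands in for the work.
model = architectural_design(
    requirements_approved=True,
    produce_model=lambda: {"diagrams": ["component", "deployment"], "reviewed": True},
)
print(model["diagrams"])
```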

Software processes are categorized as either plan-driven or agile processes. Plan-driven processes are processes where all of the process activities are planned in advance and progress is measured against this plan. In agile processes, planning is incremental and it is easier to change the process to reflect changing customer requirements. As Boehm and Turner (2003) discuss, each approach is suitable for different types of software. Generally, you need to find a balance between plan-driven and agile processes.

Software process models

A software process model is a simplified representation of a software process. Each process model represents a process from a particular perspective, and thus provides only partial information about that process. For example, a process activity model shows the activities and their sequence but may not show the roles of the people involved in these activities. In this section, I introduce a number of very general process models (sometimes called 'process paradigms') and present these from an architectural perspective. That is, we see the framework of the process but not the details of specific activities. These generic models are not definitive descriptions of software processes. Rather, they are abstractions of the process that can be used to explain different approaches to software development. You can think of them as process frameworks that may be extended and adapted to create more specific software engineering processes.

  1. The waterfall model This takes the fundamental process activities of specification, development, validation, and evolution and represents them as separate process phases such as requirements specification, software design, implementation, testing, and so on. 
  2. Incremental development This approach interleaves the activities of specification, development, and validation. The system is developed as a series of versions (increments), with each version adding functionality to the previous version.
  3. Reuse-oriented software engineering This approach is based on the existence of a significant number of reusable components. The system development process focuses on integrating these components into a system rather than developing them from scratch.

These models are not mutually exclusive and are often used together, especially for large systems development. For large systems, it makes sense to combine some of the best features of the waterfall and the incremental development models. You need to have information about the essential system requirements to design a software architecture to support these requirements. You cannot develop this incrementally. Sub-systems within a larger system may be developed using different approaches. Parts of the system that are well understood can be specified and developed using a waterfall-based process. Parts of the system which are difficult to specify in advance, such as the user interface, should always be developed using an incremental approach. 

The waterfall model

The first published model of the software development process was derived from more general system engineering processes (Royce, 1970). This model is illustrated in Figure 2.1. Because of the cascade from one phase to another, this model is known as the 'waterfall model' or software life cycle. The waterfall model is an example of a plan-driven process—in principle, you must plan and schedule all of the process activities before starting work on them.

The principal stages of the waterfall model directly reflect the fundamental development activities:

  1. Requirements analysis and definition The system's services, constraints, and goals are established by consultation with system users. They are then defined in detail and serve as a system specification.
  2. System and software design The systems design process allocates the requirements to either hardware or software systems by establishing an overall system architecture. Software design involves identifying and describing the fundamental software system abstractions and their relationships.
  3. Implementation and unit testing During this stage, the software design is realized as a set of programs or program units. Unit testing involves verifying that each unit meets its specification.
  4. Integration and system testing The individual program units or programs are integrated and tested as a complete system to ensure that the software requirements have been met. After testing, the software system is delivered to the customer.
  5. Operation and maintenance Normally (although not necessarily), this is the longest life cycle phase. The system is installed and put into practical use. Maintenance involves correcting errors which were not discovered in earlier stages of the life cycle, improving the implementation of system units, and enhancing the system's services as new requirements are discovered.

In principle, the result of each phase is one or more documents that are approved ('signed off'). The following phase should not start until the previous phase has finished. In practice, these stages overlap and feed information to each other. During design, problems with requirements are identified. During coding, design problems are found, and so on. The software process is not a simple linear model but involves feedback from one phase to another. Documents produced in each phase may then have to be modified to reflect the changes made.

Because of the costs of producing and approving documents, iterations can be costly and involve significant rework. Therefore, after a small number of iterations, it is normal to freeze parts of the development, such as the specification, and to continue with the later development stages. Problems are left for later resolution, ignored, or programmed around. This premature freezing of requirements may mean that the system won't do what the user wants. It may also lead to badly structured systems as design problems are circumvented by implementation tricks.

 During the final life cycle phase (operation and maintenance) the software is put into use. Errors and omissions in the original software requirements are discovered. Program and design errors emerge and the need for new functionality is identified. The system must therefore evolve to remain useful. Making these changes (software maintenance) may involve repeating previous process stages.

The waterfall model is consistent with other engineering process models and documentation is produced at each phase. This makes the process visible, so managers can monitor progress against the development plan. Its major problem is the inflexible partitioning of the project into distinct stages. Commitments must be made at an early stage in the process, which makes it difficult to respond to changing customer requirements.

In principle, the waterfall model should only be used when the requirements are well understood and unlikely to change radically during system development. However, the waterfall model reflects the type of process used in other engineering projects. As it is easier to use a common management model for the whole project, software processes based on the waterfall model are still commonly used.

An important variant of the waterfall model is formal system development, where a mathematical model of a system specification is created. This model is then refined, using mathematical transformations that preserve its consistency, into executable code. Based on the assumption that your mathematical transformations are correct, you can therefore make a strong argument that a program generated in this way is consistent with its specification. Formal development processes, such as that based on the B method (Schneider, 2001; Wordsworth, 1996), are particularly suited to the development of systems that have stringent safety, reliability, or security requirements. The formal approach simplifies the production of a safety or security case, which demonstrates to customers or regulators that the system actually meets its safety or security requirements. Processes based on formal transformations are generally only used in the development of safety-critical or security-critical systems. They require specialized expertise and, for the majority of systems, do not offer significant cost benefits over other approaches to system development.

Waterfall Model – Application

Every software product is different and requires a suitable SDLC approach to be followed based on internal and external factors. Some situations where the use of the Waterfall model is most appropriate are −

  • Requirements are very well documented, clear and fixed.
  • Product definition is stable.
  • Technology is understood and is not dynamic.
  • There are no ambiguous requirements.
  • Ample resources with required expertise are available to support the product.
  • The project is short.

Waterfall Model – Advantages

The advantages of waterfall development are that it allows for departmentalization and control. A schedule can be set with deadlines for each stage of development, and a product can proceed through the development process model phases one by one. Development moves from concept, through design, implementation, testing, installation, and troubleshooting, and ends up at operation and maintenance. Each phase of development proceeds in strict order. Some of the major advantages of the Waterfall Model are as follows −

  • Simple and easy to understand and use
  • Easy to manage due to the rigidity of the model. Each phase has specific deliverables and a review process.
  • Phases are processed and completed one at a time.
  • Works well for smaller projects where requirements are very well understood.
  • Clearly defined stages.
  • Well understood milestones.
  • Easy to arrange tasks.
  • Process and results are well documented.

Waterfall Model – Disadvantages

The disadvantage of waterfall development is that it does not allow much reflection or revision. Once an application is in the testing stage, it is very difficult to go back and change something that was not well documented or thought through in the concept stage.

The major disadvantages of the Waterfall Model are as follows −

  • No working software is produced until late during the life cycle.
  • High amounts of risk and uncertainty.
  • Not a good model for complex and object-oriented projects.
  • Poor model for long and ongoing projects.
  • Not suitable for the projects where requirements are at a moderate to high risk of changing. So, risk and uncertainty is high with this process model.
  • It is difficult to measure progress within stages.
  • Cannot accommodate changing requirements.
  • Adjusting scope during the life cycle can end a project.
  • Integration is done as a "big bang" at the very end, which does not allow any technological or business bottlenecks or challenges to be identified early.

Incremental development

Incremental development is based on the idea of developing an initial implementation, exposing this to user comment and evolving it through several versions until an adequate system has been developed (Figure 2.2). Specification, development, and validation activities are interleaved rather than separate, with rapid feedback across activities. Incremental software development, which is a fundamental part of agile approaches, is better than a waterfall approach for most business, e-commerce, and personal systems.

Incremental development reflects the way that we solve problems. We rarely work out a complete problem solution in advance but move toward a solution in a series of steps, backtracking when we realize that we have made a mistake. By developing the software incrementally, it is cheaper and easier to make changes in the software as it is being developed. Each increment or version of the system incorporates some of the functionality that is needed by the customer. Generally, the early increments of the system include the most important or most urgently required functionality. This means that the customer can evaluate the system at a relatively early stage in the development to see if it delivers what is required. If not, then only the current increment has to be changed and, possibly, new functionality defined for later increments.

Incremental development has three important benefits, compared to the waterfall model:

  1. The cost of accommodating changing customer requirements is reduced. The amount of analysis and documentation that has to be redone is much less than is required with the waterfall model.
  2. It is easier to get customer feedback on the development work that has been done. Customers can comment on demonstrations of the software and see how much has been implemented. Customers find it difficult to judge progress from software design documents.
  3. More rapid delivery and deployment of useful software to the customer is possible, even if all of the functionality has not been included. Customers are able to use and gain value from the software earlier than is possible with a waterfall process.

Incremental development in some form is now the most common approach for the development of application systems. This approach can be either plan-driven, agile, or, more usually, a mixture of these approaches. In a plan-driven approach, the system increments are identified in advance; if an agile approach is adopted, the early increments are identified but the development of later increments depends on progress and customer priorities. 

From a management perspective, the incremental approach has two problems:

  1. The process is not visible. Managers need regular deliverables to measure progress. If systems are developed quickly, it is not cost-effective to produce documents that reflect every version of the system.
  2. System structure tends to degrade as new increments are added. Unless time and money is spent on refactoring to improve the software, regular change tends to corrupt its structure. Incorporating further software changes becomes increasingly difficult and costly.

The problems of incremental development become particularly acute for large, complex, long-lifetime systems, where different teams develop different parts of the system. Large systems need a stable framework or architecture, and the responsibilities of the different teams working on parts of the system need to be clearly defined with respect to that architecture. This has to be planned in advance rather than developed incrementally.

You can develop a system incrementally and expose it to customers for comment, without actually delivering it and deploying it in the customer's environment. Incremental delivery and deployment means that the software is used in real, operational processes. This is not always possible, as experimenting with new software can disrupt normal business processes.

When to use Incremental models?

  • Requirements of the system are clearly understood.
  • When demand for an early release of the product arises.
  • When the software engineering team is not very well skilled or trained.
  • When high-risk features and goals are involved.
  • It is widely used for web applications and by product-based companies.

Reuse-oriented software engineering

In the majority of software projects, there is some software reuse. This often happens informally when people working on the project know of designs or code that are similar to what is required. They look for these, modify them as needed, and incorporate them into their system. This informal reuse takes place irrespective of the development process that is used. However, in the 21st century, software development processes that focus on the reuse of existing software have become widely used. Reuse-oriented approaches rely on a large base of reusable software components and an integrating framework for the composition of these components. Sometimes, these components are systems in their own right (COTS or commercial off-the-shelf systems) that may provide specific functionality such as word processing or a spreadsheet. A general process model for reuse-based development is shown in Figure 2.3. Although the initial requirements specification stage and the validation stage are comparable with other software processes, the intermediate stages in a reuse oriented process are different. These stages are:

  1. Component analysis Given the requirements specification, a search is made for components to implement that specification. Usually, there is no exact match and the components that may be used only provide some of the functionality required.
  2. Requirements modification During this stage, the requirements are analyzed using information about the components that have been discovered. They are then modified to reflect the available components. Where modifications are impossible, the component analysis activity may be re-entered to search for alternative solutions.
  3. System design with reuse During this phase, the framework of the system is designed or an existing framework is reused. The designers take into account the components that are reused and organize the framework to cater for this. Some new software may have to be designed if reusable components are not available.
  4. Development and integration Software that cannot be externally procured is developed, and the components and COTS systems are integrated to create the new system. System integration, in this model, may be part of the development process rather than a separate activity.

There are three types of software component that may be used in a reuse-oriented process:

  1. Web services that are developed according to service standards and which are available for remote invocation.
  2. Collections of objects that are developed as a package to be integrated with a component framework such as .NET or J2EE.
  3. Stand-alone software systems that are configured for use in a particular environment.
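The component analysis stage can be pictured as matching required features against a catalog of available components. The catalog, feature names, and matching rule below are invented for illustration; real component analysis is far less mechanical, but the sketch shows why exact matches are rare and why requirements modification usually follows.

```python
# Hypothetical component catalog: component name -> features it provides.
catalog = {
    "DocEngine":   {"word processing", "spell check"},
    "SheetEngine": {"spreadsheet", "charting"},
    "AuthService": {"login", "password reset"},
}

def component_analysis(required_features):
    """Rank catalog components by how much of the specification they cover."""
    matches = []
    for name, provided in catalog.items():
        covered = required_features & provided
        if covered:
            matches.append((name, covered, required_features - provided))
    return sorted(matches, key=lambda match: len(match[1]), reverse=True)

for name, covered, missing in component_analysis({"login", "charting", "audit log"}):
    print(f"{name}: covers {covered}; still needed: {missing}")
```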

Reuse-oriented software engineering has the obvious advantage of reducing the amount of software to be developed and so reducing cost and risks. It usually also leads to faster delivery of the software. However, requirements compromises are inevitable and this may lead to a system that does not meet the real needs of users. Furthermore, some control over the system evolution is lost as new versions of the reusable components are not under the control of the organization using them. 

Advantages:

  • It can reduce the total cost of software development.
  • The risk factor is low, since reuse reduces the amount of new software to be developed.
  • It can save a lot of time and effort.
  • It is efficient, since existing software is exploited rather than rewritten.

Disadvantages:

  • The reuse-oriented model rarely works in practice in its pure form.
  • Compromises in requirements may lead to a system that does not fulfill the real requirements of users.
  • Reused components may not be compatible with new versions of other components, which may have an adverse impact on system evolution.

Process activities

Real software processes are interleaved sequences of technical, collaborative, and managerial activities with the overall goal of specifying, designing, implementing, and testing a software system. Software developers use a variety of different software tools in their work. Tools are particularly useful for supporting the editing of different types of document and for managing the immense volume of detailed information that is generated in a large software project. The four basic process activities of specification, development, validation, and evolution are organized differently in different development processes. In the waterfall model, they are organized in sequence, whereas in incremental development they are interleaved. How these activities are carried out depends on the type of software, people, and organizational structures involved. In extreme programming, for example, specifications are written on cards. Tests are executable and developed before the program itself. Evolution may involve substantial system restructuring or refactoring.

Software specification

Software specification or requirements engineering is the process of understanding and defining what services are required from the system and identifying the constraints on the system's operation and development. Requirements engineering is a particularly critical stage of the software process, as errors at this stage inevitably lead to later problems in the system design and implementation. The requirements engineering process (Figure 2.4) aims to produce an agreed requirements document that specifies a system satisfying stakeholder requirements. Requirements are usually presented at two levels of detail. End-users and customers need a high-level statement of the requirements; system developers need a more detailed system specification.

There are four main activities in the requirements engineering process:

  • Feasibility study An estimate is made of whether the identified user needs may be satisfied using current software and hardware technologies. The study considers whether the proposed system will be cost-effective from a business point of view and if it can be developed within existing budgetary constraints. A feasibility study should be relatively cheap and quick. The result should inform the decision of whether or not to go ahead with a more detailed analysis.
  • Requirements elicitation and analysis This is the process of deriving the system requirements through observation of existing systems, discussions with potential users and procurers, task analysis, and so on. This may involve the development of one or more system models and prototypes. These help you understand the system to be specified.
  • Requirements specification Requirements specification is the activity of translating the information gathered during the analysis activity into a document that defines a set of requirements. Two types of requirements may be included in this document. User requirements are abstract statements of the system requirements for the customer and end-user of the system; system requirements are a more detailed description of the functionality to be provided.
  • Requirements validation This activity checks the requirements for realism, consistency, and completeness. During this process, errors in the requirements document are inevitably discovered. It must then be modified to correct these problems.

Of course, the activities in the requirements process are not simply carried out in a strict sequence. Requirements analysis continues during definition and specification and new requirements come to light throughout the process. Therefore, the activities of analysis, definition, and specification are interleaved. In agile methods, such as extreme programming, requirements are developed incrementally according to user priorities and the elicitation of requirements comes from users who are part of the development team. 

Software Design and implementation

The implementation stage of software development is the process of converting a system specification into an executable system. It always involves processes of software design and programming but, if an incremental approach to development is used, may also involve refinement of the software specification. A software design is a description of the structure of the software to be implemented, the data models and structures used by the system, the interfaces between system components and, sometimes, the algorithms used. Designers do not arrive at a finished design immediately but develop the design iteratively. They add formality and detail as they develop their design with constant backtracking to correct earlier designs. Figure 2.5 is an abstract model of this process showing the inputs to the design process, process activities, and the documents produced as outputs from this process. 

The activities in the design process vary, depending on the type of system being developed. For example, real-time systems require timing design but may not include a database so there is no database design involved. Figure 2.5 shows four activities that may be part of the design process for information systems: 

  • Architectural design, where you identify the overall structure of the system, the principal components (sometimes called sub-systems or modules), their relationships, and how they are distributed.
  • Interface design, where you define the interfaces between system components. This interface specification must be unambiguous. With a precise interface, a component can be used without other components having to know how it is implemented. Once interface specifications are agreed, the components can be designed and developed concurrently.
  • Component design, where you take each system component and design how it will operate. This may be a simple statement of the expected functionality to be implemented, with the specific design left to the programmer. Alternatively, it may be a list of changes to be made to a reusable component or a detailed design model. The design model may be used to automatically generate an implementation.
  • Database design, where you design the system data structures and how these are to be represented in a database. Again, the work here depends on whether an existing database is to be reused or a new database is to be created.
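
To make the idea of an unambiguous interface specification concrete, here is a minimal sketch in Java. The names (PaymentProcessor, Receipt, charge) are invented for illustration and are not taken from any system described in this chapter.

```java
// A hypothetical component interface: precise enough that callers can
// use it without knowing anything about its implementation.
public interface PaymentProcessor {

    /**
     * Charges the given amount (in cents) to the given account and
     * returns a receipt for the transaction.
     *
     * @throws IllegalArgumentException if amountInCents is negative
     */
    Receipt charge(String accountId, long amountInCents);
}

// A simple value type returned through the interface.
record Receipt(String transactionId, long amountInCents) { }
```

Because the contract is fixed, one team can implement PaymentProcessor while another writes code against it, which is exactly the concurrent development that interface design enables.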

These activities lead to a set of design outputs, which are also shown in Figure 2.5. The detail and representation of these vary considerably. For critical systems, detailed design documents setting out precise and accurate descriptions of the system must be produced. If a model-driven approach is used, these outputs may mostly be diagrams. Where agile methods of development are used, the outputs of the design process may not be separate specification documents but may be represented in the code of the program. 

Software validation

Software validation or, more generally, verification and validation (V&V) is intended to show that a system both conforms to its specification and that it meets the expectations of the system customer. Program testing, where the system is executed using simulated test data, is the principal validation technique. Validation may also involve checking processes, such as inspections and reviews, at each stage of the software process from user requirements definition to program development. Because of the predominance of testing, the majority of validation costs are incurred during and after implementation. 

The stages in the testing process are:

  1. Development testing The components making up the system are tested by the people developing the system. Each component is tested independently, without other system components. Components may be simple entities such as functions or object classes, or may be coherent groupings of these entities. Test automation tools, such as JUnit (Massol and Husted, 2003), that can re-run component tests when new versions of the component are created, are commonly used (a minimal example follows this list).
  2. System testing System components are integrated to create a complete system. This process is concerned with finding errors that result from unanticipated interactions between components and component interface problems. It is also concerned with showing that the system meets its functional and non-functional requirements, and testing the emergent system properties. For large systems, this may be a multi-stage process where components are integrated to form subsystems that are individually tested before these sub-systems are themselves integrated to form the final system.
  3. Acceptance testing This is the final stage in the testing process before the system is accepted for operational use. The system is tested with data supplied by the system customer rather than with simulated test data. Acceptance testing may reveal errors and omissions in the system requirements definition, because the real data exercise the system in different ways from the test data. Acceptance testing may also reveal requirements problems where the system's facilities do not really meet the user's needs or the system performance is unacceptable.
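
As a concrete illustration of development testing, here is a minimal JUnit 5 sketch. The Discount class is a hypothetical component invented for this example, and JUnit 5 is assumed here rather than the older JUnit version described by Massol and Husted (2003).

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// A hypothetical component under test, invented for illustration.
class Discount {
    static double apply(double price, double percent) {
        return price - price * percent / 100.0;
    }
}

// A component test that a tool such as JUnit can re-run automatically
// whenever a new version of the component is created.
class DiscountTest {
    @Test
    void tenPercentOffOneHundred() {
        assertEquals(90.0, Discount.apply(100.0, 10.0), 1e-9);
    }
}
```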

Figure 2.6 Testing phases in a plan-driven software process 

  • Normally, component development and testing processes are interleaved. Programmers make up their own test data and incrementally test the code as it is developed. This is an economically sensible approach, as the programmer knows the component and is therefore the best person to generate test cases.
  • If an incremental approach to development is used, each increment should be tested as it is developed, with these tests based on the requirements for that increment. In extreme programming, tests are developed along with the requirements before development starts. This helps the testers and developers to understand the requirements and ensures that there are no delays as test cases are created.
  • When a plan-driven software process is used (e.g., for critical systems development), testing is driven by a set of test plans. An independent team of testers works from these pre-formulated test plans, which have been developed from the system specification and design. Figure 2.7 illustrates how test plans are the link between testing and development activities. This is sometimes called the V-model of development (turn it on its side to see the V).
  • Acceptance testing is sometimes called 'alpha testing'. Custom systems are developed for a single client. The alpha testing process continues until the system developer and the client agree that the delivered system is an acceptable implementation of the requirements.
  • When a system is to be marketed as a software product, a testing process called 'beta testing' is often used. Beta testing involves delivering a system to a number of potential customers who agree to use that system. They report problems to the system developers. This exposes the product to real use and detects errors that may not have been anticipated by the system builders. After this feedback, the system is modified and released either for further beta testing or for general sale.

Software evolution

  • The flexibility of software systems is one of the main reasons why more and more software is being incorporated in large, complex systems. Once a decision has been made to manufacture hardware, it is very expensive to make changes to the hardware design. However, changes can be made to software at any time during or after the system development. Even extensive changes are still much cheaper than corresponding changes to system hardware. 
  • Historically, there has always been a split between the process of software development and the process of software evolution (software maintenance). People think of software development as a creative activity in which a software system is developed from an initial concept through to a working system. However, they sometimes think of software maintenance as dull and uninteresting. Although the costs of maintenance are often several times the initial development costs, maintenance processes are sometimes considered to be less challenging than original software development.
  • This distinction between development and maintenance is increasingly irrelevant. Hardly any software systems are completely new systems and it makes much more sense to see development and maintenance as a continuum. Rather than two separate processes, it is more realistic to think of software engineering as an evolutionary process (Figure 2.8) where software is continually changed over its lifetime in response to changing requirements and customer needs.

Coping with change

Change is inevitable in all large software projects. The system requirements change as the business procuring the system responds to external pressures and management priorities change. As new technologies become available, new design and implementation possibilities emerge. Therefore whatever software process model is used, it is essential that it can accommodate changes to the software being developed. Change adds to the costs of software development because it usually means that work that has been completed has to be redone. This is called rework. For example, if the relationships between the requirements in a system have been analyzed and new requirements are then identified, some or all of the requirements analysis has to be repeated. It may then be necessary to redesign the system to deliver the new requirements, change any programs that have been developed, and re-test the system.

There are two related approaches that may be used to reduce the costs of rework:

  1. Change avoidance, where the software process includes activities that can anticipate possible changes before significant rework is required. For example, a prototype system may be developed to show some key features of the system to customers. They can experiment with the prototype and refine their requirements before committing to high software production costs.
  2. Change tolerance, where the process is designed so that changes can be accommodated at relatively low cost. This normally involves some form of incremental development. Proposed changes may be implemented in increments that have not yet been developed. If this is impossible, then only a single increment (a small part of the system) may have to be altered to incorporate the change.

There are two ways of coping with change and changing system requirements:

  1. System prototyping, where a version of the system or part of the system is developed quickly to check the customer's requirements and the feasibility of some design decisions. This supports change avoidance as it allows users to experiment with the system before delivery and so refine their requirements. The number of requirements change proposals made after delivery is therefore likely to be reduced.
  2. Incremental delivery, where system increments are delivered to the customer for comment and experimentation. This supports both change avoidance and change tolerance. It avoids the premature commitment to requirements for the whole system and allows changes to be incorporated into later increments at relatively low cost.

Incremental delivery

  • Rather than delivering the system as a single delivery, the development and delivery are broken down into increments, with each increment delivering part of the required functionality.
  • User requirements are prioritized and the highest priority requirements are included in early increments.
  • Once the development of an increment is started, the requirements are frozen though requirements for later increments can continue to evolve.

Incremental delivery has a number of advantages:

  1. Customers can use the early increments as prototypes and gain experience that informs their requirements for later system increments. Unlike prototypes, these are part of the real system so there is no re-learning when the complete system is available.
  2. Customers do not have to wait until the entire system is delivered before they can gain value from it. The first increment satisfies their most critical requirements so they can use the software immediately.
  3. The process maintains the benefits of incremental development in that it should be relatively easy to incorporate changes into the system.
  4. As the highest-priority services are delivered first and increments then integrated, the most important system services receive the most testing. This means that customers are less likely to encounter software failures in the most important parts of the system.

However, there are problems with incremental delivery:

  1. Most systems require a set of basic facilities that are used by different parts of the system. As requirements are not defined in detail until an increment is to be implemented, it can be hard to identify common facilities that are needed by all increments.
  2. Iterative development can also be difficult when a replacement system is being developed. Users want all of the functionality of the old system and are often unwilling to experiment with an incomplete new system. Therefore, getting useful customer feedback is difficult.
  3. The essence of iterative processes is that the specification is developed in conjunction with the software. However, this conflicts with the procurement model of many organizations, where the complete system specification is part of the system development contract. In the incremental approach, there is no complete system specification until the final increment is specified. This requires a new form of contract, which large customers such as government agencies may find difficult to accommodate.

Prototyping

A prototype is an initial version of a software system that is used to demonstrate concepts, try out design options, and find out more about the problem and its possible solutions. Rapid, iterative development of the prototype is essential so that costs are controlled and system stakeholders can experiment with the prototype early in the software process.

A software prototype can be used in a software development process to help anticipate changes that may be required:

  1. In the requirements engineering process, a prototype can help with the elicitation and validation of system requirements.
  2. In the system design process, a prototype can be used to explore particular software solutions and to support user interface design.

A general problem with prototyping is that the prototype may not necessarily be used in the same way as the final system. The tester of the prototype may not be typical of system users. The training time during prototype evaluation may be insufficient. If the prototype is slow, the evaluators may adjust their way of working and avoid those system features that have slow response times. When provided with better response in the final system, they may use it in a different way. Developers are sometimes pressured by managers to deliver throwaway prototypes, particularly when there are delays in delivering the final version of the software. However, this is usually unwise:

  1. It may be impossible to tune the prototype to meet non-functional requirements, such as performance, security, robustness, and reliability requirements, which were ignored during prototype development.
  2. Rapid change during development inevitably means that the prototype is undocumented. The only design specification is the prototype code. This is not good enough for long-term maintenance.
  3. The changes made during prototype development will probably have degraded the system structure. The system will be difficult and expensive to maintain.
  4. Organizational quality standards are normally relaxed for prototype development.

Boehm’s Spiral model

A risk-driven software process framework (the spiral model) was proposed by Boehm (1988). This is shown in Figure 2.11. Here, the software process is represented as a spiral, rather than a sequence of activities with some backtracking from one activity to another. Each loop in the spiral represents a phase of the software process. Thus, the innermost loop might be concerned with system feasibility, the next loop with requirements definition, the next loop with system design, and so on. The spiral model combines change avoidance with change tolerance. It assumes that changes are a result of project risks and includes explicit risk management activities to reduce these risks. Each loop in the spiral is split into four sectors:

  1. Objective setting: Specific objectives for that phase of the project are defined. Constraints on the process and the product are identified and a detailed management plan is drawn up. Project risks are identified. Alternative strategies, depending on these risks, may be planned.
  2. Risk assessment and reduction: For each of the identified project risks, a detailed analysis is carried out. Steps are taken to reduce the risk. For example, if there is a risk that the requirements are inappropriate, a prototype system may be developed.
  3. Development and validation: After risk evaluation, a development model for the system is chosen. For example, throwaway prototyping may be the best development approach if user interface risks are dominant. If safety risks are the main consideration, development based on formal transformations may be the most appropriate process, and so on. If the main identified risk is sub-system integration, the waterfall model may be the best development model to use.
  4. Planning: The project is reviewed and a decision made whether to continue with a further loop of the spiral. If it is decided to continue, plans are drawn up for the next phase of the project. 

The main difference between the spiral model and other software process models is its explicit recognition of risk. A cycle of the spiral begins by elaborating objectives such as performance and functionality. Alternative ways of achieving these objectives, and dealing with the constraints on each of them, are then enumerated. Each alternative is assessed against each objective and sources of project risk are identified. The next step is to resolve these risks by information-gathering activities such as more detailed analysis, prototyping, and simulation. Once risks have been assessed, some development is carried out, followed by a planning activity for the next phase of the process. Informally, risk simply means something that can go wrong. For example, if the intention is to use a new programming language, a risk is that the available compilers are unreliable or do not produce sufficiently efficient object code. Risks lead to proposed software changes and project problems such as schedule and cost overrun, so risk minimization is a very important project management activity.


Chapter 1: Introduction to Software Engineering Product

Software Engineering

Objectives

  • To introduce software engineering and to explain its importance
  • To set out the answers to key questions about software engineering
  • To introduce ethical and professional issues and to explain why they are of concern to software engineers.
Figure 1.1 Frequently asked questions about software
Question: What is software?
Answer: Computer programs and associated documentation. Software products may be developed for a particular customer or may be developed for a general market.

Question: What are the attributes of good software?
Answer: Good software should deliver the required functionality and performance to the user and should be maintainable, dependable, and usable.

Question: What is software engineering?
Answer: Software engineering is an engineering discipline that is concerned with all aspects of software production.

Question: What are the fundamental software engineering activities?
Answer: Software specification, software development, software validation, and software evolution.

Question: What is the difference between software engineering and computer science?
Answer: Computer science focuses on theory and fundamentals; software engineering is concerned with the practicalities of developing and delivering useful software.

Question: What is the difference between software engineering and system engineering?
Answer: System engineering is concerned with all aspects of computer-based systems development including hardware, software, and process engineering. Software engineering is part of this more general process.

Question: What are the key challenges facing software engineering?
Answer: Coping with increasing diversity, demands for reduced delivery times, and developing trustworthy software.

Question: What are the costs of software engineering?
Answer: Roughly 60% of software costs are development costs; 40% are testing costs. For custom software, evolution costs often exceed development costs.

Question: What are the best software engineering techniques and methods?
Answer: While all software projects have to be professionally managed and developed, different techniques are appropriate for different types of system. For example, games should always be developed using a series of prototypes, whereas safety-critical control systems require a complete and analyzable specification to be developed. You cannot, therefore, say that one method is better than another.

Question: What differences has the Web made to software engineering?
Answer: The Web has led to the availability of software services and the possibility of developing highly distributed service-based systems. Web-based systems development has led to important advances in programming languages and software reuse.

Software Product

Software engineers are concerned with developing software products (i.e., software which can be sold to a customer). There are two kinds of software products:

  1. Generic products: These are stand-alone systems that are produced by a development organization and sold on the open market to any customer who is able to buy them. Examples of this type of product include software for PCs such as databases, word processors, drawing packages, and project-management tools. It also includes so-called vertical applications designed for some specific purpose such as library information systems, accounting systems, or systems for maintaining dental records.
  2. Customized (or bespoke) products: These are systems that are commissioned by a particular customer. A software contractor develops the software especially for that customer. Examples of this type of software include control systems for electronic devices, systems written to support a particular business process, and air traffic control systems.

An important difference between these types of software is that, in generic products, the organization that develops the software controls the software specification. For custom products, the specification is usually developed and controlled by the organization that is buying the software. The software developers must work to that specification. However, the distinction between these system product types is becoming increasingly blurred. More and more systems are now being built with a generic product as a base, which is then adapted to suit the requirements of a customer. Enterprise Resource Planning (ERP) systems, such as the SAP system, are the best examples of this approach. Here, a large and complex system is adapted for a company by incorporating information about business rules and processes, reports required, and so on.

Essential attributes of good software

Product characteristic | Description

Maintainability: Software should be written in such a way that it can evolve to meet the changing needs of customers. This is a critical attribute because software change is an inevitable requirement of a changing business environment.
Dependability and security: Software dependability includes a range of characteristics including reliability, security, and safety. Dependable software should not cause physical or economic damage in the event of system failure. Malicious users should not be able to access or damage the system.
Efficiency: Software should not make wasteful use of system resources such as memory and processor cycles. Efficiency therefore includes responsiveness, processing time, memory utilization, etc.
Acceptability: Software must be acceptable to the type of users for which it is designed. This means that it must be understandable, usable, and compatible with other systems that they use.

Software engineering

Software engineering is an engineering discipline that is concerned with all aspects of software production from the early stages of system specification through to maintaining the system after it has gone into use. In this definition, there are two key phrases:

  1. Engineering discipline Engineers make things work. They apply theories, methods, and tools where these are appropriate. However, they use them selectively and always try to discover solutions to problems even when there are no applicable theories and methods. Engineers also recognize that they must work to organizational and financial constraints so they look for solutions within these constraints.
  2. All aspects of software production Software engineering is not just concerned with the technical processes of software development. It also includes activities such as software project management and the development of tools, methods, and theories to support software production.

Engineering is about getting results of the required quality within the schedule and budget. This often involves making compromises—engineers cannot be perfectionists. People writing programs for themselves, however, can spend as much time as they wish on the program development. In general, software engineers adopt a systematic and organized approach to their work, as this is often the most effective way to produce high-quality software. However, engineering is all about selecting the most appropriate method for a set of circumstances so a more creative, less formal approach to development may be effective in some circumstances. Less formal development is particularly appropriate for the development of web-based systems, which requires a blend of software and graphical design skills.

Software engineering is important for two reasons:

  1. More and more, individuals and society rely on advanced software systems. We need to be able to produce reliable and trustworthy systems economically and quickly.
  2. It is usually cheaper, in the long run, to use software engineering methods and techniques for software systems rather than just write the programs as if it was a personal programming project. For most types of systems, the majority of costs are the costs of changing the software after it has gone into use.

Software process

The systematic approach that is used in software engineering is sometimes called a software process. A software process is a sequence of activities that leads to the production of a software product. There are four fundamental activities that are common to all software processes. These activities are:

  1. Software specification, where customers and engineers define the software that is to be produced and the constraints on its operation.
  2. Software development, where the software is designed and programmed.
  3. Software validation, where the software is checked to ensure that it is what the customer requires.
  4. Software evolution, where the software is modified to reflect changing customer and market requirements.

Software engineering is related to both computer science and systems engineering:

  1. Computer science is concerned with the theories and methods that underlie computers and software systems, whereas software engineering is concerned with the practical problems of producing software. Some knowledge of computer science is essential for software engineers in the same way that some knowledge of physics is essential for electrical engineers. Computer science theory, however, is often most applicable to relatively small programs. Elegant theories of computer science cannot always be applied to large, complex problems that require a software solution.
  2. System engineering is concerned with all aspects of the development and evolution of complex systems where software plays a major role. System engineering is therefore concerned with hardware development, policy and process design and system deployment, as well as software engineering. System engineers are involved in specifying the system, defining its overall architecture, and then integrating the different parts to create the finished system. They are less concerned with the engineering of the system components (hardware, software etc.).

General issues that affect many types of software

There are many different types of software. There is no universal software engineering method or technique that is applicable for all of these. However, there are three general issues that affect many different types of software:

  1. Heterogeneity Increasingly, systems are required to operate as distributed systems across networks that include different types of computer and mobile devices. As well as running on general-purpose computers, software may also have to execute on mobile phones. You often have to integrate new software with older legacy systems written in different programming languages. The challenge here is to develop techniques for building dependable software that is flexible enough to cope with this heterogeneity.
  2. Business and social change Business and society are changing incredibly quickly as emerging economies develop and new technologies become available. Businesses need to be able to change their existing software and to rapidly develop new software. Many traditional software engineering techniques are time consuming, and delivery of new systems often takes longer than planned. These techniques need to evolve so that the time required for software to deliver value to its customers is reduced.
  3. Security and trust As software is intertwined with all aspects of our lives, it is essential that we can trust that software. This is especially true for remote software systems accessed through a web page or web service interface. We have to make sure that malicious users cannot attack our software and that information security is maintained.

This radical change in software organization has, obviously, led to changes in the ways that web-based systems are engineered. For example:

  1. Software reuse has become the dominant approach for constructing web-based systems. When building these systems, you think about how you can assemble them from pre-existing software components and systems. 
  2. It is now generally recognized that it is impractical to specify all the requirements for such systems in advance. Web-based systems should be developed and delivered incrementally.
  3. User interfaces are constrained by the capabilities of web browsers. Although technologies such as AJAX (Holdener, 2008) mean that rich interfaces can be created within a web browser, these technologies are still difficult to use. Web forms with local scripting are more commonly used. Application interfaces on web-based systems are often poorer than the specially designed user interfaces on PC system products. 

Software engineering ethics

  1. Confidentiality You should normally respect the confidentiality of your employers or clients irrespective of whether or not a formal confidentiality agreement has been signed.
  2. Competence You should not misrepresent your level of competence. You should not knowingly accept work that is outside your competence.
  3. Intellectual property rights You should be aware of local laws governing the use of intellectual property such as patents and copyright. You should be careful to ensure that the intellectual property of employers and clients is protected.
  4. Computer misuse You should not use your technical skills to misuse other people's computers. Computer misuse ranges from relatively trivial (game playing on an employer's machine, say) to extremely serious (dissemination of viruses or other malware).

ACM/IEEE Code of Ethics

  • The professional societies in the US have cooperated to produce a code of ethical practice.
  • Members of these organizations sign up to the code of practice when they join.
  • The Code contains eight Principles related to the behavior of and decisions made by professional software engineers, including practitioners, educators, managers, supervisors and policy makers, as well as trainees and students of the profession.

Code of ethics – preamble


The short version of the code summarizes aspirations at a high level of abstraction; the clauses that are included in the full version give examples and details of how these aspirations change the way we act as software engineering professionals. Without the aspirations, the details can become legalistic and tedious; without the details, the aspirations can become high-sounding but empty; together, the aspirations and the details form a cohesive code. Software engineers shall commit themselves to making the analysis, specification, design, development, testing and maintenance of software a beneficial and respected profession. In accordance with their commitment to the health, safety and welfare of the public, software engineers shall adhere to the following:

Eight Principles of software engineers:

  1. PUBLIC — Software engineers shall act consistently with the public interest.
  2. CLIENT AND EMPLOYER — Software engineers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest.
  3. PRODUCT — Software engineers shall ensure that their products and related modifications meet the highest professional standards possible.
  4. JUDGMENT — Software engineers shall maintain integrity and independence in their professional judgment.
  5. MANAGEMENT — Software engineering managers and leaders shall subscribe to and promote an ethical approach to the management of software development and maintenance.
  6. PROFESSION — Software engineers shall advance the integrity and reputation of the profession consistent with the public interest.
  7. COLLEAGUES — Software engineers shall be fair to and supportive of their colleagues.
  8. SELF — Software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession.

Ethical dilemmas

  • Disagreement in principle with the policies of senior management.
  • Your employer acts in an unethical way and releases a safety critical system without finishing the testing of the system.
  • Participation in the development of military weapons systems or nuclear systems.

Case studies

The three types of systems that I use as case studies are:

  1. An embedded system This is a system where the software controls a hardware device and is embedded in that device. Issues in embedded systems typically include physical size, responsiveness, power management, etc. The example of an embedded system that I use is a software system to control a medical device.
  2. An information system This is a system whose primary purpose is to manage and provide access to a database of information. Issues in information systems include security, usability, privacy, and maintaining data integrity. The example of an information system that I use is a medical records system.
  3. A sensor-based data collection system This is a system whose primary purpose is to collect data from a set of sensors and process that data in some way. The key requirements of such systems are reliability, even in hostile environmental conditions, and maintainability. The example of a data collection system that I use is a wilderness weather station.

Homework

  1. Study the three types of case studies (Ref: Software Engineering, Ian Sommerville).
  2. Make groups (4 members) and give a presentation on your selected topic in the next class.
Frequency Words for IELTS Listening

School

a. Library 

Word | Sentence
1. Shelf 
2. Librarian 
3. The stacks 
4. Return 
5. Fine 
6. Magazine 
7. Copier  
8. Overdue  
9. Reading room  
10. Reference room  
11. Periodical room  
12. Study lounge  
13. Catalogue  
14. Index  
15. Keyword  
16. Volume  
17. Library card  
18. Book reservation  
19. Periodical  
20. Quarterly  
21. Back issue  
22. Current issue  
23. Latest number  
24. Writing permission  
25. Check out  
26. Put on reserve  

b. Student

Word | Sentence
1. Freshman  
2. Sophomore  
3. Junior student  
4. Senior student  
5. Bachelor  
6. Master  
7. Doctoral candidate  
8. Alumni/alumnus  
9. Post doctorate  

c. Teacher

Word | Sentence
1. Lecturer  
2. Associate professor  
3. Supervisor  
4. Professor  
5. Dean  
6. Teaching assistant  

d. Courses

Word | Sentence
1. Take the course  
2. Credit  
3. Register  
4. Drop the course  
5. Introductory course  
6. Advanced course  
7. Rank  
8. Syllabus  
9. Curriculum  
10. Seminar  
11. Elective/optional course  
12. Compulsory course  
13. Drop-out  
14. Makeup exam  
15. Psychology course  
16. Physics  
17. Computer course  
18. Computer science  

e. Reading & Books

Word | Sentence
1. Book review  
2. Novel  
3. Press  
4. Publisher  
5. Publication  
6. Biography  
7. Editorial  
8. Extra copy  
9. Paperback edition  
10. Out of print  
11. Read selectively  
12. Get through a novel  
13. Be addicted to the book  
14. Plough through  
15. Read extensively  

f. After Class

Word | Sentence
1. Devote to  
2. Run for  
3. Candidate  
4. Vote  
5. Conflict  
6. Election campaign  
7. Campaign manager  
8. Participant  
9. The student’s union  
10. Chairman  
11. Speech contest  
12. Enroll in  
13. Sign up for  

Daily Life

  1. Shopping
Word | Sentence
1. Convenience store  
2. Department store  
3. Mall  
4. Chain store  
5. Shopping list  
6. Supermarket  
7. Family size  
8. Receipt  
9. Outlet  
10. On sale  
11. Sell out  
12. Grocery store  
13. Out of stock  
14. In stock  
15. Customer  
16. Complaint  
17. Deliver  
18. Counter  
19. Closing time  
20. Balance  
21. Luxurious items  
22. Electronic product  
23. Stationery  
24. Digital video camera  
25. Past the prime  
  2. Living in a house
Word | Sentence
1. Housework  
2. Electric cooker  
3. Laundry  
4. Iron  
5. Vacuum cleaner  
6. Housemaid  
7. Housekeeper  
8. Housewife  
9. Keep an eye on  
10. Household expenses  
11. Keep down the cost  
12. Fix the dinner  
13. Budget  
14. In a mess  
  3. Daily Interaction
Word | Sentence
1. Leisure time 
2. Telephone booth  
3. Date  
4. Pay phone  
5. Call on sb.  
6. Long-distance call  
7. Take a message  
8. Hang up  
9. Keep contact  
10. Hold on  
11. Hospitable  

Business

  1. Looking for a job
Word | Sentence
1. Job hunting  
2. Inexperienced  
3. Opportunity  
4. Want ads  
5. Unemployment  
6. Position  
7. Wage  
8. Opening/vacancy  
9. Full-time job  
10. Part-time job  
11. Inquiry  
12. Do odd jobs  
13. Consult  
14. Resume  
15. Application letter  
16. Fire  
17. Hire  
18. Recruit  
19. Interview  
20. Job-hopping  
21. Interviewee  
22. Take over  
23. Interviewer  
24. Appointment  
25. Impression  
26. Confident  
27. Turn down  
28. Have no match for…  
  2. Working in a business
Word | Sentence
1. On business  
2. Be involved in  
3. Appointment  
4. In charge of  
5. Client  
6. Compromise  
7. Get along with 
8. Proposal  
9. Assistance  
10. Branch  
11. Cooperation  
12. Transaction  
13. Bid  
14. Transfer  
  3. Business Attitude
Word | Sentence
1. Attitude  
2. Personality  
3. Overwork  
4. Determined  
5. Forgetful  
6. Diligent  
7. Wear out  
8. Perseverance  
9. Hang on  
10. Workaholic  
11. Workload  
12. Struggle  
13. Continuous exploration  
14. Hard-working  
  4. Work Performance
Word | Sentence
1. Recognition  
2. Tribute  
3. Achievement  
4. Pioneer  
5. Contribution  
6. Blaze a trail  
7. Symbol  
8. Legend  

Entertainment 

  1. Art & Culture
Word | Sentence
1. Napkin  
2. Beverage  
3. Gardening  
4. Excursion  
5. Performance  
6. TV channels  
7. Horror movie  
8. Broadcast  
9. Live broadcast  
10. Documentary  
11. Violence movie  
12. Commercial advertisement  
13. Entertainment industry  
14. TV theater  
  2. Eating Out
Word | Sentence
1. Waiter/waitress  
2. Pork  
3. Beef steak  
4. Menu  
5. Raw  
6. Medium  
7. Done  
8. Dessert  
9. Snack  
10. Join sb. for dinner  
11. Appetizer  
12. Make a reservation  
13. Cutlery  
14. Loaf  
15. Buffet  
16. Staple  
17. Go dutch  
18. Regular dinner  
19. Mutton  
20. Change  

Personal Well-being

a. Illness

Word | Sentence
1. Epidemic  
2. Sore throat  
3. Bird flu  
4. Runny nose  
5. SARS  
6. Stomachache  
7. Infectious illness  
8. Toothache  
9. Symptom  
10. Allergy  
11. Sneeze  
12. Fracture  
13. Diabetes  
14. Have a temperature  
15. Dental decay  
b. Hospital & Doctors
Word | Sentence
1. Attending/chief doctor; physician; consultant  
2. Infirmary  
3. Physician  
4. Surgeon  
5. Clinic  
6. Anaesthetist  
c. Exercise
Word | Sentence
1. Put on weight  
2. Watch your diet  
3. Overweight  
4. On diet  
5. Lose weight  
6. Physical exercise  
d. Personal Health
Word | Sentence
1. In good shape  
2. In a fit state  
3. Out of shape  
4. Fit as a fiddle  
5. In poor shape  
6. Feel under the weather  

Traveling

Word | Sentence
1. Travel agency  
2. Flight number  
3. Check in  
4. Motel  
5. Book the ticket  
6. Platform  
7. Hiking  
8. Hitch-hike  
9. Conductor  
10. Skiing  
11. Mineral bath  
12. Streetcar  
13. Resort  
14. Visa  
15. Express train  
16. High-speed train  
17. Shuttle  
18. Ferry  
19. Tube/underground  
20. Expressway/freeway  
21. Roundtrip  

Trending Topics

Word | Sentence
1. Prosperous  
2. Decline  
3. Depression  
4. Recession  
5. Collapse  
6. Bankrupt  
7. Monetary  
8. Circulation 
9. Financier  
10. Surplus  
11. Inflation  
12. Deflation  
13. Economic crisis  
14. Potential  
15. Cyberspace  
16. Multimedia  
17. Hacker  
18. Server  

Weather 

Word | Sentence
1. Recycled water  
2. Renewable energy  
3. Sewage treatment  
4. Recyclable  
5. Deforestation rate  
6. Water and soil erosion  
7. Temperature  
8. Muggy  
9. Humidity  
10. Breeze  
11. Climate trend  
12. Climate variation  
13. Climate warming  
14. Climate watch  
15. Climate-sensitive activity  
16. Climatic anomaly  
17. Conservation area 
18. Forecast  
19. Downpour  
20. Gust  

Housing & Moving

Housing

Word | Sentence
1. Landlord/landlady  
2. Ventilation  
3. Tenant  
4. Accommodate  
5. Apartment/flat  
6. Dwell  
7. Residence  
8. Downtown  
9. Hallway  
10. Suburb  
11. Spare room  
12. Neighborhood  
13. Burglar  
14. Transportation  
15. House-warming party  
16. Subway entrance  

Decoration & Repair

Word | Sentence
1. Furnished  
2. Crack  
3. Unfurnished  
4. Install  
5. Baby crib  
6. Maintenance  
7. Decoration  
8. Plumber  
9. Multiple glazing  
10. Washing machine  
11. Cupboard  
12. Refrigerator/fridge  
13. Sideboard  
14. Light bulb  
15. Sink 
16. Heater  
17. Pipe  
18. Furnace  
19. Leak  
20. Air conditioner  

Chapter 8: Gantt chart Project Development in SDLC

Gantt chart Project Development


Schedule (project management)

  • The project schedule is the tool that communicates what work needs to be performed, which resources of the organization will perform the work, and the timeframes in which that work needs to be performed.
  • The project schedule should reflect all of the work associated with delivering the project on time.
  • In project management, a schedule is a listing of a project's milestones, activities, and deliverables, usually with intended start and finish dates. 

What is a Gantt chart?

  • A chart in which a series of horizontal lines shows the amount of work done or production completed in certain periods of time in relation to the amount planned for those periods.
  • To summarize, a Gantt chart shows you what has to be done (the activities) and when (the schedule).
  • Gantt charts make it easy to visualize project management timelines by transforming task names, start dates, durations, and end dates into cascading horizontal bar charts.

What does a Gantt chart look like?

  • A Gantt chart, commonly used in project management, is one of the most popular and useful ways of showing activities (tasks or events) displayed against time.
  • On the left of the chart is a list of the activities and along the top is a suitable time scale.
  • Each activity is represented by a bar; the position and length of the bar reflect the start date, duration, and end date of the activity (a minimal sketch follows this list).
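
The following runnable sketch mimics that layout in plain text: activity names on the left, a weekly time scale running to the right, and one bar per activity whose position and length encode its start and duration. All task data are invented for illustration.

```java
// Prints a tiny ASCII Gantt chart: one bar per activity, positioned by
// its start week and sized by its duration. Task data are invented.
public class GanttSketch {

    record Task(String name, int startWeek, int durationWeeks) { }

    public static void main(String[] args) {
        Task[] tasks = {
            new Task("Requirements", 0, 3),
            new Task("Design",       2, 4),
            new Task("Coding",       5, 6),
            new Task("Testing",      9, 4),
        };
        for (Task t : tasks) {
            String bar = " ".repeat(t.startWeek())
                       + "#".repeat(t.durationWeeks());
            System.out.printf("%-14s|%s%n", t.name(), bar);
        }
    }
}
```

Running it prints four cascading bars, making the overlaps between Requirements and Design, and between Design and Coding, immediately visible.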

A Gantt chart represents:

  • What the various activities are
  • When each activity begins and ends
  • How long each activity is scheduled to last
  • Where activities overlap with other activities, and by how much
  • The start and end date of the whole project

Advantages of Gantt Charts

  • It creates an understandable picture of complexity. If we can see complex ideas as a picture, this will help our understanding.
  • It organizes your thoughts. It represents the concept of dividing and conquering: a big problem is conquered by dividing it into component parts.
  • It demonstrates that you know what you're doing. When you produce a nicely presented Gantt chart with high-level tasks properly organized and resources allocated to those tasks, it speaks volumes about whether you are on top of the needs of the project and whether the project will be successful.
  • It helps you to set realistic time frames. The bars on the chart indicate in which period a particular task or set of tasks will be completed. This can help you to get things in perspective properly. And when you do this, make sure that you think about events in your organization that have nothing to do with this project that might consume resources and time.
  • It can be highly visible. It can be useful to place the chart, or a large version of it, where everyone can see it. This helps to remind people of the objectives and when certain things are going to happen. It is useful if everyone in your enterprise can have a basic level of understanding of what is happening with the project even if they may not be directly involved with it.

Disadvantages of Gantt Charts

  • They can become extraordinarily complex. Except for the most simple projects, there will be large numbers of tasks undertaken and resources employed to complete the project.
  • The size of the bar does not indicate the amount of work. Each bar on the chart indicates the time period over which a particular set of tasks will be completed. However, by looking at the bar for a particular set of tasks, you cannot tell what level of resources is required to achieve those tasks. So, a short bar might take 500 man-hours while a longer bar may only take 20 man-hours.
  • They need to be constantly updated. As you get into a project, things will change. If you're going to use a Gantt chart, you must have the ability to change the chart easily and frequently.
  • They do not identify potential weak links between phases. Whenever work is transferred from one person or department to another, your project is subject to potential delay. These weak links are the most common causes of delays.
  • They do not reveal the problems your team will encounter due to unexpected delays. The Gantt chart shows only the planned and actual start and completion dates for each phase. It gives you a quick visual overview of the project's status, but you might need more. The chart does not show how a delay during one phase will impact the completion of another.
  • They do not coordinate the resources or networking requirements needed at critical points in the schedule. Many projects can proceed only when forms, documents, reports, outside help, and other requirements are either developed by your team or supplied by someone else. Thus, a complete schedule should identify these critical points and enable you to plan ahead for the related demands. The Gantt chart does not provide this much detail.

PERT chart (Program Evaluation Review Technique)

  • A PERT chart presents a graphic illustration of a project as a network diagram consisting of numbered nodes (either circles or rectangles) representing events or milestones in the project, linked by labelled vectors (directional lines) representing tasks in the project.
  • The direction of the arrows on the lines indicates the sequence of tasks (a minimal computation over such a network is sketched below).
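
As a minimal illustration of what such a network encodes, the sketch below computes the earliest finish time of each task from its duration and incoming arrows. The four tasks A-D and their durations are invented.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Earliest finish times over a tiny PERT-style network. Arrows are
// encoded as "depends on" lists; all data are invented.
public class PertSketch {
    public static void main(String[] args) {
        Map<String, Integer> duration = Map.of("A", 3, "B", 5, "C", 2, "D", 4);
        Map<String, List<String>> dependsOn = Map.of(
                "A", List.of(),           // start node
                "B", List.of("A"),        // arrow A -> B
                "C", List.of("A"),        // arrow A -> C
                "D", List.of("B", "C"));  // D waits for both B and C

        Map<String, Integer> finish = new HashMap<>();
        for (String task : List.of("A", "B", "C", "D")) { // topological order
            int start = dependsOn.get(task).stream()
                    .mapToInt(finish::get).max().orElse(0);
            finish.put(task, start + duration.get(task));
        }
        // The longest path A -> B -> D gives the project length: 3 + 5 + 4 = 12.
        System.out.println("Earliest finish times: " + finish);
    }
}
```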

Chapter 7: Feasibility Analysis in Software Development Life Cycle.

Feasibility Analysis

What is Feasibility Analysis?

  • An analysis and evaluation of a proposed project to determine if it (1) is technically feasible, (2) is feasible within the estimated cost, and (3) will be profitable for the organization.
  • Feasibility analysis guides the organization in determining whether to proceed with the project.
  • Feasibility analysis also identifies the important risks associated with the project that must be managed if the project is approved.

Types of feasibility: As with the system request, each organization has its own process and format for the feasibility analysis, but most include techniques to assess three areas:

  • Technical Feasibility,
  • Economic Feasibility, and
  • Organizational Feasibility

Technical Feasibility

  • Technical Feasibility: Can We Build It?
  • Familiarity with application: Less familiarity generates more risk.
  • Familiarity with technology: Less familiarity generates more risk.
  • Project size: Large projects have more risk.
  • Compatibility: The harder it is to integrate the system with the company’s existing technology, the higher the risk will be.

Economic Feasibility

  • Economic Feasibility: Should We Build It?
  • Development costs
  • Annual operating costs
  • Annual benefits (cost savings and/or increased revenues)
  • Intangible benefits and costs

Organizational Feasibility

  • Organizational Feasibility: If We Build It, Will They Come?
  • Project champion(s)
  • Senior management
  • Users
  • Other stakeholders
  • Is the project strategically aligned with the business?

Technical Feasibility

  • The first technique in the feasibility analysis is to assess the technical feasibility of the project, the extent to which the system can be successfully designed, developed, and installed by the IT group.
  • Technical feasibility analysis is, in essence, a technical risk analysis that strives to answer the question: “Can we build it?”

Familiarity with the application and technology

  • First and foremost is the users’ and analysts’ familiarity with the application.
  • When analysts are unfamiliar with the business application area, they have a greater chance of misunderstanding the users or missing opportunities for improvement.
  • The risks increase dramatically when the users themselves are less familiar with an application.
  • When a system will use technology that has not been used before within the organization, there is a greater chance that problems and delays will occur because of the need to learn how to use the technology.
  • Risk increases dramatically when the technology itself is new.

Project size

  • Project size is an important consideration, whether measured as the number of people on the development team, the length of time it will take to complete the project, or the number of distinct features in the system.
  • Larger projects present more risk, because they are more complicated to manage and because there is a greater chance that some important system requirements will be overlooked or misunderstood.

Compatibility

  • Systems rarely are built in a vacuum—they are built in organizations that have numerous systems already in place.
  • New technology and applications need to be able to integrate with the existing environment for many reasons.
  • They may rely on data from existing systems, they may produce data that feed other applications, and they may have to use the company’s existing communications infrastructure.
  • A new system has little value if it does not use customer data found across the organization in existing sales systems, marketing applications, and customer service systems.

Economic Feasibility

  • Economic feasibility analysis is also called a cost–benefit analysis.
  • This attempts to answer the question “Should we build the system?”
  • Economic feasibility is determined by identifying costs and benefits associated with the system, assigning values to them, calculating future cash flows, and measuring the financial worthiness of the project.
  • Keep in mind that organizations have limited capital resources and multiple projects will be competing for funding.

Steps to Conduct an Economic Feasibility Analysis

  1. Identify Costs and Benefits
    • List the tangible costs and benefits for the project.
    • Include both one-time and recurring costs.
  2. Assign Values to Costs and Benefits
    • Work with business users and IT professionals to create numbers for each of the costs and benefits.
    • Even intangibles should be valued if at all possible.
  3. Determine Cash Flow
    • Forecast what the costs and benefits will be over a certain period, usually three to five years.
    • Apply a growth rate to the values, if necessary.
  4. Assess Project's Economic Value
    • Evaluate the project's expected returns in comparison to its costs.
    • Use one or more of the evaluation techniques described in the next section (ROI, break-even point, NPV).

Assess Project’s Economic Value

  1. Return on Investment (ROI)
    • Calculate the rate of return earned on the money invested in the project, using the ROI formula.
  2. Break-Even Point (BEP)
    • Find the year in which the cumulative project benefits exceed cumulative project costs.
    • Apply the breakeven formula, using figures for that year.
    • This calculation measures how long it will take for the system to produce benefits that cover its costs.

  3. Net Present Value (NPV)
    • Restate all costs and benefits in today's dollar terms (present value), using an appropriate discount rate.
    • Determine whether the total present value of benefits is greater than or less than the total present value of costs.

Identify Costs and Benefits

  • The systems analyst’s first task when developing an economic feasibility analysis is to identify the kinds of costs and benefits the system will have and list them.
  • The costs and benefits can be broken down into four categories:
    • (1) Development Costs,
    • (2) Operational Costs,
    • (3) Tangible Benefits, and
    • (4) Intangible Benefits.

Development costs

  • Development costs are those tangible expenses that are incurred during the creation of the system, such as salaries for the project team, hardware and software expenses, consultant fees, training, and office space and equipment.
  • Development costs are usually thought of as one-time costs.

Operational costs

  • Operational costs are those tangible costs that are required to operate the system, such as the salaries for operations staff, software licensing fees, equipment upgrades, and communications charges.
  • Operational costs are usually thought of as ongoing costs.

Tangible benefits

  • Tangible benefits include revenue that the system enables the organization to collect, such as increased sales.



Assign Values to Costs and Benefits

  • Once the types of costs and benefits have been identified, the analyst needs to assign specific BDT values to them.
  • This may seem impossible—How can someone quantify costs and benefits that haven’t happened yet? And how can those predictions be realistic?
  • The most effective strategy for estimating costs and benefits is to rely on the people who have the best understanding of them.

Cash Flow Analysis and Measures

  • IT projects commonly involve an initial investment that produces a stream of benefits over time, along with some ongoing support costs.
  • Cash flows, both inflows and outflows, are estimated over some future period.
  • In this simple example, a system is developed in Year 0 (the current year) costing $100,000. Once the system is operational, benefits and ongoing costs are projected over three years (a runnable sketch with invented figures follows).
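
A runnable sketch of that projection follows. The $100,000 Year 0 development cost comes from the example above; the yearly benefit and operating-cost figures are invented, and the same invented figures are reused in the ROI, break-even, and NPV sketches later in this chapter.

```java
// Net and cumulative cash flow for a system built in Year 0 for
// $100,000 and then operated for three years. Benefit and operating
// cost figures are invented; only the Year 0 cost is from the text.
public class CashFlowSketch {
    public static void main(String[] args) {
        double[] benefits = {      0, 45_000, 46_000, 47_000}; // Years 0-3
        double[] costs    = {100_000, 10_000, 10_000, 10_000};

        double cumulative = 0;
        for (int year = 0; year < benefits.length; year++) {
            double net = benefits[year] - costs[year];
            cumulative += net;
            System.out.printf("Year %d: net %,9.0f   cumulative %,9.0f%n",
                    year, net, cumulative);
        }
        // Cumulative flow: -100,000; -65,000; -29,000; +8,000.
    }
}
```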

Return on Investment (ROI)

  • The return on investment (ROI) is a calculation that measures the average rate of return earned on the money invested in the project.
  • ROI is a simple calculation that divides the project's net benefits (total benefits – total costs) by the total costs (worked below with the invented figures from the cash flow sketch).
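
A minimal worked example, reusing the invented figures from the cash flow sketch (these are not the numbers from the figure this chapter refers to):

```java
// ROI = (total benefits - total costs) / total costs, computed from
// the invented figures used in the cash flow sketch.
public class RoiSketch {
    public static void main(String[] args) {
        double totalBenefits = 45_000 + 46_000 + 47_000;  // 138,000
        double totalCosts    = 100_000 + 3 * 10_000;      // 130,000
        double roi = (totalBenefits - totalCosts) / totalCosts;
        System.out.printf("ROI = %.1f%%%n", roi * 100);   // about 6.2%
    }
}
```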

A high ROI suggests that the project’s benefits far outweigh the project’s cost, although exactly what constitutes a “high” ROI is unclear.

Break-Even Point

  • The break-even point (also called the payback method) is defined as the number of years it takes a firm to recover its original investment in the project from net cash flows.
  • In this example, the project's cumulative cash flow figure becomes positive during Year 3, so the initial investment is “paid back” over two years plus some fraction of Year 3 (see the sketch below).
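
A minimal sketch of that calculation, continuing the same invented figures: the cumulative flow is -29,000 entering Year 3 and Year 3 nets +37,000, so payback is roughly 2 + 29,000/37,000, about 2.8 years.

```java
// Finds the payback point, including the fraction of the year in which
// the cumulative cash flow turns positive. Net flows per year continue
// the invented cash flow example.
public class BreakEvenSketch {
    public static void main(String[] args) {
        double[] net = {-100_000, 35_000, 36_000, 37_000}; // Years 0-3
        double cumulative = 0;
        for (int year = 0; year < net.length; year++) {
            double before = cumulative;
            cumulative += net[year];
            if (before < 0 && cumulative >= 0) {
                double payback = (year - 1) + (-before / net[year]);
                System.out.printf("Break-even after about %.2f years%n",
                        payback);
            }
        }
    }
}
```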

 


Discounted Cash Flow Technique

  • Discounted cash flows are used to compare the present value of all cash inflows and outflows for the project in today’s BDT terms.
  • A BDT received in the future is worth less than a BDT received today, since you forgo that potential return.

Discounted Cash Flow Projection


Net Present Value (NPV)

  • The NPV is simply the difference between the total present value of the benefits and the total present value of the costs.
  • As long as the NPV is greater than zero, the project is considered economically acceptable (a minimal NPV sketch follows).
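
A minimal NPV sketch using the same invented figures, discounting each year's net flow by PV = cashFlow / (1 + r)^year. With these invented numbers the NPV at a 10% required rate of return happens to come out negative, in line with the conclusion drawn below; note, though, that the 6.65% threshold quoted below applies to the book's own figures, not to this sketch.

```java
// NPV: discount each year's net cash flow at rate r and sum.
// PV = cashFlow / (1 + r)^year. Figures continue the invented example.
public class NpvSketch {
    public static void main(String[] args) {
        double[] net = {-100_000, 35_000, 36_000, 37_000}; // Years 0-3
        double r = 0.10;                    // required rate of return
        double npv = 0;
        for (int year = 0; year < net.length; year++) {
            npv += net[year] / Math.pow(1 + r, year);
        }
        System.out.printf("NPV at %.0f%% = %,.0f%n", r * 100, npv);
        // Prints roughly -10,631: at 10%, this invented project would
        // not be economically acceptable.
    }
}
```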

Net Present Value (NPV)

  • Unfortunately for this project, the NPV is less than zero, indicating that for a required rate of return of 10%, this project should not be accepted.
  • The required rate of return would have to be something less than 6.65% before this project returns a positive NPV.
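
A hedged Python sketch of the NPV calculation (the cash flows below are assumed, not the slide's actual figures; with these numbers the NPV also comes out negative at 10%):

    rate = 0.10
    net_cash_flows = [-100_000, 30_000, 40_000, 45_000]  # assumed, Year 0..3

    # NPV: discount each year's net cash flow to present value and sum.
    npv = sum(flow / (1 + rate) ** year
              for year, flow in enumerate(net_cash_flows))
    print(f"NPV at {rate:.0%} = {npv:,.2f}")  # about -5,860: reject at 10%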

Organizational Feasibility

  • The final technique used for feasibility analysis is to assess the organizational feasibility of the system: how well the system ultimately will be accepted by its users and incorporated into the ongoing operations of the organization.
  • One way to assess the organizational feasibility of the project is to understand how well the goals of the project align with business objectives.
  • A second way to assess organizational feasibility is to conduct a stakeholder analysis.
  • A stakeholder is a person, group, or organization that can affect (or can be affected by) a new system.
  • The most important stakeholders in the introduction of a new system are the project champion, system users, and organizational management.

Try yourself

  1. Think about the idea that you developed to improve your university course enrollment process.

QUESTIONS :

  1. List three things that influence the technical feasibility of the system.
  2. List three things that influence the economic feasibility of the system.
  3. List three things that influence the organizational feasibility of the system.
  4. How can you learn more about the issues that affect the three kinds of feasibility?

Chapter 6: Data Flow Diagram in Software Development Life Cycle.

Data Flow Diagram

What is DFD?

  • A data flow diagram (DFD) is a graphical representation of the “flow” of data through an information system, modelling its process aspects.
  • A DFD is often used as a preliminary step to create an overview of the system, which can later be elaborated.
  • It shows users how data moves between the different processes in a system.
Figure 1: DFD
Symbols and Notations Used in DFDs:

  • Three common systems of symbols are named after their creators:
    • Yourdon and Coad
    • Yourdon and DeMarco
    • Gane and Sarson
  • One main difference in their symbols is that Yourdon-Coad and Yourdon-DeMarco use circles for processes, while Gane and Sarson use rectangles with rounded corners, sometimes called lozenges.
  • There are other symbol variations in use as well, so the important thing to keep in mind is to be clear and consistent in the shapes and notations you use to communicate and collaborate with others.
Using any convention’s DFD rules or guidelines, the symbols depict the four components of data flow diagrams.

  • External entity: an outside system that sends or receives data, communicating with the system being diagrammed.
    • They are the sources and destinations of information entering or leaving the system.
    • They might be an outside organization or person, a computer system or a business system.
    • They are also known as terminators, sources and sinks, or actors.
    • They are typically drawn on the edges of the diagram.
  • Process: any process that changes the data, producing an output.
    • It might perform computations, or sort data based on logic, or direct the data flow based on business rules.
    • A short label is used to describe the process, such as “Submit payment.”
  • Data store: files or repositories that hold information for later use, such as a database table or a membership form.
    • Each data store receives a simple label, such as “Orders.”
  • Data flow: the route that data takes between the external entities, processes and data stores.
    • It portrays the interface between the other components and is shown with arrows, typically labeled with a short data name, like “Billing details.”
Figure: DFD element features

DFD rules and tips

  • Each process should have at least one input and an output.
  • Each data store should have at least one data flow in and one data flow out.
  • Data stored in a system must go through a process.
  • All processes in a DFD go to another process or a data store.

DFD levels and layers: From context diagrams to pseudocode

  • A data flow diagram can dive into progressively more detail by using levels and layers, zeroing in on a particular piece.  
  • DFD levels are numbered 0, 1, or 2, and occasionally go to Level 3 or beyond.
  • The necessary level of detail depends on the scope of what you are trying to accomplish.

DFD Level 0

  • DFD Level 0 is also called a Context Diagram.
  • It’s a basic overview of the whole system or process being analyzed or modeled.
  • It’s designed to be an at-a-glance view, showing the system as a single high-level process, with its relationship to external entities.
  • It should be easily understood by a wide audience, including stakeholders, business analysts, data analysts and developers. 

DFD Level 1

  • DFD Level 1 provides a more detailed breakout of pieces of the Context Level Diagram.
  • You will highlight the main functions carried out by the system as you break down the high-level process of the Context Diagram into its subprocesses.
DFD Level 2

  • DFD Level 2 then goes one step deeper into parts of Level 1.
  • It may require more text to reach the necessary level of detail about the system’s functioning.
  • Progression to Levels 3, 4 and beyond is possible, but going beyond Level 3 is uncommon.
  • Doing so can create complexity that makes it difficult to communicate, compare or model effectively.
Figure: Context Diagram

Figure: Level 0 Diagram

Figure: Level 1 DFD for Process 2

Figure: Level 2 DFD for Process 2.2

Figure: Level 0, 1 and 2 Diagrams together

Process

  • Every process has a unique name that is an action-oriented verb phrase, a number, and a description.
  • Every process has at least one input data flow.
  • Every process has at least one output data flow.
  • Output data flows usually have different names than input data flows because the process changes the input into a different output in some way.
  • There are between three and seven processes per DFD.

Data Flow

  • Every data flow has a unique name that is a noun, and a description.
  • Every data flow connects to at least one process.
  • Data flows only in one direction (no two-headed arrows).
  • A minimum number of data flow lines cross.

Data Store

  • Every data store has a unique name that is a noun, and a description.
  • Every data store has at least one input data flow (which means to add new data or change existing data in the data store) on some page of the DFD.
  • Every data store has at least one output data flow (which means to read data from the data store) on some page of the DFD.

External Entity

  • Every external entity has a unique name that is a noun, and a description.
  • Every external entity has at least one input or output data flow.

Figure: DFD element rules within a single DFD and across DFDs

Errors in DFD:

  • An entity cannot provide data to another entity without some processing occurring.
  • Data cannot move directly from an entity to a data store without being processed.
  • Data cannot move directly from a data store to an entity without being processed.
  • Data cannot move directly from one data store to another without being processed.

Other frequently made mistakes in DFD

A second class of DFD mistakes arises when the outputs from one processing step do not match its inputs; they can be classified as:

  • Black holes: a processing step may have input flows but no output flows.
  • Miracles: a processing step may have output flows but no input flows.
  • Grey holes: a processing step may have outputs that are greater than the sum of its inputs.
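
These checks are mechanical enough to automate. Here is a small illustrative sketch (not from the slides; the flows and process names are made up) that flags black holes and miracles from a list of (source, destination) flows:

    # Each flow is a (source, destination) pair; names are hypothetical.
    flows = [
        ("Customer", "1.0 Take Order"),   # input to a process
        ("1.0 Take Order", "Orders"),     # process writes to a data store
        ("2.0 Ship Order", "Customer"),   # output with no input: a miracle
    ]
    processes = {"1.0 Take Order", "2.0 Ship Order"}

    has_input = {dst for _, dst in flows}
    has_output = {src for src, _ in flows}

    for p in sorted(processes):
        if p in has_input and p not in has_output:
            print(f"Black hole: {p} has inputs but no outputs")
        if p in has_output and p not in has_input:
            print(f"Miracle: {p} has outputs but no inputs")
    # prints: Miracle: 2.0 Ship Order has outputs but no inputs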

Chapter 5: System request on SDLC

System Request

In most organizations, project initiation begins by preparing a system request.

  • A  system request is a document that describes the business reasons for building a system and the value that the system is expected to provide.
  • The project sponsor usually completes this form as part of a formal system project selection process within the organization.
  • Most system requests include five elements:
  1. Project Sponsor,
  2. Business Need,
  3. Business Requirements,
  4. Business Value, and
  5. Special Issues.

Project Sponsor?

  • The sponsor describes the person who will serve as the primary contact for the project.

Business Need

  • The business need presents the reasons prompting the project.

Business Requirements

  • The business requirements of the project refer to the business capabilities that the system will need to have.

Business Value

  • Business value describes the benefits that the organization should expect from the system. 

Special Issues

  • Special issues  are included on the document as a catchall category for other information that should be considered in assessing the project.
  • For example, the project may need to be completed by a specific deadline.

Applying the Concepts…!

  • Tune Source is a company headquartered in Dhaka.
  • Tune Source is the brainchild of three entrepreneurs with ties to the music industry: John, Megan, and Phil.
  • Tune Source quickly became known as the place to go to find rare audio recordings.
  • Annual sales last year were BDT 2 million with annual growth at about 3%–5% per year.

Case study

  • John, Megan, and Phil, like many others in the music industry, watched with alarm the rise of music-sharing websites like Napster, as music consumers shared digital audio files without paying for them, denying artists and record labels royalties associated with sales. Once the legal battle over copyright infringement was resolved and Napster was shut down, the partners set about establishing agreements with a variety of industry partners in order to offer a legitimate digital music download resource for customers in their market niche.
  • Phil has asked Carly Edwards, a rising star in the Tune Source department, to spearhead the digital music download project.
  • Tune Source currently has a website that enables customers to search for and purchase CDs. This site was initially developed by an Internet consulting firm and is hosted by a prominent local Internet Service Provider (ISP) in Dhaka. The IT department at Tune Source has become experienced with Internet technology as it has worked with the ISP to maintain the site.

Sales Projection

Create A System Request? (Assignment)

  • Think about your university and choose an idea that could improve student satisfaction with the course enrollment process. Currently, can students enroll for classes from anywhere? How long does it take? Are directions simple to follow? Is online help available?
  • Next, think about how technology can help support your idea. Would you need completely new technology? Can the current system be changed?
  • Question:
  • Create a system request that you could give to the administration that explains the sponsor, business need, business requirements, and potential value of the project. Include any constraints or issues that should be considered.

The Result?

  • The committee reviews the system request and makes an initial determination, based on the information provided, of whether to investigate the proposed project or not.
  • If so, the next step is to conduct a feasibility analysis.

Chapter 4: SDLC design Phase

SDLC design Phase

  • DFD (Design Analysis)
  • Architectural Design
  • UI Design
  • Database Design
  • Program Design
  • Architectural design (logical)
    • Network design
      • Client-server design
      • Client design
      • Server design
        • Cloud Computing
  • Database design
    • ER diagram
    • Relational diagram
    • DDL (not now..!!)
  • Program design (physical)
    • Investigating the hardware/software platform
    • Physical DFD
    • Data storage
    • Data communication design
  • Moving from logical to physical design
  • UI Design

Principles of User Interface Design

  • The principles of user interface design are intended to improve the quality of user interface design.
  •  According to Larry Constantine and Lucy Lockwood in their usage-centered design, these principles are:
  • The structure principle: Design should organize the user interface purposefully, in meaningful and useful ways based on clear, consistent models that are apparent and recognizable to users, putting related things together and separating unrelated things, differentiating dissimilar things and making similar things resemble one another. The structure principle is concerned with overall user interface architecture.
  • The simplicity principle: The design should make simple, common tasks easy, communicating clearly and simply in the user’s own language, and providing good shortcuts that are meaningfully related to longer procedures.
  • The visibility principle: The design should make all needed options and materials for a given task visible without distracting the user with extraneous or redundant information. Good designs don’t overwhelm users with alternatives or confuse with unneeded information.
  • The feedback principle: The design should keep users informed of actions or interpretations, changes of state or condition, and errors or exceptions that are relevant and of interest to the user through clear, concise, and unambiguous language familiar to users.
  • The tolerance principle: The design should be flexible and tolerant, reducing the cost of mistakes and misuse by allowing undoing and redoing, while also preventing errors wherever possible by tolerating varied inputs and sequences and by interpreting all reasonable actions.
  • The reuse principle: The design should reuse internal and external components and behaviors, maintaining consistency with purpose rather than merely arbitrary consistency, thus reducing the need for users to rethink and remember.

UX and UI Design

While UX design is a more analytical and technical field, UI design is closer to what we refer to as graphic design.

What is User Experience Design?

  • User experience design (UXD or UED) is the process of enhancing customer satisfaction and loyalty by improving the usability, ease of use, and pleasure provided in the interaction between the customer and the product.
  • User experience encompasses all aspects of the end-user’s interaction with the company, its services, and its products.
  • User experience design is the process of development and improvement of quality interaction between a user and all facets of a company.
  • User experience design is responsible for being hands on with the process of research, testing, development, content, and prototyping to test for quality results.
  • User experience design is in theory a non-digital (cognitive science) practice, but used and defined predominantly by digital industries.

What is UI Design?

  • User Interface Design is responsible for the transference of a brand’s strengths and visual assets to a product’s interface as to best enhance the user’s experience.
  • User Interface Design is a process of visually guiding the user through a product’s interface via interactive elements and across all sizes/platforms.
  • User Interface Design is a digital field, which includes responsibility for cooperation and work with developers or code.

What is the Difference Between UX and UI Design?

  • A UX designer is like an architect. He takes care of users and helps your business to improve measurable parameters (reduce bounce rate, improve CTR, etc.).
    • He knows a lot about interface ergonomics.
    • He understands the user’s behavior and psychology.
    • He analyzes business needs and converts them into user flows.
  • A UI designer is like a decorator/interior designer. He takes care of how the interface reflects your brand’s mission, using the brand visual style. It’s more about unmeasurable things (how cozy an interface is, whether it is stylish enough, etc.).
    • He knows and ‘feels’ colors and color combinations.
    • He can read brand books and convert them into UI elements.
    • He creates small ‘visual candies’ (pictograms, etc.) and UI animations (now a must-have skill).

Implementation Phases

  • Coding:
    Includes implementation of the design specified in the design document into executable programming language code. The output of the coding phase is the source code for the software that acts as input to the testing and maintenance phase.
  • Integration and Testing: Includes detection of errors in the software. The testing process starts with a test plan that recognizes test-related activities, such as test case generation, testing criteria, and resource allocation for testing. The code is tested and mapped against the design document created in the design phase. The output of the testing phase is a test report containing errors that occurred while testing the application.
  • Installation:
    In this stage the new system is installed and rolled out.

Testing

  • Unit Testing.
  • Integration Testing.
  • Functional Testing.
  • System Testing.
  • Stress Testing.
  • Performance Testing.
  • Usability Testing.
  • Acceptance Testing.

Chapter 3: SDLC and its Life cycle Phases.

What is SDLC?

The systems development life cycle (SDLC), also referred to as the application development life-cycle, is a term used in systems engineering, information systems and software engineering to describe a process for planning, creating, testing, and deploying an information system.

Career Paths for System Developers

Systems Development Life Cycle

  • Building an information system using the SDLC follows a similar set of four fundamental phases:
  • Planning,
  • Analysis,
  • Design,
  • Implementation

The Systems Development Life Cycle

  • Each phase is itself composed of a series of steps, which rely on techniques that produce deliverables (specific documents and files that explain various elements of the system).

Planning

  • The  planning phase is the fundamental process of understanding why  an information system should be built and determining how the project team will go about building it. It has two steps:
  • 1. Project initiation
  • 2. Project management

Project initiation

  • During project initiation , the system’s business value to the organization is identified—how will it lower costs or increase revenues?
  • The IS department works to conduct a feasibility analysis. The  feasibility analysis  examines key aspects of the proposed project:
  • ■ The technical feasibility (Can we build it?)
  • ■ The economic feasibility (Will it provide business value?)
  • ■ The organizational feasibility (If we build it, will it be used?)
  • The system request and feasibility analysis are presented to an information systems approval committee (sometimes called a steering committee ), which decides whether the project should be undertaken.

Project management

  • Once the project is approved, it enters  project management.
  • During project management, the project manager creates a  work plan, staffs the project, and puts techniques in place to help the project team control and direct the project through the entire SDLC.
  • The deliverable for project management is a project plan that describes how the project team will go about developing the system
Analysis

  • The analysis phase answers the questions of who will use the system, what the system will do, and where and when it will be used.
  • During this phase, the project team investigates any current system(s), identifies improvement opportunities, and develops a concept for the new system. This phase has three steps:
  • Analysis strategy
  • Requirements gathering
  • System proposal

Analysis strategy

  • An  analysis strategy is developed to guide the project team’s efforts.
  •  Such a strategy usually includes a study of the current system (called the  as-is system ) and its problems, and envisioning ways to design a new system (called the  to-be system ).

Requirements gathering

  • The next step is requirements gathering (e.g., through interviews, group workshops, or questionnaires).
  • The analysis of this information leads to the development of a concept for a new system.
  • The system concept is then used as a basis to develop a set of business  analysis models  that describes how the business will operate if the new system were developed.

System proposal

  • The analyses, system concept, and models are combined into a document called the   system proposal , which is presented to the project sponsor and other key decision makers (e.g., members of the approval committee) who will decide whether the project should continue to move forward.

Design

  • The  design phase  decides how  the system will operate in terms of the hardware, software, and network infrastructure that will be in place; the user interface, forms, and reports that will be used; and the specific programs, databases, and files that will be needed.
  • The design phase has four steps:
  • Design strategy
  • Architecture design
  • Database and file specifications
  • Program design

Design strategy

  • This clarifies whether the system will be developed by the company’s own programmers, whether its development will be outsourced to another firm (usually a consulting firm), or whether the company will buy an existing software package.

Architecture design

  • This leads to the development of the basic  architecture design  for the system that describes the hardware, software, and network infrastructure that will be used.
  • The  interface design  specifies how the users will move through the system (e.g., by navigation methods such as menus and on-screen buttons) and the forms and reports that the system will use.

Database and file specifications

  • These define exactly what data will be stored and where they will be stored.

Program design

  • The analyst team develops the  program design, which defines the programs that need to be written and exactly what each program will do.

To sum up…

  • This collection of deliverables (architecture design, interface design, database and file specifications, and program design) is the system specification that is used by the programming team for implementation.
  • At the end of the design phase, the feasibility analysis and project plan are reexamined and revised, and another decision is made by the project sponsor and approval committee about whether to terminate the project or continue.
 
Implementation

  • The final phase in the SDLC is the implementation phase, during which the system is actually built (or purchased, in the case of a packaged software design) and installed.
  • It is the longest and most expensive single part of the development process. This phase has three steps:
  • System construction
  • Installation
  • Supportive plan

System construction

  • The system is built and tested to ensure that it performs as designed.
  • Since the cost of fixing bugs can be immense, testing is one of the most critical steps in implementation.
  • Most organizations spend more time and attention on testing than on writing the programs in the first place.

Installation

  • Installation is the process by which the old system is turned off and the new one is turned on.

Supportive plan

  • This plan usually includes a formal or informal post-implementation review, as well as a systematic way for identifying major and minor changes needed for the system.

Chapter 2: SDLC Key Features For SYSTEMS ANALYST.

  • Once upon a time, software development consisted of a programmer writing code to solve a problem or automate a procedure. Nowadays, systems are so big and complex that teams of architects, analysts, programmers, testers and users must work together to create the millions of lines of custom-written code that drive our enterprises.
  • To manage this, a number of system development life cycle (SDLC) models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and synchronize and stabilize.

What is SDLC?

  • The  systems development life cycle  (SDLC) is the process of determining how an information system (IS) can support business needs, designing the system, building it, and delivering it to users.
SDLC cycle

The systems development life cycle (SDLC), also referred to as the application development life-cycle, is a term used in systems engineering, information systems and software engineering to describe a process for planning, creating, testing, and deploying an information system.

SDLC key person?

  • The key person in the SDLC is the systems analyst, who analyzes the business situation, identifies opportunities for improvements, and designs an information system to implement the improvements.

THE SYSTEMS ANALYST

  • The systems analyst plays a key role in information systems development projects.
  • The systems analyst works closely with all project team members so that the team develops the right system in an effective way.
  • Systems analysts must understand how to apply technology to solve business problems.
  • In addition, systems analysts may serve as change agents who identify the organizational improvements needed, design systems to implement those changes, and train and motivate others to use the systems.

Systems Analyst  Skills

  • Skills can be broken down into six major categories:
  • TECHNICAL skill,
  • BUSINESS skill,
  • ANALYTICAL skill,
  • INTERPERSONAL skill,
  • MANAGEMENT skill,
  • ETHICAL issue.

Technical skills

  • Analysts must have the technical skills to understand the organization’s existing technical environment, the new system’s technology foundation, and the way in which both can be fit into an integrated technical solution.

Business skills

  • Business skills are required to understand how IT can be applied to business situations and to ensure that the IT delivers real business value.

Analytical skills

  • Analysts are continuous problem solvers at both the project and the organizational level, and they put their analytical skills to the test regularly.

Interpersonal skills

  • Often, analysts need to communicate effectively, one-on-one with users and business managers (who often have little experience with technology) and with programmers (who often have more technical expertise than the analyst does).
  • They must be able to give presentations to large and small groups and to write reports.

Management skills

  • They also need to manage people with whom they work, and they must manage the pressure and risks associated with unclear situations.

Ethical issues

  • Finally, analysts must deal fairly, honestly, and ethically with other project team members, managers, and system users.
  • Analysts often deal with confidential information or information that, if shared with others, could cause harm (e.g., dissent among employees); it is important for analysts to maintain confidence and trust with all people.

Systems Analyst  Roles

  • The roles and the names used to describe them may vary from organization to organization.

Systems analyst role

  • The systems analyst  role focuses on the IS issues surrounding the system.
  • This person develops ideas and suggestions for ways that IT can support and improve business processes, helps design new business processes supported by IT, designs the new information system, and ensures that all IS standards are maintained.
  • The systems analyst will have significant training and experience in analysis and design and in programming.

Business analyst role

  • The  business analyst role focuses on the business issues surrounding the system.
  • This person helps to identify the business value that the system will create, develops ideas for improving the business processes, and helps design new business processes and policies.
  • The business analyst will have business training and experience, plus knowledge of analysis and design.

Requirements analyst role

  • The  requirements analyst role focuses on eliciting the requirements from the stakeholders associated with the new system.
  • As more organizations recognize the critical role that complete and accurate requirements play in the ultimate success of the system, this specialty has gradually evolved.
  • Requirements analysts understand the business well, are excellent communicators, and are highly skilled in an array of requirements elicitation techniques.

Infrastructure analyst role

  • The  infrastructure analyst  role focuses on technical issues surrounding the ways the system will interact with the organization’s technical infrastructure (hardware, software, networks, and databases).
  • The infrastructure analyst will have significant training and experience in networking, database administration, and various hardware and software products.

Change management analyst role

  • The change management analyst role focuses on the people and management issues surrounding the system installation.
  • This person ensures that adequate documentation and support are available to users, provides user training on the new system, and develops strategies to overcome resistance to change.
  • The change management analyst will have significant training and experience in organizational behavior and specific expertise in change management.

Project manager role

The  project manager role ensures that the project is completed on time and within budget and that the system delivers the expected value to the organization.

  • The project manager is often a seasoned systems analyst who, through training and experience, has acquired specialized project management knowledge and skills.

Assignment

Suppose you decide to become an analyst after you graduate. What type of analyst would you most prefer to be? What type of courses should you take before you graduate? What type of internship should you seek?

QUESTION:

  • Develop a short plan that describes how you will prepare for your career as an analyst.

THE SYSTEMS DEVELOPMENT LIFE CYCLE

  • In many ways, building an information system is similar to building a house.

THE SYSTEMS DEVELOPMENT LIFE CYCLE

  • First, the owner describes the vision for the house to the developer.
  • Second, this idea is transformed into sketches and drawings that are shown to the owner and refined (often, through several drawings, each improving on the other) until the owner agrees that the pictures depict what he or she wants.
  • Third, a set of detailed blueprints is developed that presents much more specific information about the house (e.g., the layout of rooms, placement of plumbing fixtures and electrical outlets, and so on).
  • Finally, the house is built following the blueprints and often with some changes and decisions made by the owner as the house is erected.
  • Building an information system using the SDLC follows a similar set of four fundamental phases:
  • Planning,
  • Analysis,
  • Design,
  • Implementation

Chapter 1: System analysis and Design Overview.

System analysis is a method of studying a system by examining its component parts and their interactions.

It provides a framework in which judgments of the experts in different fields can be combined to determine what must be done, and what is the best way to accomplish it in light of current and future needs. 

  • The system analyst (usually a software engineer or programmer) examines the flow of documents, information, and material to design a system that best meets the cost, performance, and scheduling objectives.

Systems design is the process of defining the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements.

  • Systems design could be seen as the application of systems theory to product development.

Successful Systems

  • How will you know if you’ve helped to produce a successful system?
  • Does the system achieve the goals set for it?
  • How well does the system fit the structure of the business for which it was developed?
  • Is the new system accurate, secure and reliable?
  • Is the system well documented and easy to understand?


The figure shows an industrial organization with subsystems for marketing and purchasing, production, and support.

Marketing and Purchasing:

  • These are the main links with the environment as represented by customers and suppliers.
  • It’s important to recognize, however, that the environment also interacts with the organization through legislation, social pressures, competitive forces, the education system and political decisions.

Production system:

  • This is concerned with transforming raw materials into finished products.
  • It applies just as much in service organizations as in traditional manufacturing industry: an architectural drawing office is the equivalent of a motor engine assembly line.

Support systems:

  • These are shown as the accounting, personnel and management control subsystems.
  • For this organization to work effectively it has to make good use of information, so the need arises for information systems that collect, store, transform and display information about the business.

Why do businesses want to develop information systems?

  • To reduce manpower costs.
  • To improve customer service.
  • To improve management information.
  • To secure or defend competitive advantage.

Reduce manpower costs ..!

•The introduction of computer-based systems has often enabled work to be done by fewer staff or, more likely nowadays, has permitted new tasks to be undertaken without increasing staffing levels.

Improve customer service…!

Computer systems can often allow organizations to serve customers more quickly or to provide them with additional services.

Improve management information….!

Management decisions can only be as good as the information on which they are based, so many computer systems have been designed to produce more, more accurate, or more timely information. Modern database query facilities are a good example.

Secure or defend competitive advantage…..!

 View of systems

  • We can represent information systems structure in two ways:
    • either in a non-hierarchical way, showing each subsystem on the same level,
    • or in a hierarchical way, where some systems sit on top of others.
  • This multilevel view is often more helpful as it shows the different levels of control, the different data requirements, and a different view of the organization of each system.


Fig.: A hierarchical view of systems

Hierarchical view of systems:

  • At the top level are strategic systems and decision support systems that inform the organization.
  • Strategic systems use information from lower-level internal systems and externally obtained information about markets, social trends and competitor behavior.
  • Underneath strategic systems lie managerial or tactical systems that are concerned with the monitoring and control of business functions.
  • The operational systems level is concerned with the routine processing of transactions such as orders, invoices, schedules and statements.

Other models

A useful and widely used model is the Gibson–Nolan four-stage model of:

  • Initiation
  • Expansion
  • Formalization
  • Maturity

Gibson–Nolan four-stage model

  • During the initiation phase, the repetitive processing of large volumes of transactions occurs.
  • The expansion stage applies the new technology to as many applications as possible. This is the honeymoon period for the system, until one day a halt is called to the ever-growing system budget, and the introduction of development planning and controls signals the start of the formalization stage.
  • During the formalization stage the need for information surpasses the need for data, and the organization begins to plan its way from a mixture of separate data processing systems towards a more coordinated and integrated approach.
  • Corporate recognition of the need for integrated systems is the characteristic of the maturity stage. Here we see the use of open system architectures, database environments and comprehensive systems planning.

Role of the Analyst and Designer

  • Analysts and designers are not always the same person.

Attributes analysts or designers should possess

  • The ability to uncover the fundamental issues of a problem.
  • The ability to prepare sound plans, appreciate the effect that new data will have on them, and re-plan appropriately.
  • To be perceptive but not jump to conclusions; to be persistent in overcoming difficulties and obstacles and maintain a planned course of action to achieve results.
  • To exhibit the stamina, strength of character and sense of purpose essential in a professional specialist.
  • To have a broad, flexible outlook, an orderly mind and a disciplined approach, as the job will frequently require working without direct supervision.
  • To possess higher-than-average social skills so as to work well with others, and the ability to express thoughts, ideas, suggestions and proposals clearly, both orally and in writing.

Why do we study systems analysis and design?

  • Systems analysis and design is needed to analyze data input or data flow systematically, process or transform data, store data, and output information in the context of a particular business.
  • It is used to analyze, design, and implement improvements in the support of users.
  • Systems analysis and design involves working with current and eventual users of information systems to support them in working with technologies in an organizational setting.
  • Installing a system without proper planning leads to great user dissatisfaction and frequently causes the system to fall into disuse.
  • Systems analysis and design lends structure to the analysis and design of information systems, a costly endeavor that might otherwise have been done in a haphazard way.

Chapter 4: Concept Of Sampling, Quantization And Resolutions

Chapter 4: Concept Of Sampling, Quantization And Resolutions

Concept Of Sampling, Quantization And Resolutions

Conversion of analog signal to digital signal:

The output of most image sensors is an analog signal, and we cannot apply digital processing to it because we cannot store it. We cannot store it because storing a signal that can have infinite values would require infinite memory. So we have to convert the analog signal into a digital signal. To create a digital image, we need to convert continuous data into digital form. This is done in two steps:

  • Sampling
  • Quantization

We will discuss sampling now; quantization will be discussed later on. For now, we will just briefly cover the difference between these two steps and why both are needed.

Basic idea:

The basic idea behind converting an analog signal to a digital signal is to convert both of its axes (x and y) into a digital format. Since an image is continuous not just in its coordinates (x axis) but also in its amplitude (y axis), the part that deals with digitizing the coordinates is known as sampling, and the part that deals with digitizing the amplitude is known as quantization.

Sampling

The term sampling refers to taking samples. In sampling we digitize the x axis. It is done on the independent variable; in the case of the equation y = sin(x), it is done on the x variable. It is further divided into two parts: upsampling and downsampling.

If you look at a sampled signal, you will see some random variations in it. These variations are due to noise. In sampling we reduce this noise by taking samples: the more samples we take, the better the quality of the image and the more noise is removed, and vice versa.

However, if you sample on the x axis alone, the signal is not converted to digital format; you must also sample the y axis, which is known as quantization. More samples eventually means you are collecting more data, and in the case of an image, it means more pixels.

Relationship with pixels

A pixel is the smallest element of an image. The total number of pixels in an image can be calculated as

Pixels = total no. of rows * total no. of columns

Let’s say we have a total of 25 pixels; that means we have a square image of 5 x 5. As discussed above in sampling, more samples eventually result in more pixels. So it means that we have taken 25 samples of our continuous signal on the x axis, which corresponds to the 25 pixels of this image. This leads to another conclusion: since a pixel is also the smallest division of a CCD array, it has a relationship with the CCD array too, which can be explained as follows.

Relationship with CCD array

The number of sensors on a CCD array is directly equal to the number of pixels. And since we have concluded that the number of pixels is directly equal to the number of samples, that means the number of samples is directly equal to the number of sensors on the CCD array.

Oversampling

In the beginning we defined that sampling is further categorized into two types: upsampling and downsampling. Upsampling is also called oversampling. Oversampling has a very deep application in image processing, known as zooming.

Zooming

We will formally introduce zooming in an upcoming tutorial, but for now we will just briefly explain it. Zooming refers to increasing the quantity of pixels, so that when you zoom into an image, you see more detail. The increase in the quantity of pixels is done through oversampling. One way to zoom, i.e. to increase samples, is to zoom optically, through the motor movement of the lens, and then capture the image; but here we have to do it after the image has been captured.

There is a difference between zooming and sampling

The concept is the same, which is to increase samples. But the key difference is that while sampling is done on signals, zooming is done on the digital image.

Quantization

Digitizing a signal

As we have seen in the previous tutorials, digitizing an analog signal into a digital one requires two basic steps: sampling and quantization. Sampling is done on the x axis; it is the conversion of the x axis’s infinite values into digital values.

Sampling with relation to digital images

The concept of sampling is directly related to zooming: the more samples you take, the more pixels you get. Oversampling can also be called zooming; this has been discussed under the sampling and zooming tutorial. But the story of digitizing a signal does not end at sampling; there is another step involved, known as quantization.

What is quantization

Quantization is the opposite of sampling. It is done on the y axis. When you are quantizing an image, you are actually dividing a signal into quanta (partitions). On the x axis of the signal are the coordinate values, and on the y axis we have amplitudes. So digitizing the amplitudes is known as quantization.

Here is how it is done:

Figure: A signal quantized into three levels

You can see in this image that the signal has been quantized into three different levels. That means that when we sample an image, we actually gather a lot of values, and in quantization, we set levels for these values. This is made clearer in the image below.

Figure: Samples quantized into five gray levels

In the figure shown in sampling, although the samples had been taken, they were still spanning vertically over a continuous range of gray level values. In the figure shown above, these vertically ranging values have been quantized into 5 different levels or partitions, ranging from 0 (black) to 4 (white). This level could vary according to the type of image you want.
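
A minimal sketch of this uniform quantization step in Python (the sample amplitudes are assumed, normalized to the range 0 to 1):

    levels = 5
    samples = [0.02, 0.31, 0.48, 0.77, 0.95, 0.60]  # assumed amplitudes in [0, 1]

    # Map each continuous amplitude to an integer level 0 (black) .. 4 (white).
    quantized = [min(int(s * levels), levels - 1) for s in samples]
    print(quantized)  # [0, 1, 2, 3, 4, 3]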

The relation of quantization with gray levels has been further discussed below.

Relation of Quantization with gray level resolution:

The quantized figure shown above has 5 different levels of gray. It means that the image formed from this signal would only have 5 different colors: more or less a black and white image with some shades of gray. Now, if you want to make the quality of the image better, there is one thing you can do here: increase the levels, i.e. the gray level resolution. If you increase this level to 256, you have a grayscale image, which is far better than a simple black and white image. Now 256, or 5, or whatever level you choose, is called the gray level. Remember the formula that we discussed in the previous tutorial of gray level resolution, which is

L = 2^k

We have discussed that the gray level can be defined in two ways:

  • Gray level = number of bits per pixel (BPP).
  • Gray level = number of levels per pixel.

In this case we have a gray level equal to 256. If we have to calculate the number of bits, we simply put the values in the equation. In the case of 256 levels, we have 256 different shades of gray and 8 bits per pixel, hence the image would be a grayscale image.
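
The L = 2^k relationship is easy to check in a couple of lines of Python:

    # L = 2^k: gray levels from bits per pixel
    for k in (1, 2, 4, 8):
        print(f"{k} bpp -> {2 ** k} gray levels")
    # 1 bpp -> 2, 2 bpp -> 4, 4 bpp -> 16, 8 bpp -> 256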

Reducing the gray level

Now we will reduce the gray levels of the image to see the effect on the image.

For example

Let’s say you have an image of 8 bpp, which has 256 different levels. It is a grayscale image, and the image looks something like this.

Figure: 256 Gray Levels

Now we will start reducing the gray levels. We will first reduce the gray levels from 256 to 128.

Figure: 128 Gray Levels

There is not much effect on the image after decreasing the gray levels to half. Let’s decrease some more.

Figure: 64 Gray Levels

Still not much effect; let’s reduce the levels more.

Figure: 32 Gray Levels

Surprised to see that there is still only a little effect? Maybe it’s because it is the picture of Einstein, but let’s reduce the levels further.

Figure: 16 Gray Levels

Here we go: the image finally reveals that it is affected by the levels.

Figure: 8 Gray Levels

Figure: 4 Gray Levels

Now, before reducing it further to 2 levels, you can easily see that the image has already been distorted badly by reducing the gray levels. Next we will reduce it to 2 levels, which is nothing but a simple black and white level; the image would be a simple black and white image.

Figure: 2 Gray Levels

That’s the last level we can achieve, because if we reduce it further, it would simply be a black image, which cannot be interpreted.
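
The reduction shown above can be reproduced with a short sketch, assuming the 8-bpp image is available as a NumPy array (how it is loaded is left open):

    import numpy as np

    def reduce_gray_levels(img, levels):
        """Requantize an 8-bit grayscale image down to `levels` gray levels."""
        step = 256 // levels            # width of each quantization bin
        return (img // step) * step     # snap each pixel to its bin's base value

    # Synthetic stand-in for a real photo: a 16x16 gradient of all 256 values.
    img = np.arange(256, dtype=np.uint8).reshape(16, 16)
    print(np.unique(reduce_gray_levels(img, 16)).size)  # 16 levels remain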

Contouring

There is an interesting observation here: as we reduce the number of gray levels, a special type of effect starts appearing in the image, which can be seen clearly in the 16 gray level picture. This effect is known as contouring.

Image Resolution

Image resolution can be defined in many ways. One of them is pixel resolution, which has been discussed in the tutorial of pixel resolution and aspect ratio.

 

Spatial resolution

Spatial resolution states that the clarity of an image cannot be determined by its pixel resolution alone; the number of pixels in an image by itself does not determine clarity. Spatial resolution can be defined as the number of independent pixel values per inch. In short, spatial resolution means that we cannot compare two different types of images to see which one is clear and which one is not. If we have to compare two images to see which one is clearer or has more spatial resolution, we have to compare two images of the same size.

For example:

You cannot compare these two images to see the clarity of the image.


Although both images are of the same person, that is not the condition we are judging on. The picture on the left is a zoomed-out picture of Einstein with dimensions of 227 x 222, whereas the picture on the right has dimensions of 980 x 749 and is a zoomed-in image. We cannot compare them to see which one is clearer. Remember that the factor of zoom does not matter in this condition; the only thing that matters is that these two pictures are not of equal size.

So in order to measure spatial resolution, the pictures below will serve the purpose.


Now you can compare these two pictures. Both pictures have the same dimensions, 227 x 222. When you compare them, you will see that the picture on the left side has more spatial resolution, i.e. it is clearer than the picture on the right side, because the picture on the right is a blurred image.

Measuring spatial resolution

Since spatial resolution refers to clarity, different measures have been devised to measure it for different devices.

For example

  • Dots per inch
  • Lines per inch
  • Pixels per inch

They are discussed in more detail in the next tutorial, but a brief introduction is given below.

Dots per inch

Dots per inch or DPI is usually used in monitors.

Lines per inch

Lines per inch or LPI is usually used in laser printers.

Pixel per inch

Pixels per inch, or PPI, is the measure used for devices such as tablets, mobile phones, etc.

Gray level resolution

Gray level resolution refers to the predictable or deterministic change in the shades or levels of gray in an image. In short gray level resolution is equal to the number of bits per pixel. We have already discussed bits per pixel in our tutorial of bits per pixel and image storage requirements. We will define bpp here briefly.

BPP

The number of different colors in an image depends on the depth of color, or bits per pixel.

Mathematically

The mathematical relation between gray level resolution and bits per pixel can be given as:

L = 2^k

In this equation, L refers to the number of gray levels; it can also be defined as the shades of gray. And k refers to bpp, or bits per pixel. So 2 raised to the power of the bits per pixel is equal to the gray level resolution.

For example:

Figure: Grayscale Einstein image

The above image of Einstein is a grayscale image, i.e. an image with 8 bits per pixel, or 8 bpp. To calculate the gray level resolution, we put the values in the equation: L = 2^8 = 256. It means its gray level resolution is 256; in other words, this image has 256 different shades of gray. The more bits per pixel an image has, the higher its gray level resolution.

Defining gray level resolution in terms of bpp

It is not necessary that a gray level resolution should only be defined in terms of levels. We can also define it in terms of bits per pixel.

For example

If you are given an image of 4 bpp and asked to calculate its gray level resolution, there are two answers to that question: the first answer is 16 levels, and the second answer is 4 bits.

Finding bpp from Gray level resolution

You can also find the bits per pixels from the given gray level resolution. For this, we just have to twist the formula a little.

Equation 1:

L = 2^k

For example, where k = 8: L = 2^8 = 256.

This formula finds the levels. Now if we want to find the bits per pixel, in this case k, we simply rearrange it like this:

k = log2(L)    (Equation 2)

Because in the first equation the relationship between the levels (L) and the bits per pixel (k) is exponential, we have to invert it, and the inverse of the exponential is the logarithm. Let’s take an example of finding the bits per pixel from the gray level resolution.

For example:

If you are given an image of 256 levels, what are the bits per pixel required for it?

Putting 256 in the equation, we get:

k = log2(256) = 8

So the answer is 8 bits per pixel.
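
Equation 2 can be verified with Python’s math module:

    import math

    # k = log2(L): bits per pixel from the number of gray levels
    for L in (2, 16, 256):
        print(f"{L} levels -> {int(math.log2(L))} bits per pixel")
    # 2 -> 1, 16 -> 4, 256 -> 8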

Gray level resolution and quantization:

Quantization will be formally introduced in the next tutorial; here we are just going to explain the relationship between gray level resolution and quantization. Gray level resolution is found on the y axis of the signal. In the tutorial Introduction to Signals and Systems, we studied that digitizing an analog signal requires two steps: sampling and quantization.


Sampling is done on the x axis, and quantization is done on the y axis.

So that means digitizing the gray level resolution of an image is done in quantization.


Chapter 3: Images and Conversions in Digital Image Processing

Images And Conversions

There are many types of images, and we will look in detail at the different types and the color distribution in them.

The binary image

The binary image, as its name states, contains only two pixel values: 0 and 1.

In our previous tutorial of bits per pixel, we explained in detail the representation of pixel values by their respective colors.

Here 0 refers to black color and 1 refers to white color. It is also known as Monochrome.

Black and white image:

The resulting image that is formed hence consists of only black and white, and thus can also be called a black and white image.

 

No gray level

One of the interesting things about this binary image is that there is no gray level in it. Only two colors, black and white, are found in it.

Format

Binary images have the format PBM (Portable Bitmap).

2, 3, 4, 5, 6 bit color formats

The images with a color format of 2, 3, 4, 5 or 6 bits are not widely used today. They were used in old times for old TV displays or monitor displays.

Each of these formats has more than two gray levels, and hence has shades of gray, unlike the binary image.

A 2-bit format has 4 different colors, a 3-bit format 8, a 4-bit format 16, a 5-bit format 32, and a 6-bit format 64.

8 bit color format

The 8-bit color format is one of the most famous image formats. It has 256 different shades of colors in it. It is commonly known as the grayscale image format.

The range of colors in 8 bits varies from 0 to 255, where 0 stands for black, 255 stands for white, and 127 stands for gray.

This format was used initially by early models of the operating systems UNIX and the early color Macintoshes.

A grayscale image of Einstein is shown below:

 

Format

The format of these images is PGM (Portable Gray Map).

This format is not supported by default in Windows. In order to see a grayscale image, you need an image viewer or an image processing toolbox such as Matlab.

Behind gray scale image:

As we have explained several times in the previous tutorials, an image is nothing but a two-dimensional function and can be represented by a two-dimensional array or matrix. So in the case of the image of Einstein shown above, there is a two-dimensional matrix behind it with values ranging from 0 to 255.

But that’s not the case with color images.

16 bit color format

It is a color image format. It has 65,536 different colors in it. It is also known as the high color format.

It has been used by Microsoft in systems that support more than the 8-bit color format. Both this 16-bit format and the next format we are going to discuss, the 24-bit format, are color formats.

The distribution of color in a color image is not as simple as it was in the grayscale image.

A 16-bit format is actually divided into three further portions, Red, Green and Blue: the famous RGB format.

It is pictorially represented in the image below.

Figure 1: Einstein (Left); 16 bit Format (Right)

Now the question arises: how would you distribute 16 bits into three channels? If you do it like this,

5 bits for R, 5 bits for G, 5 bits for B,

then there is one bit remaining at the end.

So the distribution of the 16 bits has been done like this:

5 bits for R, 6 bits for G, 5 bits for B.

The additional bit that was left over is added to the green channel, because green is the color that is most soothing to the eyes of all three colors.

Note that this distribution is not followed by all systems. Some have introduced an alpha channel in the 16 bits.

Another distribution of 16 bit format is like this:

  • 4 bits for R, 4 bits for G, 4 bits for B, 4 bits for the alpha channel.

Or some distribute it like this

  • 5 bits for R, 5 bits for G, 5 bits for B, 1 bit for the alpha channel.
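
To make the 5-6-5 distribution concrete, here is an illustrative sketch (not from the slides) that packs an 8-bit-per-channel color into a single 16-bit value:

    def pack_rgb565(r, g, b):
        """Pack 8-bit R, G, B into 16 bits: 5 for R, 6 for G, 5 for B."""
        return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)  # drop low bits

    print(hex(pack_rgb565(255, 255, 255)))  # 0xffff: white fills all 16 bits
    print(hex(pack_rgb565(0, 255, 0)))      # 0x7e0: green gets the middle 6 bits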

24 bit color format

The 24-bit color format is also known as the true color format. Like the 16-bit color format, in a 24-bit color format the 24 bits are again distributed among three channels: Red, Green and Blue. (The figure is the same as for 16 bits, but with the 24 bits distributed among R, G and B.)

Since 24 divides equally into three 8s, the bits are distributed equally between the three color channels.

Their distribution is like this.

8 bits for R, 8 bits for G, 8 bits for B.

Behind a 24 bit image.

Unlike an 8 bit grayscale image, which has one matrix behind it, a 24 bit image has three different matrices, one each for R, G and B.
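A sketch in Python with NumPy: a 24 bit image can be held as a rows × columns × 3 array, and the three matrices are simply its channel slices:

    import numpy as np

    # A 2x2 color image: each pixel carries three 8-bit values (R, G, B).
    img = np.array([[[255, 0, 0],   [0, 255, 0]],
                    [[0, 0, 255],   [255, 255, 255]]], dtype=np.uint8)

    # The three matrices "behind" the image are the channel slices.
    R, G, B = img[:, :, 0], img[:, :, 1], img[:, :, 2]
    print(R)  # the red-channel matrix, a 2x2 array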

 

Color Codes Conversion

Different color codes

All the colors here are in the 24 bit format; that means each color has 8 bits of red, 8 bits of green and 8 bits of blue in it. In other words, each color has three portions, and you just change the quantities of these three portions to make any color.

Binary color format

Color: Black

Decimal Code:

0,0,0

Explanation:

As explained in the previous tutorials, in an 8-bit format 0 refers to black. So to make a pure black color, we set all three portions, R, G and B, to 0.

Color: White

 

Decimal Code:

255,255,255

Explanation:

Since each portion of R, G and B is 8 bits, and in 8 bits white is represented by 255 (as explained in the tutorial on pixels), we set each portion to 255, and that gives us white.

RGB color model:

Color: Red

Decimal Code:

255,0,0

Explanation:

Since we need only red, we zero out the other two portions, green and blue, and set the red portion to its maximum, which is 255.

Color: Green

Decimal Code:

0,255,0

Explanation:

Since we need only green, we zero out the other two portions, red and blue, and set the green portion to its maximum, which is 255.

Color: Blue

Decimal Code:

0,0,255

Explanation:

Since we need only blue, we zero out the other two portions, red and green, and set the blue portion to its maximum, which is 255.

Gray color:

Color: Gray

Decimal Code:

128,128,128

Explanation

As we have already defined in the tutorial on pixels, gray is actually the mid point. In an 8-bit format, the mid point is 127 or 128; here we choose 128. So we set each of the three portions to the mid point 128, which results in an overall mid value, and we get gray.

CMYK color model:

CMYK is another color model, where C stands for cyan, M for magenta, Y for yellow, and K for black. The CMYK model is commonly used in color printers, which use two cartridges of color: one containing CMY and the other containing black.

The colors of CMY can also be made by changing the quantities of red, green and blue.

Color: Cyan

Decimal Code:

0,255,255

Explanation:

Cyan is formed from the combination of two colors, green and blue. So we set those two to maximum and zero out the red portion, and we get cyan.

Color: Magenta

Decimal Code:

255,0,255

Explanation:

Magenta is formed from the combination of two colors, red and blue. So we set those two to maximum and zero out the green portion, and we get magenta.

Color: Yellow

Decimal Code:

255,255,0

Explanation:

Yellow is formed from the combination of two colors, red and green. So we set those two to maximum and zero out the blue portion, and we get yellow.
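Since, in 8 bit terms, each CMY component is simply 255 minus the corresponding RGB component, the conversion can be sketched like this (a minimal example built on the complement relationship described above; rgb_to_cmy is a hypothetical helper):

    def rgb_to_cmy(r, g, b):
        # Cyan absorbs red, magenta absorbs green, yellow absorbs blue.
        return 255 - r, 255 - g, 255 - b

    print(rgb_to_cmy(255, 0, 0))    # (0, 255, 255): red = magenta + yellow
    print(rgb_to_cmy(0, 255, 255))  # (255, 0, 0): cyan is pure C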

Conversion

Now we will see how colors are converted from one format to another.

Conversion from RGB to Hex code:

Conversion from RGB to hex is done through this method:

  • Take a color. E.g. white = 255,255,255.
  • Take the first portion, e.g. 255.
  • Divide it by 16.
  • Take the quotient and the remainder: in this case both are 15, which in hex is F, giving FF.
  • Repeat step 2 for the next two portions, then combine all the hex codes into one.

Answer: #FFFFFF
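The repeated division by 16 can be written as a short Python sketch (channel_to_hex is a hypothetical helper name, not part of the tutorial):

    def channel_to_hex(value):
        # Divide by 16: the quotient and remainder are the two hex digits.
        q, r = divmod(value, 16)
        digits = "0123456789ABCDEF"
        return digits[q] + digits[r]

    r, g, b = 255, 255, 255
    print("#" + channel_to_hex(r) + channel_to_hex(g) + channel_to_hex(b))  # #FFFFFF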

Conversion from Hex to RGB:

Conversion from a hex code to the RGB decimal format is done this way:

Take a hex number, e.g. #FFFFFF.

Break this number into 3 parts: FF FF FF.

Take the first part and separate its two digits: F F.

Convert each digit separately into binary: 1111 1111.

Now combine the two binaries into one: 11111111.

Convert this binary into decimal: 255.

Now repeat the same steps for the remaining two parts.

The value obtained from the first part is R, the second is G, and the third belongs to B.

Answer: 255,255,255
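The reverse direction, hex to RGB, as a similar sketch (hex_to_rgb is again a hypothetical helper; Python's built-in int(..., 16) does the binary-to-decimal step for us):

    def hex_to_rgb(code):
        # Break "#RRGGBB" into three two-digit parts, convert each to decimal.
        code = code.lstrip("#")
        return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

    print(hex_to_rgb("#FFFFFF"))  # (255, 255, 255)
    print(hex_to_rgb("#808080"))  # (128, 128, 128): gray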

Common colors and their hex codes are given in this table.

Color     Hex Code
Black     #000000
White     #FFFFFF
Gray      #808080
Red       #FF0000
Green     #00FF00
Blue      #0000FF
Cyan      #00FFFF
Magenta   #FF00FF
Yellow    #FFFF00

 

RGB to Grayscale Conversion

Now we will convert a color image into a grayscale image. There are two methods to convert it, each with its own merits and demerits:

  • Average method
  • Weighted method or luminosity method

Average method

The average method is the simplest one. You just take the average of the three channels: since it is an RGB image, you add R, G and B and then divide by 3 to get your grayscale image.

It is done in this way:

Grayscale = (R + G + B) / 3

For example:

If you have a color image like the one shown above and you convert it into grayscale using the average method, the following result appears.

Explanation

One thing is certain: something has happened to the original image, which means our average method works. But the result is not as expected. We wanted to convert the image to grayscale, but it turned out to be a rather dark image.

Problem

This problem arises from the fact that we take a plain average of the three channels. The three colors have three different wavelengths and make their own contributions to the formation of the image, so we should average them according to their contributions rather than weighting them equally. Right now what we are doing is this:

33% of Red, 33% of Green, 33% of Blue

We take 33% of each, meaning each channel contributes equally to the image. But in reality that is not the case. The solution to this is given by the luminosity method.

Weighted method or luminosity method

You have seen the problem that occurs with the average method; the weighted method solves it. Red has the longest wavelength of the three colors, while green not only has a shorter wavelength than red but is also the color that is most soothing to the eye.

This means we should decrease the contribution of red, increase the contribution of green, and put the contribution of blue between the two.

So the new equation is:

New grayscale image = (0.3 × R) + (0.59 × G) + (0.11 × B)

According to this equation, red contributes 30%, green contributes 59% (the largest share of the three), and blue contributes 11%.

Applying this equation to the image, we get this

Original Image (left); Grayscale Image (right)

Explanation

As you can see, the image has now been properly converted to grayscale using the weighted method. Compared to the result of the average method, this image is brighter.
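Both methods can be compared side by side in a short NumPy sketch (a random image stands in for a real one; the 0.3/0.59/0.11 weights are the ones quoted above):

    import numpy as np

    img = np.random.randint(0, 256, size=(4, 4, 3))   # a random 4x4 RGB image
    R, G, B = img[..., 0], img[..., 1], img[..., 2]

    average  = (R + G + B) / 3                        # equal 33% weights
    weighted = 0.3 * R + 0.59 * G + 0.11 * B          # luminosity weights

    print(average.round().astype(np.uint8))
    print(weighted.round().astype(np.uint8))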


Chapter 2: Concept of Pixel in Digital Image Processing

Concept of Pixel

Pixel

A pixel is the smallest element of an image. Each pixel corresponds to a single value. In an 8-bit grayscale image, the value of the pixel lies between 0 and 255. The value of a pixel at any point corresponds to the intensity of the light photons striking that point; each pixel stores a value proportional to the light intensity at that particular location.

PEL

  • A pixel is also known as a PEL. You can get a better understanding of the pixel from the pictures given below.
  • In the picture above, there may be thousands of pixels that together make up the image. We zoom into that image to the extent that we can see individual pixel divisions, as shown in the image below.

Figure 1

Relationship with CCD array

  • We have seen how an image is formed in the CCD array, so a pixel can also be defined as follows:
  • The smallest division of the CCD array is also known as a pixel.
  • Each division of the CCD array contains a value corresponding to the intensity of the photons striking it. This value can also be called a pixel.

Calculation of total number of pixels

  • We have defined an image as a two dimensional signal or matrix. In that case, the number of PELs equals the number of rows multiplied by the number of columns.
  • This can be represented mathematically as:
  • Total number of pixels = number of rows × number of columns
  • Or we can say that the number of (x, y) coordinate pairs makes up the total number of pixels.
  • We will look in more detail, in the tutorial on image types, at how we calculate the pixels in a color image.

Gray level

  • The value of the pixel at any point denotes the intensity of the image at that location, and is also known as the gray level.
  • We will see the value of pixels in more detail in the image storage and bits per pixel tutorials; for now we will just look at the concept of a single pixel value.

Pixel value 0

  • As already defined at the beginning of this tutorial, each pixel can have only one value, and each value denotes the intensity of light at that point of the image.
  • We will now look at a very unique value, 0. The value 0 means absence of light: 0 denotes darkness, so whenever a pixel has a value of 0, black is formed at that point.
  • Have a look at this image matrix:
0 0 0
0 0 0
0 0 0
  • This image matrix is filled entirely with 0s; every pixel has a value of 0. If we were to calculate the total number of pixels from this matrix, this is how we would do it:
  • Total number of pixels = total number of rows × total number of columns = 3 × 3 = 9.
  • It means an image would be formed with 9 pixels, with dimensions of 3 rows and 3 columns, and, most importantly, that image would be black.
  • Why is this image all black? Because all the pixels in the image have a value of 0.
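The same 3 × 3 all-black image as a minimal NumPy sketch:

    import numpy as np

    img = np.zeros((3, 3), dtype=np.uint8)       # every pixel value is 0: black
    total_pixels = img.shape[0] * img.shape[1]   # rows x columns
    print(total_pixels)                          # 9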

Concept of Bits per pixel

Bpp, or bits per pixel, denotes the number of bits per pixel. The number of different colors in an image depends on the depth of color, i.e. the bits per pixel.

Bits in mathematics:

It is just like playing with binary bits.

How many numbers can be represented by one bit? Two: 0 and 1.

How many combinations can be made with two bits?

00, 01, 10, 11

If we devise a formula for the total number of combinations that can be made from bits, it is:

2^bpp

where bpp denotes bits per pixel. Put 1 in the formula and you get 2; put in 2 and you get 4. It grows exponentially.
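A few lines of Python reproduce the table below from the 2^bpp formula:

    # Number of representable colors for each bit depth.
    for bpp in (1, 2, 3, 4, 5, 6, 7, 8, 10, 16, 24, 32):
        print(f"{bpp:2d} bpp -> {2 ** bpp} colors")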

Number of different colors:

Now, as we said in the beginning, the number of different colors depends on the number of bits per pixel.

The table for some of the bit depths and their colors is given below.

Bits per pixel   Number of colors
1 bpp            2 colors
2 bpp            4 colors
3 bpp            8 colors
4 bpp            16 colors
5 bpp            32 colors
6 bpp            64 colors
7 bpp            128 colors
8 bpp            256 colors
10 bpp           1024 colors
16 bpp           65,536 colors
24 bpp           16,777,216 colors (16.7 million colors)
32 bpp           4,294,967,296 colors (about 4,294 million colors)

This table shows different bits per pixel and the number of colors they can represent.

Shades

You can easily notice the pattern of exponential growth. The famous grayscale image format is 8 bpp, meaning it has 256 different colors, or 256 shades, in it.

Shades can be represented as a ramp of gray values running from black (0) to white (255).

Color images are usually of the 24 bpp format, or 16 bpp.

We will see more about other color formats and image types in the tutorial of image types.

Color values:

We have previously seen, in the tutorial on the concept of a pixel, that a pixel value of 0 denotes black.

Black color:

Remember, the pixel value 0 always denotes black. But there is no single fixed value that denotes white; it depends on the bit depth.

White color:

The value that denotes white can be calculated as:

White = 2^bpp - 1

In the case of 1 bpp, 0 denotes black and 1 denotes white.

In the case of 8 bpp, 0 denotes black and 255 denotes white.

Gray color:

Once you have calculated the black and white values, you can calculate the pixel value of gray.

Gray color is actually the mid point of black and white. That said,

In the case of 8 bpp, the pixel value that denotes gray is 127 or 128 (128 if you count from 1 rather than from 0).

Image storage requirements

After the discussion of bits per pixel, we now have everything we need to calculate the size of an image.

Image size

The size of an image depends upon three things.

  • Number of rows
  • Number of columns
  • Number of bits per pixel

The formula for calculating the size is given below.

Size of an image = rows * cols * bpp

It means that if you have an image, let's say Figure 1 above:

Assume it has 1024 rows and 1024 columns. Since it is a grayscale image, it has 256 different shades of gray, that is, 8 bits per pixel. Putting these values in the formula, we get

Size of an image = rows * cols * bpp

= 1024 * 1024 * 8

= 8388608 bits.

But since that is not a unit we usually work with, we will convert it into familiar units.

Converting it into bytes = 8388608 / 8 = 1048576 bytes

Converting into kilobytes = 1048576 / 1024 = 1024 KB

Converting into megabytes = 1024 / 1024 = 1 MB

That is how the size of an image is calculated and stored. Conversely, if you are given the size of an image and its bits per pixel, you can use the same formula to calculate the rows and columns of the image.
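The same calculation as a small Python sketch, using the 1024 × 1024, 8 bpp figures from above:

    rows, cols, bpp = 1024, 1024, 8

    size_bits  = rows * cols * bpp      # 8388608 bits
    size_bytes = size_bits / 8          # 1048576 bytes
    size_kb    = size_bytes / 1024      # 1024 KB
    size_mb    = size_kb / 1024         # 1 MB

    print(size_bits, size_bytes, size_kb, size_mb)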


Part 6: IELTS Academic Writing Task 1 For Diagram/Graph Vocabulary

Vocabulary to show the sequence:

In Writing Task 1 of the IELTS Academic test, you must write a summary of at least 150 words in response to a specific graph (bar, line, or pie chart), table, chart, or process diagram (how something works, how something is done). This task assesses your ability to select and report the most important features, describe and compare data, identify significance and trends in factual data, and describe a process.

» Subsequently, Respectively, Consecutively, Sequentially.

» Previous, Next, First, Second, Third, Finally, Former, Latter.

Tips:

“The market shares of HTC, Huawei, Samsung, Apple and Nokia in 2010 were 12%, 7%, 20%, 16% and 4% globally.”

The above sentence makes it ambiguous which mobile brand had what percentage of market share. If there are more than 2 values/figures, you should always use 'consecutively/sequentially/respectively'. Using one of these words removes any doubt from the above sentence, as it clearly states that the percentages of market share match the mobile brands in order (i.e. the first figure for the first brand, the second for the second brand, and so on).

“The market shares of HTC, Huawei, Samsung, Apple and Nokia in 2010 were 12%, 7%, 20%, 16% and 4% respectively in the global market.”

Note: You do not need to use ‘consecutively/ sequentially/ respectively’ if there are only two values to write.

Vocabulary to show transitions:

Vocabulary that describes different types of data/trends in a paragraph while showing a smooth and accurate transition is quite important. The following words and phrases will help you do so well:

» Then

» Afterwards

» Following that

» Followed by

» Next

» Subsequently

» Former

» Latter

» After

» Previous

» Prior to

» Simultaneously

» During

» While

» Finally.

A Few More Words:

A few more useful words to use in your report writing:

» Stood at

» A marked increase

» Steep

» Gradual

» Hike

» Drastic

» Declivity

» Acclivity

» Prevalent

» Plummet

Useful phrases for describing graphs:

» To level off

» To reach a plateau

» To hit the highest point

» To stay constant

» To flatten out

» To show some fluctuation

» To hit the lowest point

» Compared to

» Compared with

» Relative to

Useful Vocabulary for Graphs and Diagrams

To get a high score in Task 1 of the academic IELTS writing, you need to give accurate and strong descriptions and analysis of the provided graph(s) or diagram. In this minimum-150-word essay it is easy to keep repeating words and numbers; however, this does not help you achieve a high score. To get a great band on this section of the IELTS, you must use a variety of vocabulary that not only describes but also emphasizes the changes, similarities and differences in the data.

Verbs

These verbs are alternatives to the basic rise and fall vocabulary. One benefit of using them is that they sometimes help you avoid repeating too many numbers: if you have a strong verb, you do not always have to give the exact figure.

Up Verbs

Verb     Example
Soar     The use of water soared in March
Leap     The prices leapt to 90% in one year
Climb    Populations climbed to over one million by 1980
Rocket   Use of cars rocketed in the first decade
Surge    A surge in migration was seen in November

 

Notes:

  • “Soar” and “rocket” are both very strong words that describe large rises. “Rocket” is more sudden. You probably do not need to qualify these verbs with adverbs.
  • “Leap” shows a large and sudden rise. Again, you probably do not need to qualify it with an adverb.
  • “Climb” is a relatively neutral verb that can be used with the adverbs below.

Down verbs

Verb       Example
Sink       The cost of housing sank after 2008
Slip back  Use of electricity slipped back to 50 in May
Dip        The divorce rate dipped in the 60s
Drop       A drop in crime was seen last year
Plummet    Tourist numbers in the city plummeted after September

Notes:

  • “Plummet” is the strongest word here. It means to fall very quickly and a long way.
  • “Drop” and “dip” are normally used for fairly small decreases
  • “Slip back” is used for falls that come after rises
  • “Drop” and “Dip” are also frequently used as nouns: “a slight dip” “a sudden drop”

Adjectives and adverbs

This is a selection of some of the most common adjectives and adverbs used for trend language. Please be careful. This is an area where it is possible to make low-level mistakes.

Make sure that you use adjectives with nouns and adverbs with verbs:

  • a significant rise – correct (adjective/noun)
  • rose significantly – correct (adverb/verb)
  • a significantly rise – wrong

Please also note the spelling of the adverbs. There is a particular problem with the word “dramatically”:

  • dramatically – correct
  • dramaticly – wrong
  • dramaticaly – wrong

Adjectives of Degree

Adjective    Example               Adverb         Example
Significant  A significant change  Significantly  Changed significantly
Dramatic     A dramatic shift      Dramatically   Shifts dramatically
Sudden       A sudden rise         Suddenly       Has risen suddenly
Substantial  A substantial gain    Substantially  Gained substantially
Sharp        A sharp decrease      Sharply        Had decreased sharply

Notes:

  • “sudden” and “sharp” can be used for relatively minor changes that happen quickly
  • “spectacular” and “dramatic” are very strong words only used for big changes

Steady Adjectives

Adjective   Example            Adverb        Example
Consistent  A consistent flow  Consistently  Flowed consistently
Steady      A steady movement  Steadily      Moved steadily
Constant    A constant shift   Constantly    Shifted constantly

Small adjectives

Adjective  Example            Adverb      Example
Slight     A slight rise      Slightly    Rose slightly
Gradual    A gradual fall     Gradually   Has fallen gradually
Marginal   A marginal change  Marginally  Had changed marginally
Modest     A modest increase  Modestly    Increases modestly

Notes:

  • “marginal” is a particularly useful word for describing very small changes

Other useful adjectives

These adjectives can be used to describe more general trends:

Adjective  Example
Upward     Looking at the five data points, there appears to be a clear upward pattern in prices
Downward   Over the past quarter century there has been a downward trend in the use of pesticides
Overall    The overall shift in the market seems to favour the use of nuclear power

Notes:

  • “overall” can be used to describe changes in trend over the whole period: very useful in introductions and conclusions
  • “upward” and “downward” are adjectives: the adverbs are “upwards” and “downwards”

Credit: Stalin’s GRE, Internet


Part 5: IELTS Academic Writing Task 1 Formal and Informal expressions.

Formal and Informal expressions and words:

On the IELTS Academic test, you must write a summary of at least 150 words in response to a specific graph (bar, line, or pie chart), table, chart, or process (how something works, how something is done). A few more informal expressions with their formal versions are given below. Since IELTS is a formal test, your writing should be formal as well, and informal words and expressions should be avoided. Some informal words are so frequently used that it will be tough to eliminate them from your writing; however, we suggest you make a habit of using formal words and expressions instead, for the sake of your performance and band score.

Informal        Formal
Go up           Increase
Go down         Decrease
Look at         Examine
Find out about  Discover
Point out       Indicate
Need to         Require
Get             Obtain
Think about     Consider
Seem            Appear
Show            Demonstrate/ Illustrate
Start           Commence
Keep            Retain
But             However
So              Therefore/ Thus

Also               In addition/ Additionally
In the meantime    In the interim
In the end         Finally
Anyway             Notwithstanding
Lots of/ a lot of  Much/ many
Kids               Children
Cheap              Inexpensive
Right              Correct
I think            In my opinion

 

IELTS Writing Task 1 vocabulary:

The following vocabulary for Academic IELTS Writing Task 1 is grouped as nouns, verbs, adjectives, adverbs, and phrases, to help you improve your vocabulary and your understanding of how to use these words while describing a graph.

Noun:

Increase:

A growth: There was a growth in the earnings of the people of the city at the end of the year.

An increase: Between noon and evening, there was an increase in the temperature of the coastal area, probably because of the availability of sunlight at that time.

A rise: A rise in listener numbers in the morning can be observed from the bar graph.

An improvement: The data show that there was an improvement in the traffic condition between 11:00 am and 3:00 pm.

Progress: There was progress in the law and order of the city toward the end of last year.

Rapid Increase:

A surge: From the presented information, it is clear that there was a surge in the number of voters in 1990 compared to the data given for the previous years.

A rapid increase/ a rapid growth/ a rapid improvement: There was a rapid growth in the stock value of the company ABC during December of the last year.

N.B.: The following adjectives can be used before the above nouns to show rapid growth/ increase of something:

Rapid, Sudden, Steady, Noticeable, Mentionable, Tremendous, huge, enormous, massive, vast, gigantic, monumental, incredible, fabulous, great etc.

(The words in the above list are adjectives and can be used before nouns to describe big changes.)

Highest:

A/ The peak: Visitor numbers reached a peak in 2008, exceeding 2 million.

Top/ highest/ maximum: Oil prices reached their highest in 1981 during the war.

N.B: Some of the words to present the highest/ top of something are given below:

Apex, pyramid, zenith, acme, obelisk, climax, needle, spire, vertex, summit, tower, most, greatest, max, tops, peak, height, crown.

Changes:

A fluctuation: There was a fluctuation in the number of passengers who used railway transportation from 2003 to 2004.

A variation: A variation in the shopping habits of teenagers can be observed from the data.

A disparity/ a dissimilarity/ an inconsistency: The medicine tested on the rabbits showed an inconsistency in its effects.

 

Steadiness:

Stability: The data from the line graph show the stability of the price in the retail market from January till June for the given year.

A plateau: As is presented in the line graph, there was a plateau of the oil price from 1985 to 1990.

Decrease:

A fall: There was a fall in the price of energy bulbs in 2010, to less than $5.

A decline: A decline occurred after June, and production reached 200/day for the next three months.

 

A decrease: After the initial four years of increase, there was a decrease in the company's share price in the bearish market.

Using ‘Nouns’ and ‘Verbs’ to describe trends in a graph:

Direction: Increase

Verbs                                Nouns

» Increased (to)                     An increase

» Rose (to)                          A rise

» Climbed (to)                       An upward trend

» Went up (to)                       A growth

 

Direction: Rapid increase

Verbs                                Nouns

» Surged (to)                        A surge

» Boomed (to)                        A boom/ a dramatic increase

 

Direction: Decrease

Verbs                                Nouns

» Decreased (to)                     A decrease

» Declined (to)                      A decline

» Fell (to)                          A fall

» Reduced (to)                       A reduction

» Dipped (to)                        A dip

» Dropped (to)                       A drop

» Went down (to)                     A downward trend

Direction: Rapid decrease

Verbs                                Nouns

» Plunged (to)

» Slumped (to)                       A slump/ a dramatic fall

» Plummeted (to)

Direction: Steadiness/ no change

Verbs                                Nouns

» Remained stable (at)

» Remained static (at)

» Remained steady (at)

» Stayed constant (at)

» Levelled out (at)                  A levelling out

» Did not change                     No change

» Remained unchanged                 No change

» Maintained the same level

» Plateaued (at)                     A plateau

Direction: Fluctuation

Verbs                                Nouns

» Fluctuated (around)                A fluctuation

» Oscillated                         An oscillation

Direction: Highest point

Verbs                                Nouns

» Peaked (at)                        The peak/ apex/ zenith/ summit/ the highest point

Direction: Lowest point

Verbs                                Nouns

» Bottomed out (at)                  The lowest point/ the bottom/ the bottommost point

 

Use ‘adjective/adverb’ to indicate the movement of a trend. Examples:

  1. There was a slight increase in the unemployment rate in 1979, at which point it stood at 12%.
  2. The price of gold dropped rapidly over the next three years.

 

Use ‘adjective’ to modify the ‘Noun’ form of a trend and use ‘adverb’ to modify the ‘verb’ form of a trend.

Greater or Higher?

We usually use 'greater' when we compare two numbers, and 'higher' when comparing two percentages or ratios. Conversely, 'smaller' or 'fewer' can be used to compare two numbers, and 'lower' to compare two percentages or ratios. The following examples make this clear:

Examples:

  1. The number of male doctors in this city was greater than the number of female doctors.
  2. The number of European programmers who attended the seminar was fewer than the number of Asian programmers.
  3. The percentage of male doctors in this city was higher than the percentage of female doctors.
  4. During 2010, the inflow of illegal immigrants was lower than that of 2012.
  5. The birth rate in Japan in 2014 was higher than the birth rate in 2015.

Vocabulary to compare to what extent/ by what degree something is greater or higher than another:

» Overwhelmingly, Substantially, Significantly, Considerably.

» Moderately, Markedly.

» Hardly, Barely, Slightly, Fractionally, Marginally.