Software Architecture - To study ETL (Extract, Transform, Load) tools, especially SQL Server Integration Services

March 27, 2010

1           Objective:

To study ETL (Extract, Transform, Load) tools, especially SQL Server Integration Services.

2           Problem Definition:

There are a few databases that move from location to location during the day. At the close of the day, or after some specific time interval, data from a few tables of these moving databases needs to be pushed to central database tables after applying a few business rules, such as avoiding duplication of data.

The traditional solution to this problem is to build a small piece of software in .NET that pulls data from the moving databases, applies the business rules, and pushes the data to the central database server.
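As a minimal sketch of that traditional pull-and-push approach, the following uses SQLite in place of the real branch and central servers, with a primary key enforcing the no-duplicates rule. All table and column names here are illustrative assumptions, not part of any existing system.

```python
import sqlite3

# Two in-memory databases stand in for a "moving" branch database and the
# central database; table and column names are illustrative assumptions.
branch = sqlite3.connect(":memory:")
central = sqlite3.connect(":memory:")

branch.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL)")
branch.executemany("INSERT INTO sales VALUES (?, ?)",
                   [(1, 10.0), (2, 25.5), (3, 7.25)])

central.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL)")
central.execute("INSERT INTO sales VALUES (2, 25.5)")  # pushed in an earlier run

# Extract from the moving database...
rows = branch.execute("SELECT id, amount FROM sales").fetchall()

# ...and push to the central database, skipping duplicates (business rule).
central.executemany(
    "INSERT OR IGNORE INTO sales (id, amount) VALUES (?, ?)", rows)
central.commit()

print(central.execute("SELECT COUNT(*) FROM sales").fetchone()[0])  # prints 3
```

The same upsert-style pattern is what an SSIS data flow with a lookup-and-insert step automates at scale.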

This solution might work for a small problem domain, but the task actually belongs to a bigger domain called ETL.

3           Solution:

4           What is ETL, Extract Transform and Load?

ETL is an abbreviation of the three words Extract, Transform and Load. The ETL process extracts data, mostly from different types of systems, transforms it into a structure that is more appropriate for reporting and analysis, and finally loads it into the database. The figure below displays these ETL steps.

[Figure: ETL architecture and steps]

But, today, ETL is much more than that. It also covers data profiling, data quality control, monitoring and cleansing, real-time and on-demand data integration in a service oriented architecture (SOA), and metadata management.

  • ETL – Extract from source

In this step we extract data from different internal and external sources, structured and/or unstructured. Plain queries are sent to the source systems using native connections, message queuing, ODBC or OLE DB middleware. The data is put in a so-called Staging Area (SA), usually with the same structure as the source. In some cases we want only the data that is new or has changed; the queries then return only the changes. Some ETL tools can do this automatically, providing a changed data capture (CDC) mechanism.
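A minimal sketch of such change-only extraction into a staging area, assuming the source rows carry a modification timestamp and that we keep a watermark from the previous run (all table, column and value names below are illustrative):

```python
import sqlite3

# Source table with a modification timestamp; names are illustrative.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, modified_at TEXT)")
src.executemany("INSERT INTO orders VALUES (?, ?)", [
    (1, "2010-03-25"), (2, "2010-03-26"), (3, "2010-03-27")])

# Staging Area with the same structure as the source.
staging = sqlite3.connect(":memory:")
staging.execute("CREATE TABLE orders_sa (id INTEGER, modified_at TEXT)")

last_watermark = "2010-03-25"  # highest timestamp loaded in the previous run

# Pull only rows changed since the last run (a poor man's CDC).
changed = src.execute(
    "SELECT id, modified_at FROM orders WHERE modified_at > ?",
    (last_watermark,)).fetchall()
staging.executemany("INSERT INTO orders_sa VALUES (?, ?)", changed)

print(len(changed))  # prints 2
```

A real CDC mechanism reads the database log instead of comparing timestamps, but the contract is the same: only the delta reaches the staging area.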

  • ETL – Transform the data

Once the data is available in the Staging Area, it is all on one platform and in one database. We can therefore easily join and union tables, filter and sort the data using specific attributes, pivot to another structure and make business calculations. In this step of the ETL process, we can also check data quality and cleanse the data if necessary. After having all the data prepared, we can choose to implement slowly changing dimensions. In that case we want our analysis and reports to keep track of attributes that change over time, for example a customer who moves from one region to another.
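The slowly-changing-dimension idea can be sketched as a minimal Type 2 implementation: when a tracked attribute changes, the current row is closed and a new versioned row is opened. The dimension layout and names below are illustrative assumptions.

```python
from datetime import date

# A minimal Type 2 slowly changing dimension: when a tracked attribute
# (region) changes, the old row is closed and a new row is opened.
dim_customer = [
    {"key": 1, "customer_id": "C001", "region": "North",
     "valid_from": date(2009, 1, 1), "valid_to": None},
]

def apply_scd2(dim, customer_id, new_region, change_date):
    current = next(r for r in dim
                   if r["customer_id"] == customer_id and r["valid_to"] is None)
    if current["region"] != new_region:
        current["valid_to"] = change_date          # close the old version
        dim.append({"key": len(dim) + 1,           # open a new version
                    "customer_id": customer_id,
                    "region": new_region,
                    "valid_from": change_date,
                    "valid_to": None})

apply_scd2(dim_customer, "C001", "South", date(2010, 3, 27))
print(len(dim_customer))  # prints 2: history is preserved
```

Reports can then join facts to the dimension row whose validity interval contains the fact date, so a customer's sales stay attributed to the region they lived in at the time.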

  • ETL – Load into the data warehouse

Finally, data is loaded into a data warehouse, usually into fact and dimension tables. From there the data can be combined, aggregated and loaded into datamarts or cubes as is deemed necessary.

ETL Tools:


ETL tools are widely used for extracting, cleaning, transforming and loading data from different systems, often into a data warehouse. Following is a list of tools available for ETL activities.

No.  ETL Tool                            Version   Vendor
1.   Oracle Warehouse Builder (OWB)      11gR1     Oracle
2.   Data Integrator & Data Services     XI 3.0    SAP Business Objects
3.   IBM Information Server (Datastage)  8.1       IBM
4.   SAS Data Integration Studio         4.2       SAS Institute
5.   PowerCenter                         8.5.1     Informatica
6.   Elixir Repertoire                   7.2.2     Elixir
7.   Data Migrator                       7.6       Information Builders
8.   SQL Server Integration Services     10        Microsoft
9.   Talend Open Studio                  3.1       Talend
10.  DataFlow Manager                    6.5       Pitney Bowes Business Insight
11.  Data Integrator                     8.12      Pervasive
12.  Open Text Integration Center        7.1       Open Text
13.  Transformation Manager              5.2.2     ETL Solutions Ltd.
14.  Data Manager/Decision Stream        8.2       IBM (Cognos)
15.  Clover ETL                          2.5.2     Javlin
16.  ETL4ALL                             4.2       IKAN
17.  DB2 Warehouse Edition               9.1       IBM
18.  Pentaho Data Integration            3.0       Pentaho
19.  Adeptia Integration Server          4.9       Adeptia

4.1        Microsoft Technology Solution:

Microsoft SQL Server Integration Services is a platform for building high performance data integration solutions, including packages that provide extract, transform, and load (ETL) processing for data warehousing.

Microsoft Integration Services is a platform for building enterprise-level data integration and data transformations solutions. You use Integration Services to solve complex business problems by copying or downloading files, sending e-mail messages in response to events, updating data warehouses, cleaning and mining data, and managing SQL Server objects and data. The packages can work alone or in concert with other packages to address complex business needs. Integration Services can extract and transform data from a wide variety of sources such as XML data files, flat files, and relational data sources, and then load the data into one or more destinations.

The SQL Server Import and Export Wizard offers the simplest method to create an Integration Services package that copies data from a source to a destination.

Integration Services Architecture:

The following are some important components for using Integration Services successfully:

4.1.1      SSIS Designer

SSIS Designer is a graphical tool that you can use to create and maintain Integration Services packages. SSIS Designer is available in Business Intelligence Development Studio as part of an Integration Services project.

4.1.2      Runtime engine

The Integration Services runtime saves the layout of packages, runs packages, and provides support for logging, breakpoints, configuration, connections, and transactions.

4.1.3      Tasks and other executables

The Integration Services run-time executables are the package, containers, tasks, and event handlers that Integration Services includes. Run-time executables also include custom tasks that you develop.

4.1.4      Data Flow engine and Data Flow components

The Data Flow task encapsulates the data flow engine. The data flow engine provides the in-memory buffers that move data from source to destination, and calls the sources that extract data from files and relational databases. The data flow engine also manages the transformations that modify data, and the destinations that load data or make data available to other processes. Integration Services data flow components are the sources, transformations, and destinations that Integration Services includes.

4.1.5      API or object model

The Integration Services object model includes managed application programming interfaces (APIs) for creating custom components for use in packages, or custom applications that create, load, run, and manage packages. Developers can write custom applications, tasks, or transformations by using any common language runtime (CLR)-compliant language.

4.1.6      Integration Services Service

The Integration Services service lets you use SQL Server Management Studio to monitor running Integration Services packages and to manage the storage of packages.

4.1.7      SQL Server Import and Export Wizard

The SQL Server Import and Export Wizard can copy data to and from any data source for which a managed .NET Framework data provider or a native OLE DB provider is available. This wizard also offers the simplest method to create an Integration Services package that copies data from a source to a destination.

4.1.8      Other tools, wizards, and command prompt utilities

Integration Services includes additional tools, wizards, and command prompt utilities for running and managing Integration Services packages.

4.1.9      Integration Services Packages

A package is an organized collection of connections, control flow elements, data flow elements, event handlers, variables, and configurations that you assemble using the graphical design tools that SQL Server Integration Services provides, or that you build programmatically. You then save the completed package to SQL Server, the SSIS Package Store, or the file system. The package is the unit of work that is retrieved, executed, and saved.

4.1.10 Command Prompt Utilities (Integration Services)

Integration Services includes command prompt utilities for running and managing Integration Services packages.

  1. dtexec is used to run an existing package at the command prompt.
  2. dtutil is used to manage existing packages at the command prompt.
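As a sketch, the two utilities are typically invoked as shown below; the package names, file paths, and server name are illustrative, not references to any existing package.

```shell
# Run a package stored in the file system (path is illustrative):
dtexec /F "C:\SSIS\LoadCentral.dtsx"

# Run a package stored in the msdb database on a named server:
dtexec /SQL LoadCentral /SERVER "SRV01"

# Copy a file-system package into SQL Server storage:
dtutil /FILE "C:\SSIS\LoadCentral.dtsx" /COPY SQL;LoadCentral
```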


4.2      Typical Uses of Integration Services

Integration Services provides a rich set of built-in tasks, containers, transformations, and data adapters that support the development of business applications. Without writing a single line of code, you can create SSIS solutions that solve complex business problems using ETL and business intelligence, manage SQL Server databases, and copy SQL Server objects between instances of SQL Server.

The following scenarios describe typical uses of SSIS packages.

4.2.1      Merging Data from Heterogeneous Data Stores

Data is typically stored in many different data storage systems, and extracting data from all sources and merging the data into a single, consistent dataset is challenging. This situation can occur for a number of reasons. For example:

  • Many organizations archive information that is stored in legacy data storage systems. This data may not be important to daily operations, but it may be valuable for trend analysis that requires data collected over a long period of time.
  • Branches of an organization may use different data storage technologies to store the operational data. The package may need to extract data from spreadsheets as well as relational databases before it can merge the data.
  • Data may be stored in databases that use different schemas for the same data. The package may need to change the data type of a column or combine data from multiple columns into one column before it can merge the data.

Integration Services can connect to a wide variety of data sources, including multiple sources in a single package. A package can connect to relational databases by using .NET and OLE DB providers, and to many legacy databases by using ODBC drivers. It can also connect to flat files, Excel files, and Analysis Services projects.

Integration Services includes source components that perform the work of extracting data from flat files, Excel spreadsheets, XML documents, and tables and views in relational databases from the data source to which the package connects.

Next, the data is typically transformed by using the transformations that Integration Services includes. After the data is transformed to compatible formats, it can be merged physically into one dataset.

After the data is merged successfully and transformations are applied to it, the data is usually loaded into one or more destinations. Integration Services includes destinations for loading data into flat files, raw files, and relational databases. The data can also be loaded into an in-memory recordset and accessed by other package elements.

4.2.2      Populating Data Warehouses and Data Marts

The data in data warehouses and data marts is usually updated frequently, and the data loads are typically very large.

Integration Services includes a task that bulk loads data directly from a flat file into SQL Server tables and views, and a destination component that bulk loads data into a SQL Server database as the last step in a data transformation process.

An SSIS package can be configured to be restartable. This means you can rerun the package from a predetermined checkpoint, either a task or container in the package. The ability to restart a package can save a lot of time, especially if the package processes data from a large number of sources.

You can use SSIS packages to load the dimension and fact tables in the database. If the source data for a dimension table is stored in multiple data sources, the package can merge the data into one dataset and load the dimension table in a single process, instead of using a separate process for each data source.

Updating data in data warehouses and data marts can be complex, because both types of data stores typically include slowly changing dimensions that can be difficult to manage through a data transformation process. The Slowly Changing Dimension Wizard automates support for slowly changing dimensions by dynamically creating the SQL statements that insert and update records, update related records, and add new columns to tables.

Additionally, tasks and transformations in Integration Services packages can process Analysis Services cubes and dimensions. When the package updates tables in the database that a cube is built on, you can use Integration Services tasks and transformations to automatically process the cube and its dimensions as well. Processing the cubes and dimensions automatically helps keep the data current for users in both environments: users who access information in the cubes and dimensions, and users who access data in a relational database.

Integration Services can also compute functions before the data is loaded into its destination. If your data warehouses and data marts store aggregated information, the SSIS package can compute functions such as SUM, AVERAGE, and COUNT. An SSIS transformation can also pivot relational data and transform it into a less-normalized format that is more compatible with the table structure in the data warehouse.

4.2.3      Cleaning and Standardizing Data

Whether data is loaded into an online transaction processing (OLTP) or online analytic processing (OLAP) database, an Excel spreadsheet, or a file, it needs to be cleaned and standardized before it is loaded. Data may need to be updated for the following reasons:

  • Data is contributed from multiple branches of an organization, each using different conventions and standards. Before the data can be used, it may need to be formatted differently. For example, you may need to combine the first name and the last name into one column.
  • Data is rented or purchased. Before it can be used, the data may need to be standardized and cleaned to meet business standards. For example, an organization wants to verify that all the records use the same set of state abbreviations or the same set of product names.
  • Data is locale-specific. For example, the data may use varied date/time and numeric formats. If data from different locales is merged, it must be converted to one locale before it is loaded to avoid corruption of data.

Integration Services includes built-in transformations that you can add to packages to clean and standardize data, change the case of data, convert data to a different type or format, or create new column values based on expressions. For example, the package could concatenate first and last name columns into a single full name column, and then change the characters to uppercase.
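The name-concatenation example above can be sketched as follows; in SSIS this would be a Derived Column expression inside the data flow, shown here as plain Python with illustrative column names.

```python
# Rows from two branches using different conventions; names are illustrative.
rows = [
    {"first_name": "ada", "last_name": "lovelace"},
    {"first_name": "Grace", "last_name": "hopper"},
]

# Derive a standardized full-name column and change the case,
# as a Derived Column expression would do inside an SSIS data flow.
for row in rows:
    row["full_name"] = f'{row["first_name"]} {row["last_name"]}'.upper()

print([r["full_name"] for r in rows])  # ['ADA LOVELACE', 'GRACE HOPPER']
```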

An Integration Services package can also clean data by replacing the values in columns with values from a reference table, using either an exact lookup or fuzzy lookup to locate values in a reference table. Frequently, a package applies the exact lookup first, and if the lookup fails, it applies the fuzzy lookup. For example, the package first attempts to look up a product name in the reference table by using the primary key value of the product. When this search fails to return the product name, the package attempts the search again, this time using fuzzy matching on the product name.
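The exact-then-fuzzy lookup pattern can be sketched like this, using Python's difflib as a simple stand-in for the SSIS Lookup and Fuzzy Lookup transformations; the reference data is invented for illustration.

```python
import difflib

# Reference table of canonical product names, keyed by primary key
# (an illustrative stand-in for a real reference table).
reference = {"P100": "Mountain Bike", "P200": "Road Bike"}

def lookup_product(key, name):
    # Exact lookup by primary key first...
    if key in reference:
        return reference[key]
    # ...then fall back to a fuzzy match on the product name.
    matches = difflib.get_close_matches(name, reference.values(),
                                        n=1, cutoff=0.6)
    return matches[0] if matches else None

print(lookup_product("P100", "anything"))   # exact hit: 'Mountain Bike'
print(lookup_product("P999", "Roab Bike"))  # fuzzy hit:  'Road Bike'
```

The SSIS Fuzzy Lookup uses a more sophisticated token-based similarity measure, but the cascade (exact first, fuzzy on failure) is the same.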

Another transformation cleans data by grouping values in a dataset that are similar. This is useful for identifying records that may be duplicates and therefore should not be inserted into your database without further evaluation. For example, by comparing addresses in customer records you may identify a number of duplicate customers.

4.2.4      Building Business Intelligence into a Data Transformation Process

A data transformation process requires built-in logic to respond dynamically to the data it accesses and processes.

The data may need to be summarized, converted, and distributed based on data values. The process may even need to reject data, based on an assessment of column values.

To address this requirement, the logic in the SSIS package may need to perform the following types of tasks:

  • Merging data from multiple data sources.
  • Evaluating data and applying data conversions.
  • Splitting a dataset into multiple datasets based on data values.
  • Applying different aggregations to different subsets of a dataset.
  • Loading subsets of the data into different or multiple destinations.

Integration Services provides containers, tasks, and transformations for building business intelligence into SSIS packages.

Containers support the repetition of workflows by enumerating across files or objects and by evaluating expressions. A package can evaluate data and repeat workflows based on results. For example, if the date is in the current month, the package performs one set of tasks; if not, the package performs an alternative set of tasks.

Tasks that use input parameters can also build business intelligence into packages. For example, the value of an input parameter can filter the data that a task retrieves.

Transformations can evaluate expressions and then, based on the results, send rows in a dataset to different destinations. After the data is divided, the package can apply different transformations to each subset of the dataset. For example, an expression can evaluate a date column, add the sales data for the appropriate period, and then store only the summary information.
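A conditional-split style transformation of that kind can be sketched as follows; the column names and the date rule are illustrative assumptions.

```python
from datetime import date

# Route each row to a destination based on an expression over its date
# column, as an SSIS Conditional Split transformation would.
rows = [
    {"order_date": date(2010, 3, 26), "amount": 100.0},
    {"order_date": date(2010, 2, 14), "amount": 40.0},
]

current_month, archive = [], []
for row in rows:
    if (row["order_date"].year, row["order_date"].month) == (2010, 3):
        current_month.append(row)   # one set of downstream transformations
    else:
        archive.append(row)         # an alternative set

print(len(current_month), len(archive))  # prints: 1 1
```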

It is also possible to send a dataset to multiple destinations and then apply different sets of transformations to the same data. For example, one set of transformations can summarize the data, while another set expands the data by looking up values in reference tables and adding data from other sources.

4.2.5      Automating Administrative Functions and Data Loading

Administrators frequently want to automate administrative functions such as backing up and restoring databases, copying SQL Server databases and the objects they contain, copying SQL Server objects, and loading data. Integration Services packages can perform these functions.

Integration Services includes tasks that are specifically designed to copy SQL Server database objects such as tables, views, and stored procedures; copy SQL Server objects such as databases, logins, and statistics; and add, change, and delete SQL Server objects and data by using Transact-SQL statements.

Administration of an OLTP or OLAP database environment frequently includes the loading of data. Integration Services includes several tasks that facilitate the bulk loading of data. You can use a task to load data from text files directly into SQL Server tables and views, or you can use a destination component to load data into SQL Server tables and views after applying transformations to the column data.

An Integration Services package can run other packages. A data transformation solution that includes many administrative functions can be separated into multiple packages so that managing and reusing the packages is easier.

If you need to perform the same administrative functions on different servers, you can use packages. A package can use looping to enumerate across the servers and perform the same functions on multiple computers. To support administration of SQL Server, Integration Services provides an enumerator that iterates across SQL Server Management Objects (SMO) objects. For example, a package can use the SMO enumerator to perform the same administrative functions on every job in the Jobs collection of a SQL Server installation.

SSIS packages can also be scheduled using SQL Server Agent Jobs.

5           Conclusion:

  • The software industry has many ETL tools, but Comsoft, being a traditional Microsoft shop, should prefer SQL Server Integration Services (SSIS) as its ETL tool for general operations.
  • As we are using SQL Server as the backend database, SSIS is already available in our development environment; there is no need to buy an extra tool.
  • This document provides a technology direction statement and an introduction to ETL implementation in Comsoft.
  • This document is just for knowledge sharing; there is no need to change our implementation for pushing and pulling data via .NET, as discussed with Sujjet in the last voice call, because our problem domain is very limited. For future direction and bigger problems, however, we might consider SQL Server Integration Services as our ETL tool.






Software Architecture - To analyze cargo loading optimization algorithm

March 24, 2010

To analyze cargo loading optimization algorithm

By: Shahzad Sarwar
To: Development Team + Management
Date: 24th March 2010

1. Objective: To analyze and plan software development for the Cargo Loading optimization module in the Flight Cargo module of PCMS, Pegasus Cargo Management System.

1. To study industry-available software for cargo load optimization and planning.
2. To study the available algorithms for cargo loading optimization in the Flight Cargo module of PCMS, Pegasus Cargo Management System.
3. To analyze the requirement specification for cargo load optimization and planning.

2. Load Planner Software Comparative Analysis:
There is a very long list of commercially available software that performs cargo load optimization and planning with full visualization via graphical representation.
Following is a brief list of some of the best in the industry.

2.1. 3D Load Packer (3DLP)
3D Load Packer (3DLP) is a space optimizer designed to help plan, quickly and easily, the best compact arrangement of a number of different-size 3D rectangular objects (hereafter called “Boxes”) within one or more rectangular enclosures (hereafter “Containers”). 3DLP is based on truly three-dimensional, dense and fast original packing algorithms.
An overall load weight limit and truck axle weight limits may be taken into account as additional constraints or actual optimization factors. Full control on the allowed box overhang is also available.
The program has a facility for specifying the associated cost for each box / container item in order to calculate totals and affect upon optimization as additional priority factors. Optimizer goal and other main settings are adjustable.

2.2. Cube-IQ
The Cube-IQ Load Planning system is built around one of the best loading engines on the market and aims to give optimal volume/weight utilization.
Cube-IQ optimizes the loading of items in one or more containers, optionally of different sizes.
The system can help to cube-out loads on PC, and also in the actual loading through its clear, 3-D diagram based loading instructions.
Cube-IQ has a state-of-the-art load optimization engine. Cube-IQ’s database allows you to pre-define containers and boxes, and to store and retrieve any number of complete loading cases. The system has full data import and export facilities and can read and write Excel, XML and other formats.

2.3. AutoLoad Pro
AutoLoad Pro integrates 3D graphic technology, visual effects and excellent computing speed to help work out how to load goods of varied shapes into containers of varied sizes efficiently, taking into account the delivery safety of the goods, utilization of space and ease of movement. AutoLoad Pro can quickly work out the minimum number of containers, trucks and cartons needed to complete a loading plan and reduce transportation cost.

2.4. CubeMaster™
CubeMaster™ is a versatile, cost-effective software solution to optimize the cargo load on your trucks, air & sea containers and pallets quickly and efficiently. It reduces shipping and transport costs through intelligent loading and optimal space utilization. CubeMaster™ supports in planning order picking, loading and capacity requirements. The system delivers clear instructions regarding the work preparation in seconds.

2.5. Load
Load saves money by optimizing container cargo:
• includes an interactive 3D view
• automatic optimizer
• packing lists are conveniently entered into Load
• calculates the maximum number of items per container
• optimized packing lists and 3D views can be printed

2.6. Packer3d Online Service
The Packer3d Online Service calculates optimal plans for loading different types of boxes, cylinders, and pallets into containers, trucks, and railroad freight cars.

2.7. packVol
packVol is optimization software for load planning, designed to plan the best space utilization inside containers and trucks and to help reduce transportation costs. It is an innovative program for MS Windows™ with some unique features not found in other container loading software products. It is truly three-dimensional and allows complex load planning problems to be managed efficiently.

2.8. PalletStacking
PalletStacking allows users to find the best arrangement of boxes on pallets for warehousing or transportation. The software reduces the cost of palletizing boxes and calculates the optimal box dimensions. The PalletStacking solution calculates the best arrangement of products in a box, calculates box dimensions, and shows 3D graphics of the solution. Results can be exported to Microsoft Excel to generate reports.

2.9. LoadPlanner
LoadPlanner offers a comprehensive load planning and optimization solution. The heart of LoadPlanner is its sophisticated 3D loading algorithm, the result of many years of intensive research and cooperation with leading logistics providers. What makes it different is that LoadPlanner is an advanced rule-based system. It has unique capabilities to:
• Classify business objects (items, orders, containers, etc.) into a flexible system of categories.
• Formulate high-level business rules and constraints, and apply them to selected categories.
• Use business rules and constraints in the process of load planning and optimization.
• Solve multi-tier load planning problems (packaging – palletizing – container / trailer loading).
• Produce results in the form of easy-to-analyse interactive 3D graphics.

3. Algorithms for the loading optimization problem:
Cargo loading optimization is a well-known computer science problem, and many well-known algorithms exist to solve it.
Following is a brief description of some of the best algorithms for this problem.
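As a toy illustration of the greedy placement idea that many of these algorithms refine, here is a deliberately simplified first-fit-decreasing sketch that considers volume only, ignoring geometry, orientation, weight and stability; real container-loading algorithms such as those surveyed in this section handle full 3D placement.

```python
# First-fit decreasing by volume: a 1D simplification of container
# loading that only illustrates the greedy skeleton such algorithms
# build on (geometry, rotation and weight limits are ignored).
def first_fit_decreasing(box_volumes, container_volume):
    containers = []  # each entry is the remaining free volume
    for vol in sorted(box_volumes, reverse=True):
        for i, free in enumerate(containers):
            if vol <= free:
                containers[i] -= vol  # place box in first container that fits
                break
        else:
            # no existing container has room: open a new one
            containers.append(container_volume - vol)
    return len(containers)

# Six boxes into containers of volume 10:
print(first_fit_decreasing([7, 5, 4, 3, 2, 1], 10))  # prints 3
```

The papers below replace this naive volume test with genuine 3D free-space representations (towers, maximal spaces, planes and blocks) and search strategies (dynamic programming, GRASP, genetic and tabu search).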
3.1. Algorithms for the Container Loading Problem
By Guntram Scheithauer, Operations Research Proceedings 1991.

This paper covers the three-dimensional problem of optimally packing a container with rectangular pieces. It proposes an approximation algorithm based on the “forward state strategy” of dynamic programming. A suitable description of packings is developed for the implementation of the approximation algorithm, and some computational experience is reported. The effective employment of capacity is of ever-increasing importance in many problems of production and transportation planning; in transportation the reasons include, for example, expanding trade and growing transportation costs.


3.2. A Less Flexibility First Based Algorithm
Yuen-Ting Wu
Yu-Liang Wu
Department of Computer Science and Engineering, the Chinese University of Hong Kong, Hong Kong

This paper presents a Less Flexibility First (LFF) based algorithm for solving container loading problems in which boxes of given sizes are to be packed into a single container. The objective is to maximize volume utilization. LFF, firstly introduced in [An effective quasi-human heuristic for solving the rectangle packing problem, European Journal of Operations Research 141 (2002) 341], is an effective deterministic heuristic applied to 2D packing problems and generated up to 99% packing densities. Its usage is now extended to the container loading problem. Objects are packed according to their flexibilities. Less flexible objects are packed to less flexible positions of the container. Pseudo-packing procedures enable improvements on volume utilization. Encouraging packing results with up to 93% volume utilization are obtained in experiments running on benchmark cases from other authors.


3.3. A Maximal-Space Algorithm for the Container Loading Problem
F. Parreño, R. Alvarez-Valdes, J. M. Tamarit, J. F. Oliveira
Department of Mathematics, University of Castilla-La Mancha, Albacete, Spain
Department of Statistics and Operations Research, University of Valencia, Burjassot, Spain
Department of Statistics and Operations Research, University of Valencia, Burjassot, Spain
Faculty of Engineering, University of Porto, Porto, Portugal, and INESC Porto, Instituto de Engenharia de Sistemas e Computadores de Porto, Porto, Portugal
In this paper, a greedy randomized adaptive search procedure (GRASP) for the container loading problem is presented. This approach is based on a constructive block heuristic that builds upon the concept of maximal space, a nondisjoint representation of the free space in a container.
This new algorithm is extensively tested over the complete set of Bischoff and Ratcliff problems [Bischoff, E. E., M. S. W. Ratcliff. 1995. Issues in the development of approaches to container loading. Omega 23 377–390], ranging from weakly heterogeneous to strongly heterogeneous cargo, and outperforms all the known nonparallel approaches that, partially or completely, have used this set of test problems. When comparing against parallel algorithms, it is better on average but not for every class of problem. In terms of efficiency, this approach runs in much less computing time than that required by parallel methods. Thorough computational experiments concerning the evaluation of the impact of algorithm design choices and internal parameters on the overall efficiency of this new approach are also presented.


3.4. Improved Optimization Algorithm for the Container Loading Problem
Xiamen, China
May 19-May 21
ISBN: 978-0-7695-3570-8
The container loading problem is a kind of space-resource optimization problem subject to various constraints. The solution can be extended to aircraft and ship cargo loading, even to memory allocation in computers, and other applications. This paper proposes a new algorithm for loading a variety of different goods into a single container in multiple batches. Using the concepts of “plane” and “block”, the algorithm applies a “depth priority” strategy to locate suitable space. The algorithm also allows goods to rotate in any possible direction and, while guaranteeing efficient space usage, improves placement stability. With the priorities assigned to each good by the algorithm, more goods can be allocated at the same location. The algorithm withdraws when the last batch cannot be packed. Experimental results show that the algorithm is effective for such problems.


3.5. A Genetic Algorithm for Solving the Container Loading Problem
FernUniversität Hagen, Kleine Straße 22, D-58084 Hagen, BRD
The paper presents a genetic algorithm (GA) for the container loading problem. The main ideas of the approach are first to generate a set of disjunctive box towers and second to arrange the box towers on the floor of the container according to a given optimization criterion. The loading problem may include different practical constraints. The performance of the GA is demonstrated by a numerical test comparing the GA and several other procedures for the container loading problem.

Container loading problems may be grouped in different ways. A basic distinction exists between cases in which a given set of goods has to be loaded completely and cases which allow some goods to be left behind. Whilst the former type of problem involves more than one container, the latter is often restricted to a single container (cf. the distinction made by DYCKHOFF, 1990). Another important distinction concerns the goods to be loaded. BORTFELDT (1994) considers the loading of rectangular goods, i.e. boxes, and defines a cargo comprising only identical boxes as ‘homogeneous’. He also refers to a given set of boxes with many different types of boxes as ‘strongly heterogeneous’
and one with only a few different types of boxes as ‘weakly heterogeneous’. The subject of this paper is the loading of a strongly heterogeneous set of boxes into a single container. The literature references given below are focused on this type of problem and also on genetic approaches.
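The GA idea (generate candidate arrangements, select by an optimization criterion) can be sketched with a permutation encoding of the loading sequence. The decoder below is a deliberately simplified volume-only greedy, not the box-tower arrangement of the paper; all names and parameters are illustrative:

```python
import random

def fitness(seq, volumes, capacity):
    """Greedy decode: pack boxes in sequence order while capacity allows."""
    used = 0
    for i in seq:
        if used + volumes[i] <= capacity:
            used += volumes[i]
    return used

def crossover(a, b):
    """Order crossover (OX): keep a slice of parent a, fill the rest from b."""
    n = len(a)
    i, j = sorted(random.sample(range(n), 2))
    middle = a[i:j]
    rest = [g for g in b if g not in middle]
    return rest[:i] + middle + rest[i:]

def ga(volumes, capacity, pop_size=30, generations=50, seed=0):
    random.seed(seed)
    n = len(volumes)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, volumes, capacity), reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            c = crossover(a, b)
            if random.random() < 0.2:          # swap mutation
                x, y = random.sample(range(n), 2)
                c[x], c[y] = c[y], c[x]
            children.append(c)
        pop = parents + children
    best = max(pop, key=lambda s: fitness(s, volumes, capacity))
    return fitness(best, volumes, capacity)

print(ga([5, 4, 3, 2, 1], capacity=10))  # 10: e.g. boxes 5+4+1 fill exactly
```

In the real GA the decoder would build box towers and place them on the container floor; only the evolutionary loop carries over unchanged.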



3.6. A Heuristic Algorithm with Heterogeneous Boxes
Zhoujing Wang, K.W. Li, Xiaoping Zhang (Xiamen Univ., Fujian)

The container loading problem (CLP) is notoriously NP-hard, an intrinsically difficult problem that is too complex to be solved in polynomial time on a system of serial computers. Heuristic algorithms are often the only viable option for tackling this type of combinatorial optimization problem. This article puts forward a heuristic algorithm based on a tertiary tree model to handle the CLP with heterogeneous rectangular boxes. A dynamic spatial decomposition method is employed to partition the unfilled container space after a group of homogeneous boxes is loaded into the container. This decomposition approach, together with an optimal-fitting sequencing rule and an inner-left-corner-occupying placement rule, permits a holistic filling strategy to pack a container. A comparative study with existing algorithms and an illustrative example demonstrate the efficiency of this heuristic algorithm.
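The "dynamic spatial decomposition" can be illustrated by its basic split step: placing a box at the inner-left corner of a free space leaves three disjoint residual cuboids, the children in a tertiary tree. A sketch under the simplifying assumption that only extents (not coordinates) are tracked; names are illustrative:

```python
def split_space(space, box):
    """Split the free space left after placing `box` at the inner-left
    corner of `space` into three disjoint sub-spaces (right, front, top).
    Spaces and boxes are (length, width, height) tuples."""
    sl, sw, sh = space
    bl, bw, bh = box
    assert bl <= sl and bw <= sw and bh <= sh, "box must fit the space"
    return [
        (sl - bl, sw, sh),   # space to the right of the box
        (bl, sw - bw, sh),   # space in front of the box
        (bl, bw, sh - bh),   # space above the box
    ]

print(split_space((10, 8, 6), (4, 3, 2)))
# [(6, 8, 6), (4, 5, 6), (4, 3, 4)]
```

The three sub-space volumes plus the box volume always sum to the original space volume, so the decomposition loses no room; the recursion over these children is what forms the tertiary tree.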


3.7. A parallel tabu search algorithm for the container loading problem
A. Bortfeldt , H. Gehring and D. Mack
FernUniversität, Fachbereich Wirtschaftswissenschaft, Postfach 940, 58084, Hagen, Germany

This paper presents a parallel tabu search algorithm for the container loading problem with a single container to be loaded. The emphasis is on the case of a weakly heterogeneous load. The distributed-parallel approach is based on the concept of multi-search threads according to Toulouse et al. [Issues in designing parallel and distributed search algorithms for discrete optimization problems, Publication CRT-96-36, Centre de recherche sur les transports, Université de Montréal, Canada, 1996], i.e., several search paths are investigated concurrently. The parallel searches are carried out by differently configured instances of a tabu search algorithm, which cooperate by the exchange of (best) solutions at the end of defined search phases. The parallel search processes are executed on a corresponding number of LAN workstations. The efficiency of the parallel tabu search algorithm is demonstrated by an extensive comparative test including well-known reference problems and loading procedures from other authors.
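Stripped of the parallel machinery, the tabu-search core is: evaluate all non-tabu swap moves on the current loading sequence, take the best, and remember the move for a few iterations. This sketch uses a toy volume-only objective instead of a real loading procedure; the paper runs several differently configured instances of this loop concurrently and exchanges best solutions. All names and parameters are illustrative:

```python
from collections import deque
import itertools

def packed_volume(seq, volumes, capacity):
    """Toy objective: greedy volume packed following the sequence order."""
    used = 0
    for i in seq:
        if used + volumes[i] <= capacity:
            used += volumes[i]
    return used

def tabu_search(volumes, capacity, iterations=50, tenure=5):
    n = len(volumes)
    current = list(range(n))
    best = current[:]
    tabu = deque(maxlen=tenure)          # recently applied swap moves
    for _ in range(iterations):
        candidates = []
        for i, j in itertools.combinations(range(n), 2):
            if (i, j) in tabu:
                continue
            neighbour = current[:]
            neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
            candidates.append(((i, j), neighbour))
        if not candidates:               # every move is tabu: stop
            break
        move, current = max(
            candidates,
            key=lambda c: packed_volume(c[1], volumes, capacity))
        tabu.append(move)
        if packed_volume(current, volumes, capacity) > \
           packed_volume(best, volumes, capacity):
            best = current[:]
    return packed_volume(best, volumes, capacity)

print(tabu_search([7, 6, 5], 11))  # 11: the 6 and 5 boxes fill the container
```

Accepting the best non-tabu neighbour even when it is worse than the current solution is what lets tabu search climb out of local optima, unlike plain hill climbing.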

4. Requirement specification for Load Plan PCMS:

a. To develop an efficient algorithm that can generate a loading plan for goods in an aircraft or container.

b. Input data for the algorithm:
1. Job card data with the number of pieces of goods to be filled in an aircraft/container, with dimensions (volume).
2. Dimensions of the aircraft area to be filled, or of the container to be filled.

c. Output data for the algorithm:

1. To provide the list of job cards that would be filled in the aircraft or container.
2. To provide the exact location of goods to be placed in the aircraft or container.

d. There would be an option to offload a particular job from the container or aircraft. The algorithm will automatically provide re-scheduled information after this change.

e. There would be an option to load a particular job to the aircraft or container as a priority. The algorithm will automatically provide re-scheduled information after this change.

f. An email alert will be generated to all the clients, shipper, consignee, agent, operations person and the person who prepared the plan.

g. There would be a 3-D graphical representation for load planning showing details of the exact items loaded in the aircraft or container.
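The inputs in (b) and outputs in (c), together with the priority option in (e), can be captured with simple records and a first-cut, volume-only selection; the exact 3-D placement of (c)2 would still need a real packing heuristic on top. Field names here are illustrative, not part of the PCMS schema:

```python
from dataclasses import dataclass

@dataclass
class JobCard:                      # requirement (b)1
    job_id: str
    pieces: int
    length: float
    width: float
    height: float
    priority: bool = False          # requirement (e): load-first flag

    @property
    def volume(self) -> float:
        return self.pieces * self.length * self.width * self.height

def plan_load(jobs, container_volume):
    """Requirement (c)1, first cut: pick job cards by priority first,
    then by descending volume, while the container volume allows."""
    ordered = sorted(jobs, key=lambda j: (not j.priority, -j.volume))
    loaded, used = [], 0.0
    for job in ordered:
        if used + job.volume <= container_volume:
            loaded.append(job.job_id)
            used += job.volume
    return loaded
```

Offloading a job (requirement (d)) then amounts to removing its card from the list and re-running `plan_load`, which regenerates the re-scheduled information.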

5. Future Guidelines:
• To contact the corresponding algorithm providers about the possibility of getting implementation code for the algorithms.
• After getting feedback from the algorithm providers, the best-suited algorithm can be selected.
• Available implementations are found in the C language, so there is a need to transform that logic into C#, then modify the algorithm as per our needs. This requires code exploration in C.
• To do R&D related to 3-D graphical representation of objects in .Net, specifically WPF.
• Third-party graphical libraries like ceometric can be explored for help with 3-D visualization of objects.

6. References:

Algorithms for the container loading problem:

tr03-07.pdf

gehringbortfeldt_container.pdf


A Joke about Software Architect

March 23, 2010

A prostitute, a landscape architect, a farmer and a software architect were
discussing the ‘oldest profession in the world’.

Well, that’s an easy one, the prostitute says, everybody knows that: it’s
prostitution, so the discussion is closed.

Wait a second, the landscape architect reacts, remember the Garden of Eden?
That garden was there before prostitution could take place; read the bible
for all the evidence.

Nice try, the farmer said, but first we had to plough that land; it was all
chaos and farmers made a nice, organized piece of land out of it where you
landscapers could create your garden.

And who do you think created all that chaos? the software architect replied.

source: google

Joke about Software Architect of Architects………Bill Gates……..:)

March 23, 2010

Bill Gates died in a car accident. He found himself in Purgatory being sized up by God…

“Well, Bill, I’m really confused on this call. I’m not sure whether to send you to Heaven or Hell. After all, you enormously helped society by putting a computer in almost every home in the world and yet you created that ghastly Windows 95. I’m going to do something I’ve never done before. In your case, I’m going to let you decide where you want to go!”

Bill replied, “Well, thanks, God. What’s the difference between the two?”

God said, “I’m willing to let you visit both places briefly if it will help you make a decision.” “Fine, but where should I go first?” God said, “I’m going to leave that up to you.” Bill said, “OK, then, let’s try Hell first.” So Bill went to Hell.

It was a beautiful, clean, sandy beach with clear waters. There were thousands of beautiful women running around, playing in the water, laughing and frolicking about. The sun was shining and the temperature was perfect. Bill was very pleased. “This is great!” he told God, “If this is Hell, I REALLY want to see Heaven!” “Fine,” said God and off they went.

Heaven was a high place in the clouds, with angels drifting about playing harps and singing. It was nice but not as enticing as Hell. Bill thought for a quick minute and rendered his decision. “Hmm, I think I prefer Hell” he told God. “Fine,” retorted God, “as you desire.” So Bill Gates went to Hell.

Two weeks later, God decided to check up on the late billionaire to see how he was doing in Hell. When God arrived in Hell, he found Bill shackled to a wall, screaming amongst the hot flames in a dark cave. He was being burned and tortured by demons. “How’s everything going, Bill?” God asked.

Bill responded – his voice full of anguish and disappointment, “This is awful, this is not what I expected. I can’t believe this happened. What happened to that other place with the beaches and the beautiful women playing in the water?”

God says, “That was the screen saver”.

Source: google

Software Architecture: ASP-Application Service Provider Model Analysis

March 23, 2010

To study the ASP (Application Service Provider) model for application hosting

By: Shahzad Sarwar
To: Development Team + Management
Date: 23 March 2010

1. Objective:
To study the deployment of PCMS (Pegasus Cargo Management System) in an Application Service Provider model.

2. ASP Model Architecture
The application software resides on the vendor’s system and is accessed by users through a web browser using HTML or by special purpose client software provided by the vendor. Custom client software can also interface to these systems through XML APIs. These APIs can also be used where integration with in-house systems is required. ASPs may or may not use multi-tenancy in the deployment of software to clients; some ASPs offer an instance or license to each customer (for example using Virtualization), some deploy in a single instance multi-tenant access mode, now more frequently referred to as “SaaS”.
Common features associated with ASPs include:
• ASP fully owns and operates the software application(s)
• ASP owns, operates and maintains the servers that support the software
• ASP makes information available to customers via the Internet or a “thin client”
• ASP bills on a “per-use” basis or on a monthly/annual fee

3. Benefits of adopting the ASP Model:
• Cost & Resource Savings. When companies take an e-learning initiative in-house, the focus often becomes the technology, not the learning. It's the technology that costs money and needs constant care. The ASP model allows an organization to concentrate on its core competencies (what it's in business to do) and not divert key resources from revenue-generating tasks.
• Focus. The ASP solution allows management in an organization to focus on core business activities, not technology issues. When that portion is outsourced, the focus stays on the business.
• Technical Support. All user and administrative support issues, troubleshooting and technical upgrades are handled by the ASP.
• 24/7 Accessibility. The ASP solution enables everyone in your organization to choose the time and place that's right for them to access the business solution, so the solution is available anytime, anywhere; all that's needed is an Internet connection.
• Just-In-Time (Immediate) Access. The ASP model makes the business solution available within minutes of a user determining a need for the information.

• Reduced Capital Expenditure. An application service provider can free you from capital investments, upgrade and ongoing management costs. Enjoy state-of-the-art applications without the expense of extensive application development or wholesale upgrades to hardware and networks. Escape costly hardware/software upgrade cycles and take control of the total cost of technology ownership.
• Broad Reach. Provide consistent applications to your organization's branch offices, mobile workers and telecommuters, and bring together widely dispersed geographic locations across diverse platforms.
• Predictability. ASPs offer higher and more reliable performance levels than most organizations can achieve themselves: guaranteed network uptime, higher levels of security, and greater scalable network storage.
• Pay-as-you-go. An ASP will likely charge you a monthly rental, which can help spread the costs.

4. Disadvantages of adopting the ASP Model:
• The client must generally accept the application as provided, since ASPs can only afford a customized solution for the largest clients.
• The client may rely on the provider for a critical business function, thus limiting their control of that function.
• Changes in the ASP market may result in changes in the type or level of service available to clients.
• Integration with the client's non-ASP systems may be problematic.
• Evaluating an Application Service Provider's security when moving to an ASP infrastructure can come at a high cost, as the firm must assess the level of risk associated with the ASP itself. Failure to properly account for such risk can lead to:
o Loss of control of corporate data
o Loss of control of corporate image
o Insufficient ASP security to counter risks
o Exposure of corporate data to other ASP customers
o Compromise of corporate data

• Performance: If you have a dial-up modem, the speed is unlikely to be satisfactory for all but the most basic applications. You need to be looking at ISDN, ADSL or leased lines for good performance, all of which cost more. Communication costs include the ongoing phone/leased-line charges, the likelihood that you will need ISDN lines or faster, and the costs for that.

• Continual payments: If you stop paying your monthly fee, you won't be able to use the application any more, whereas software you buy is yours to keep once you have bought it. (It might also be the case that after some time you will have paid more by renting the software than you would have by buying a package, but this depends on each situation, and you do need to consider the TCO.)
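The rent-versus-buy trade-off behind this point can be made concrete with a simple break-even estimate; all figures below are hypothetical inputs, not vendor pricing:

```python
def break_even_months(purchase_price, annual_maintenance, monthly_rent):
    """Months after which cumulative rent exceeds buying outright plus
    ongoing maintenance. All figures are hypothetical examples."""
    if monthly_rent * 12 <= annual_maintenance:
        raise ValueError("renting never exceeds owning with these figures")
    month, owned_cost, rented_cost = 0, float(purchase_price), 0.0
    while rented_cost <= owned_cost:
        month += 1
        rented_cost += monthly_rent
        owned_cost = purchase_price + annual_maintenance * (month / 12)
    return month

print(break_even_months(12000, 1200, 600))  # 25 months with these figures
```

Before the break-even point the ASP rental is cheaper in cash-flow terms; after it, ownership wins, which is why the TCO comparison depends on how long the application will be used.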

• Not having data in-house: This is probably a conceptual issue rather than a true disadvantage, but it might be important to some organizations. In many instances, to all intents and purposes, you can use your data in the same way as if it were in-house. It does mean you are reliant on the ASP ensuring their hardware is always available, and you should discuss how they manage possible hardware failures (and fault tolerance). See also the issues on security below.

• Communication breakdowns: It is possible, although more unlikely these days, that you could find yourself unable to connect to your ASP if there is a problem with your communication links.

• Security: This concerns both having your data held away from your offices and preventing security violations of your communication links. At the extreme, you have to decide whether you would be happy running a ‘mission critical’ application through an outside organization.

• The ASP may be unable to provide the level of service committed because of technical, labor, financial, or other problems.

5. ASP Model Implementation:
There are many implementation technologies for the ASP model, via application hosting or application virtualization. Following are some major players in this domain:
• XenApp by Citrix
• Terminal Services by Microsoft
• Sun Secure Global Desktop by Sun

5.1. Citrix XenApp (formerly Citrix MetaFrame Server and Citrix Presentation Server) is an application virtualization/application delivery product that allows users to connect to their corporate applications. XenApp can either host applications on central servers and allow users to interact with them remotely or stream and deliver them to user devices for local execution.

5.1.1. Citrix XenApp Architecture
Citrix XenApp™ is an on-demand application delivery solution that centralizes application management in the datacenter and delivers applications as an on-demand service to users anywhere, using any device. Utilizing integrated application virtualization and session virtualization technologies, XenApp overcomes the challenges associated with historic application deployment methods to reduce the cost of application management by up to 50 percent, deliver applications instantly to users and secure application access.

Single instance management
To simplify application management, Citrix XenApp stores application packages on centralized network storage. Compared to traditional application deployment, which requires multiple application packages to support many diverse user configurations, application management with XenApp requires only a single package for each application. Application packages created with XenApp contain all of the necessary information for delivery to any supported operating system. Once packaged, XenApp utilizes application streaming to deliver applications to target devices, whether they be user PCs or XenApp hosting servers. To this end, XenApp also simplifies the deployment, configuration, and maintenance of hosting servers via integrated server image management. With single image server management, XenApp can provision additional application hosting capacity in the time it takes to boot a server. Furthermore, XenApp updates and upgrades can be rolled out to all servers with a simple reboot. Single instance management greatly simplifies application management and makes XenApp the most efficient and dynamic application delivery system in its class.

Self-service application delivery
With Citrix Dazzle, users subscribe to the applications they need from a simple enterprise app storefront. When users request an application, XenApp determines the best delivery method for the application in real time. If the user has the correct Receiver software and security profile, and meets pre-determined network, hardware, and application requirements, then XenApp utilizes local application delivery. With local application delivery, XenApp uses application streaming to deliver the application into an isolated environment on the user's PC. This form of delivery uses the PC's local computing resources to run the application and enables users to take applications with them even while they are disconnected from the network. Alternatively, if XenApp determines that the user's device cannot run the application locally, then XenApp falls back to session virtualization. With session virtualization, XenApp uses application streaming to deliver the application to hosting servers and connects the user to a remote session running their application. This form of delivery uses datacenter resources to run the application and enables users to access applications from anywhere. On-demand application delivery makes it possible for XenApp to deliver applications with the highest level of application compatibility to any device while ensuring the most optimal performance and user experience.

Any device, anywhere
XenApp surpasses traditional application deployment solutions by utilizing application virtualization technology to deliver any application to any device. Citrix Receiver and Receiver plug-ins enable application delivery to Windows, Mac, Linux and even UNIX devices. Over 20 manufacturers such as HP, Wyse, and iGel include XenApp plug-in software in their thin-client and access device products. Even users on iPhone, iPod Touch, Windows Mobile, Symbian and EPOC devices can access Windows and UNIX applications delivered via XenApp. The ability to deliver any application to any device, anywhere makes XenApp the most complete application management solution on the market today.

High definition experience
XenApp has been built to ensure the best experience and performance for users regardless of their device, operating system or connection. For applications delivered to the user's local device, HDX IntelliCache accelerates initial application deployment and optimizes application communications. For applications hosted on servers, HDX MediaStream, 3D, RealTime, IntelliCache and Broadcast technologies work together to orchestrate the most optimized computing experience, even over high-latency connections. In fact, regardless of the delivery method used, application management with XenApp enables a better-than-installed user experience when compared to traditional application deployment and installation. This is accomplished by preserving the "like-installed" computing experience that users are accustomed to while enhancing application portability, security and function. For example, the EasyCall voice services technology included with XenApp enables Voice over IP dialing and call conferencing capabilities to be integrated into any application without custom development. Furthermore, integrated profile management software ensures that user, application and environment settings remain consistent as users roam between devices and operating systems. This extensive portfolio of application virtualization, performance and delivery optimization technologies makes XenApp the only application management solution capable of delivering a high definition user experience to any user on any device.

Secure by design
Centralized application management is the most secure architecture for delivering applications. With session virtualization, XenApp keeps data in the datacenter while only screen updates, mouse clicks and keystrokes transit the network. As an added security measure for applications delivered through either session virtualization or application virtualization, centralized password control, multi-factor authentication, encrypted delivery and a hardened SSL VPN appliance eliminate the chance for loss or theft of data. Built-in configuration logging and SmartAuditor technology enable IT to track system changes and even record user activity into a video file to keep a visual record of system and application use for security and litigation purposes. Application management with XenApp increases application portability and user productivity while ensuring data security and IT access control.

Enterprise class scalability

As an enterprise class infrastructure for application management, XenApp can support implementations with as few as 2 servers or scale on-demand to support multiple data centers, thousands of users and multiple sites throughout the world. In fact, as the most mature on-demand application delivery solution on the market, XenApp is proven to support more than 70,000 users, scale beyond 1,000 servers in a single implementation and ensure 99.999 percent application availability. This scalability is made easy with integrated XenServer virtualization technology and provisioning services. When used together, they enable IT to scale their XenApp server farm to support thousands of new users on-demand. Built in load testing, performance monitoring and activity logging tools help IT to size their infrastructure correctly, monitor usage and performance, scale when needed, resolve issues quickly, and even pinpoint malicious behavior. This enterprise-class foundation enables IT to meet service level agreements and quickly respond to business and user needs. With XenApp, corporate IT teams finally have the global, scalable, end-to-end application management solution that IT has been looking for.

5.1.2. Citrix XenApp Features & Benefits

Deliver applications on-demand to any user anywhere
Citrix XenApp™ is an on-demand application delivery solution that reduces the cost of Windows® application management by up to 50%. XenApp enables IT to centralize and manage a single instance of each application in the datacenter and deliver applications to users for online or offline use, while providing a high definition experience. It revolutionizes Windows application management by virtualizing applications and delivering them as a centralized on-demand service to any user anywhere on any device. Calculate your own savings using the free ROI calculator and explore the features below to learn more about how virtualizing applications with XenApp can help your business reduce costs, ensure security and increase user, IT, and business performance and productivity.

Self-service delivery of your virtual applications
System intelligence automatically determines the best method for delivering virtual applications as an on-demand service to users through a personalized and easy-to-use self-service storefront.

Access virtual applications from any device, anywhere
Users can simply and securely access virtual applications instantly with a consistent experience regardless of location or device. In fact, XenApp can deliver any Windows application to any of over 30 client operating systems including Mac and even the Apple iPhone.

Ensure a high definition user experience
Virtualizing applications with Citrix XenApp delivers a high performance, high definition user experience from any device, on any network, even for graphics-rich and multimedia content. Users are assured of a seamless experience with zero downtime and higher overall productivity.

Secure architecture, secure delivery, secure by design
Centralized application management is the most secure architecture for delivering applications. With session virtualization technology, data remains in the datacenter while only screen updates, mouse clicks and keystrokes transit the network. Centralized password control, multi-factor authentication, encrypted delivery and a hardened SSL VPN appliance eliminate the chance for loss or theft of data.

Single instance server and application management
Virtual application packages and server images are stored, maintained and updated once in the datacenter and delivered on-demand. This simplifies system and application management, improves application compatibility and makes it easy to provide real-time updates to users.

Enterprise class application management
XenApp is proven to support more than 100,000 users, scale beyond 1,000 servers in a single implementation and ensure 99.999 percent application availability. The enterprise-class foundation, coupled with centralized application management, monitoring and automation tools, enables rapid response to business and user needs.

5.2. Terminal Services: Remote Desktop Services in Windows Server® 2008 R2 provides technologies that enable users to access Windows-based programs that are installed on a Remote Desktop Session Host (RD Session Host) server, or to access the full Windows desktop. With Remote Desktop Services, users can access an RD Session Host server from within a corporate network or from the Internet.

In Windows Server 2008, Terminal Services introduced RemoteApp programs, which are programs that are accessed remotely through Remote Desktop Services and appear as if they are running on the end user’s local computer. In Windows Server 2008 R2, Remote Desktop Services provides administrators the ability to group and personalize RemoteApp programs as well as virtual desktops and make them available to end users on the Start menu of a computer that is running Windows® 7. This new feature is called RemoteApp and Desktop Connection.
RemoteApp and Desktop Connection provides a personalized view of RemoteApp programs, session-based desktops, and virtual desktops to users. When a user starts a RemoteApp program or a session-based desktop, a Remote Desktop Services session is started on the Remote Desktop Session Host (RD Session Host) server that hosts the remote desktop or RemoteApp program. If a user connects to a virtual desktop, a remote desktop connection is made to a virtual machine that is running on a Remote Desktop Virtualization Host (RD Virtualization Host) server. To configure which RemoteApp programs, session-based desktops, and virtual desktops are available through RemoteApp and Desktop Connection, you must add the Remote Desktop Connection Broker (RD Connection Broker) role service on a computer that is running Windows Server 2008 R2, and then use Remote Desktop Connection Manager.
In Windows 7 and Windows Server 2008 R2, you configure RemoteApp and Desktop Connection by using Control Panel. After RemoteApp and Desktop Connection is configured, RemoteApp programs, session-based desktops, and virtual desktops that are part of this connection are available to users on the Start menu of their computer. Any changes that are made to RemoteApp and Desktop Connection, such as adding or removing RemoteApp programs or virtual desktops, are automatically updated on the client and on the Start menu.
Users can use the new RemoteApp and Desktop Connection notification area icon to:
• Identify when they are connected to RemoteApp and Desktop Connection.
• Disconnect from RemoteApp and Desktop Connection if the connection is no longer needed.
Administrators can create a client configuration file (.wcx) and distribute it to users within their organization so that the user can automatically configure RemoteApp and Desktop Connection. Administrators can also write and distribute a script to run the client configuration file silently so that RemoteApp and Desktop Connection is set up automatically when the user logs on to their account on a Windows 7 computer.
5.3. Sun Secure Global Desktop (SGD):
SGD provides secure access to both published applications and published desktops running on Microsoft Windows, Unix, mainframe and System i systems, via a variety of clients ranging from fat PCs to thin clients such as Sun Rays.
A large range of client devices can be used to connect to a Secure Global Desktop Server, including Microsoft Windows PCs, Solaris desktops, Apple Macintoshes, Linux PCs, thin clients such as those from Sun and Wyse, and mobile devices. The only requirement on the client side is a Web browser with a Java Runtime Environment installed.
A client device connects to the Secure Global Desktop Server either via a supported Java-enabled browser or via the Native Client software. The Native Client can be downloaded from an SGD installation's login page: instead of logging in and letting the Java applet handle the connection automatically, you can download the Native Client from the SGD main login page, install it locally, and then launch it and connect through it. When you connect via a browser for the first time, the SGD client (the client side of the aforementioned Java component) is downloaded so that your connection can be SSL-encrypted. The browsers officially supported are Mozilla Firefox, Internet Explorer, and Safari, but other browsers may work too as long as they have access to a working Java plugin. The latest Java Runtime Environment is recommended, but at least version 1.5 is required.
The Desktop Client connects to the Secure Global Desktop Server via the Adaptive Internet Protocol (AIP). AIP is bandwidth and latency aware and can adjust compression and performance dynamically on links as diverse as a 56K modem or a 100Mb LAN.
Session Resumability and Mobility is a feature allowing remote access to desktop applications from essentially any Java-enabled browser in the world. This makes it possible to run applications in one’s office, then go to another location such as a customer site or one’s home and transfer your existing desktop session to a computer there.
Centralisation is an important feature for organizations concerned about secure data being stored on remote devices such as notebook computers, and the associated risk of theft of the device and its data. Applications accessed via SGD run in the centralised server room, meaning that all data is backed up and secured via the normal datacenter practices of the organization. There is a potential for increased performance and efficiency, since the actual computation is performed on larger systems with more resources; centralisation also makes resources considerably easier to manage.
Applications can be assigned to users or groups of users using the Object Manager which can automatically present new applications to users dynamically without them needing to log out. Profiles can be created to group similar types of users; these profiles control the applications that a logged-in user is allowed to use. When a new application or an upgrade to an existing application is required, an administrator can just push these changes out to the users. This simplifies Desktop SOE migrations.
SGD’s password caching feature, authentication tokens, and ability to integrate with Active Directory and LDAP give it the ability to easily set up single sign-on to applications: a user logs into SGD once, and can then run applications without having to perform an additional login, even if different usernames and passwords are used for the different back-end applications.
With the same SGD infrastructure one can host an organisation’s internal desktop applications, but also be able to access desktop applications remotely without the need for expensive VPN solutions. The Firewall Traversal Feature makes it possible to put an application server in an organisation’s DMZ with only port 443 (HTTPS) accessible from the outside world. An SGD server can be accessed via HTTP or HTTPS.
SGD also integrates with the Sun Java System Portal Server, making it possible to deliver desktop applications via a Secure Portal using a Portlet, including mail, calendar and other Portal features.
Sun Java System Identity Manager can also be used to manage all user accounts and passwords via one webform, including integration of LDAP, Active Directory, Oracle or other commercial or home-grown access control repositories.

6. Conclusion: ASP Model can reallly be use full when planning for PCMS for small size clients. As application on rental basses can be very cost effective.
 After comparing SGD,terminal services and XenApp , it is concluded that Terminal services in Windows server 2008 has a very nice feature of Remte Apps, which can provide an option to host application on termical services sever and application can be run at client end with out installing.
 A POC is required to deploy PCMS as Remote App and find out limits if there are any.
 Operatioal cost secpially maintanance of server of ASP Modal can be very high and problematic,so reliable hosting company can serve to host application at their Data center.
 A comparative analysis of hosting services by top hosting service providers is required.


ASP Model Hosting providers:

Terminal Services:

Citrix XenApp

Sun Secure Global Desktop:


Software Architecture - SQL Server Reporting Services 2008 – Fax feature

March 11, 2010

To study fax-sending options for PCMS (Pegasus Cargo Management System), a reporting application running on Microsoft SQL Server Reporting Services 2008.

• Microsoft SQL Server Reporting Services 2008 does not provide a native faxing service the way it provides an emailing service.
• So the first solution is to send the fax manually: export the SSRS 2008 report to a PDF file, then follow the normal fax-sending process.
• The second option is to adopt a fax automation service such as eFax, an online fax service that eliminates the need for a fax machine, an extra telephone line and all the associated expenses (paper, ink cartridges etc.). It provides a real fax number that is tied to email; faxes are sent and received as email.
• So when the “send a fax” button is pressed, an email is generated internally via SSRS 2008 to the eFax channel, which delivers the fax to the target fax number.
• With this approach we can also utilise the email schedule definition mechanism to schedule faxes.

Reference Information:

Do I get a real fax number?
Yes. To the person faxing you, it looks and acts like any other fax number. When someone faxes to your number, the fax is converted to a file that is emailed to you as an attachment.

How do I receive faxes?
When you sign up for eFax, you’ll choose the location for a local fax number or choose a toll-free fax number. When someone faxes you, you receive the fax in your email inbox as a file attachment. Just double click the file to read it. You’ll be able to check your faxes anywhere you can check your email. You also have the option to route all your incoming faxes to up to 5 different email addresses — no more copying, distributing, or forwarding.

Can I send faxes?
It’s easy to send a fax with your eFax subscription. One way is to attach either a file (click here for supported file types) or a scanned hard copy to an email. Then in the To: field, type the recipient’s fax number like this: Click send and it arrives as a paper fax in their fax machine.

Can I save my faxes?
Of course. Save a fax just like you would any email attachment and they’ll only be a click away. You can also access your faxes online by logging in at for up to one year with eFax

How reliable is eFax?
eFax has developed the largest digital fax network in the world with local numbers available in over 2,000 cities worldwide and toll-free numbers that cover the U.S., Canada and several European countries. The eFax network sports a 99.5% up-time, meaning no one will ever get a busy signal and you’ll never worry about running out of paper or toner. Our patented technology ensures transmission times that average 2-5 minutes. If you ever have any questions, your eFax subscription includes live phone support.

For details related to costing and pricing, see the link below.

Rates for local numbers:

EFax Plus:
• Receiving: 130 pages a month for $16.95
• Annual: pay for 10 months and get 2 months free; price comes out to $14.13/month (paid up front at $169.50)
• Sending: $3 credit for sending, which is up to 30 pages a month
• Overage: $0.15/page receiving, $0.10/page sending
• 1 year fax storage

EFax Pro:
• Receiving: 200 pages a month for $19.95
• Annual: price comes out to $18.34/month (paid up front at $220)
• Overage: $0.10/page receiving and sending
• Includes voicemail delivered as an email
• 2 years fax storage

EFax Pro2:
• Receiving: 500 pages a month for $49.95
• Annual: pay for 10 months and get 2 months free; price comes out to $41.62/month (paid up front at $499.95)
• Sending: $3 credit for sending, which is up to 30 pages a month
• Overage: $0.10/page receiving and sending
• Includes voicemail delivered as an email
• 2 years fax storage

*Toll-free numbers do not include any pages in the plan. They are $16.95 monthly plus $0.20 per incoming page and $0.10 per outgoing page (domestic).

**Overage fees are charged up front in increments of $10; the unused balance is applied to future overages in succeeding months.

j2 Global Communications
Phone: 1 (323) 860-9465
eFax: 1 (323) 297-2576

17 Million eFax Numbers : 3,100 Cities : 46 Countries : 6 Continents : 12 Consecutive Years of Revenue Growth

Whitepaper To Study FILESTREAM Option In SQL Server

March 3, 2010

Topic: Document Storage Management for PCMS
To: Development Team
Dated: 3rd March 2010

To do the analysis for large file storage in the MS SQL database.

Problem Definition:
PCMS has many documents that need to be uploaded against Job Cards in all the modules. As the volume of documents increases with the passage of time, it causes major development and operational overheads, growing by gigabytes within a few months.

A study was conducted a few months back to adopt a third-party file system to maintain documents outside the actual database. Many solutions were analyzed, but no option qualified on all the selection parameters: security, access speed, storage efficiency and operational management.

Microsoft has provided a native solution to this problem. They have merged the benefits of file storage and database storage under one umbrella with a technology named FILESTREAM, through which large file storage is managed.

FILESTREAM Definition: FILESTREAM integrates the SQL Server Database Engine with an NTFS file system by storing varbinary(max) binary large object (BLOB) data as files on the file system.

To specify that a column should store data on the file system, specify the FILESTREAM attribute on a varbinary(max) column. This causes the Database Engine to store all data for that column on the file system rather than in the database file.
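
As a minimal sketch of this (the database name, file paths, and table/column names below are placeholders, and the instance-level FILESTREAM setting is assumed to be enabled in SQL Server Configuration Manager):

```sql
-- Enable FILESTREAM access at the server scope.
EXEC sp_configure filestream_access_level, 2;
RECONFIGURE;
GO

-- Create a database with a FILESTREAM filegroup (paths are placeholders).
CREATE DATABASE PcmsDocs
ON PRIMARY  (NAME = PcmsDocs_data, FILENAME = 'C:\Data\PcmsDocs.mdf'),
FILEGROUP DocsFSGroup CONTAINS FILESTREAM
            (NAME = PcmsDocs_fs,   FILENAME = 'C:\Data\PcmsDocsFS')
LOG ON      (NAME = PcmsDocs_log,  FILENAME = 'C:\Data\PcmsDocs.ldf');
GO

-- A table with a FILESTREAM column needs a ROWGUIDCOL column
-- with a UNIQUE constraint.
CREATE TABLE PcmsDocs.dbo.JobCardDocument
(
    DocId    UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE
             DEFAULT NEWSEQUENTIALID(),
    FileName NVARCHAR(260) NOT NULL,
    Content  VARBINARY(MAX) FILESTREAM NULL   -- stored on the NTFS file system
);
```
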

Application Design and Implementation
When you are designing and implementing applications that use FILESTREAM, consider the following guidelines:
• Use NULL instead of 0x to represent a non-initialized FILESTREAM column. The 0x value causes a file to be created, and NULL does not.
• Avoid insert and delete operations in tables that contain nonnull FILESTREAM columns. Insert and delete operations can modify the FILESTREAM tables that are used for garbage collection. This can cause an application’s performance to decrease over time.
• In applications that use replication, use NEWSEQUENTIALID() instead of NEWID(). NEWSEQUENTIALID() performs better than NEWID() for GUID generation in these applications.
• The FILESTREAM API is designed for Win32 streaming access to data. Avoid using Transact-SQL to read or write FILESTREAM binary large objects (BLOBs) that are larger than 2 MB. If you must read or write BLOB data from Transact-SQL, make sure that all BLOB data is consumed before you try to open the FILESTREAM BLOB from Win32. Failure to consume all the Transact-SQL data might cause any successive FILESTREAM open or close operations to fail.
• Avoid Transact-SQL statements that update, append or prepend data to the FILESTREAM BLOB. This causes the BLOB data to be spooled into the tempdb database and then back into a new physical file.
• Avoid appending small BLOB updates to a FILESTREAM BLOB. Each append causes the underlying FILESTREAM files to be copied. If an application has to append small BLOBs, write the BLOBs into a varbinary(max) column, and then perform a single write operation to the FILESTREAM BLOB when the number of BLOBs reaches a predetermined limit.
• Avoid retrieving the data length of lots of BLOB files in an application. This is a time-consuming operation because the size is not stored in the SQL Server Database Engine. If you must determine the length of a BLOB file, use the Transact-SQL DATALENGTH() function to determine the size of the BLOB if it is closed. DATALENGTH() does not open the BLOB file to determine its size.
• If an application uses the Server Message Block 1 (SMB1) protocol, FILESTREAM BLOB data should be read in 60-KB multiples to optimize performance.
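
For the Win32 streaming access mentioned above, the T-SQL side of the handshake can be sketched as follows (table and column names are hypothetical, used only for illustration); the returned path and transaction token are then passed to OpenSqlFilestream() in the Win32 client code:

```sql
BEGIN TRANSACTION;

-- PathName() returns the logical path of the BLOB; the transaction
-- context token ties the Win32 file handle to this transaction.
SELECT Content.PathName()                    AS FilePath,
       GET_FILESTREAM_TRANSACTION_CONTEXT()  AS TxContext
FROM   dbo.JobCardDocument
WHERE  FileName = N'invoice-1001.pdf';

-- ...the client opens the file via OpenSqlFilestream() and streams it here...

COMMIT TRANSACTION;
```
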

When to Use:

The size and use of the data determine whether you should use database storage or file system storage. Consider FILESTREAM storage when:
• Objects that are being stored are, on average, larger than 1 MB.
• Fast read access is important.
• You are developing applications that use a middle tier for application logic.
For smaller objects, storing varbinary(max) BLOBs in the database often provides better streaming performance. The sizes of file-system-based BLOBs are limited only by the volume size of the file system; the standard varbinary(max) limitation of 2 GB does not apply to BLOBs stored in the file system.

Code for FileStream Option

Because of a WordPress restriction, after downloading ‘final-filestream.doc’, rename it to ‘final-filestream.rar’, then extract it to get the code.

White Paper on Concurrency for PCMS Application Architecture

March 1, 2010

Dated: 19th July 2009
Version: 1.0
By:Shahzad Sarwar
To: Related Project Managers/Consultants
Development Team

What is Concurrency/Locking?

Locking is a method used to protect records that will be accessed by multiple users so that concurrency errors do not occur (when multiple users change records nearly simultaneously, resulting in inconsistencies).
“Locking” refers to preventing access in some way to the file in order to protect it while a user is making an update.
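
As an illustrative T-SQL sketch of such locking (table and column names are placeholders), a session can take and hold an update lock so that competing writers block until it commits:

```sql
BEGIN TRANSACTION;

-- UPDLOCK + HOLDLOCK acquires an update lock and holds it until commit,
-- so any other session trying to modify (or UPDLOCK-read) this row blocks.
SELECT Status
FROM   dbo.JobCard WITH (UPDLOCK, HOLDLOCK)
WHERE  JobCardId = 1001;

-- The row is protected while this session makes its update.
UPDATE dbo.JobCard
SET    Status = N'Closed'
WHERE  JobCardId = 1001;

COMMIT TRANSACTION;
```
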

Types of Concurrency
There are two types of locking / concurrency control:
• Pessimistic Concurrency
• Optimistic Concurrency
Difference between Pessimistic Concurrency and Optimistic Concurrency
In pessimistic concurrency, the server acquires locks to block access to data that another process is using. Pessimistic concurrency avoids conflicts by acquiring locks on data that is being read, so no other process can modify it; it also acquires locks on data being modified, so no other process can access that data for either reading or modifying. Readers block writers and writers block readers.

In optimistic concurrency, the server uses row versioning to allow readers to see the state of the data before modifications occur. Older versions of data rows are saved, so a process reading data sees the data as it was when the process started reading and is not affected by changes being made to that data. A process that modifies data is unaffected by processes reading the data, because the reader accesses a saved version of the data rows. Readers do not block writers and writers do not block readers.
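
A common way to apply optimistic concurrency in SQL Server is a rowversion column: an update succeeds only if the row is unchanged since it was read. A sketch, with placeholder names:

```sql
-- Row-versioning reads can be enabled per database (requires exclusive access):
ALTER DATABASE PCMS SET READ_COMMITTED_SNAPSHOT ON;

-- Optimistic conflict detection with a rowversion column:
CREATE TABLE dbo.JobCard
(
    JobCardId INT PRIMARY KEY,
    Status    NVARCHAR(20) NOT NULL,
    RowVer    ROWVERSION          -- bumped automatically on every update
);
GO

DECLARE @JobCardId INT = 1001;
DECLARE @OriginalRowVer BINARY(8);  -- captured when the row was first read

UPDATE dbo.JobCard
SET    Status = N'Closed'
WHERE  JobCardId = @JobCardId
  AND  RowVer    = @OriginalRowVer;  -- 0 rows => another user changed the row

IF @@ROWCOUNT = 0
    RAISERROR (N'Concurrency conflict: the record was modified by another user.', 16, 1);
```
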

Replication of PCMS Database

March 1, 2010

Dated: 19th July 2009
By: Shahzad Sarwar, PSE, Comsoft
To: Related Project Managers/Consultants , Client
Case Study:
To sync, via replication, the data of different branch offices that run the Comsoft application named PCMS.

MS .Net 2.0
MS SQL Server 2005
WAN Connected MS SQL Servers on different branches of office
Business problem:
Client has regional offices or entities that collect and process data that must be sent to a central location. For example:
• Estimation/quotation/job data can be “rolled up” or consolidated from a number of servers at local warehouses/parties into a central server at corporate headquarters.
• Information from autonomous business divisions within a company can be sent to a central server.
In some cases, data is also sent from the central site to remote sites. This data is typically read-only at the remote site, such as the base/administration tables that are updated only at the central site.
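
One way to realise this roll-up in SQL Server 2005 is merge replication. A heavily abbreviated sketch of publishing a hypothetical JobCard table from the PCMS database is shown below; a real setup also needs snapshot agents, subscriptions at each branch, and conflict resolution configured:

```sql
-- Enable the PCMS database for merge publishing (run at the publisher).
EXEC sp_replicationdboption
     @dbname  = N'PCMS',
     @optname = N'merge publish',
     @value   = N'true';

-- Create a merge publication covering the data to be rolled up.
EXEC sp_addmergepublication
     @publication = N'PCMS_Rollup',
     @description = N'Branch-to-head-office roll-up of PCMS job data';

-- Add an article for each table to be synchronised.
EXEC sp_addmergearticle
     @publication   = N'PCMS_Rollup',
     @article       = N'JobCard',
     @source_object = N'JobCard';
```
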