
Getting Started with Lean Software Development

By Curt Hibbs, Steve Jewett, and Mike Sullivan

23,420 Downloads · Refcard 93 of 204


The Essential Lean Development Cheat Sheet

Lean Software Development is an outgrowth of the larger Lean movement that includes areas such as manufacturing, supply chain management, product development, and office operations. Its goal is the same: to deliver value to the customer more quickly by eliminating waste and improving quality. This DZone Refcard covers the specific practices involved in implementing a Lean Software Development process. Adopt one, several, or all of the practices and take your first step into the world of Lean Software Development.

About Lean Software Development

Lean Software Development is an outgrowth of the larger Lean movement that includes areas such as manufacturing, supply chain management, product development, and back-office operations. Its goal is the same: deliver value to the customer more quickly by eliminating waste and improving quality. Though software development differs from the manufacturing context in which Lean was born, it draws on many of the same principles.

Seven Principles of Lean Software Development

Lean Software Development embodies seven principles, originally described in the book Implementing Lean Software Development: From Concept to Cash [1], by Mary and Tom Poppendieck. Each of these seven principles contributes to the “leaning out” of a software development process.

Eliminate Waste

Waste is anything that does not contribute value to the final product, including inefficient processes, unnecessary documentation, and features that won't be used. Eliminating waste is the guiding principle in Lean Software Development.

Build Quality In

Building quality into a product means preventing defects, rather than using post-implementation integration and testing to detect them after the fact.

Create Knowledge

The knowledge necessary to develop a project, including requirements, architecture, and technologies, is seldom known or understood completely at project startup. Creating knowledge and recording it over the course of the project ensures the final product is in line with customer expectations.

Defer Commitment

Making irreversible decisions at the last reasonable moment allows time for the creation of more knowledge, which results in better decisions. Deferring commitment is positive procrastination.

Deliver Fast

Delivering fast puts the product in front of the customer quickly so they can provide feedback. Fast delivery is accomplished using short iterations, which produce software in small increments by focusing on a limited number of the highest priority requirements.

Respect People

Respecting people means giving the development team's most important resource, its members, the freedom to find the best way to accomplish a task, recognizing their efforts, and standing by them when those efforts are unsuccessful.

Optimize the Whole

Optimizing the whole development process generates better results than optimizing local processes in isolation, which is usually done at the expense of other local processes.

Lean vs. Agile

Comparing Lean and Agile software development reveals they share many characteristics, including the quick delivery of value to the customer, but they differ in two significant ways: scope and focus. The narrow scope of Agile addresses the development of software and focuses on adaptability to deliver quickly. Lean looks at the bigger picture, the context in which development occurs, and delivers value quickly by focusing on the elimination of waste. As it turns out, they are complementary, and real world processes often draw from both.


Newcomers to Lean Software Development sometimes have trouble implementing a Lean process. The Lean principles don't describe an “out-of-the-box” solution, so one approach is to start with an Agile methodology. However, a number of methodologies exist, and choosing the right one can be difficult.

One Step at a Time

All is not lost. What follows is a set of inter-related practices organized in a step-by-step fashion to allow projects to implement Lean Software Development one step at a time.

The following practices can stand alone, and implementing any of them will have a positive effect on productivity. Lean Software Development relies on prioritization, so the practices are ordered to generate the highest return on investment. While implementing any one practice will help lean out a process, adopting them in order will return the most “bang for the buck.”

The list of six practices is preceded by two prerequisites, or “zero practices,” that every software project should be doing, whether Lean, Agile, or something more traditional. If your project doesn't do these things, this is the best place to start.


Source Code Management and Scripted Builds are prerequisites for other practices outlined here. They are referred to as zero practices because they need to be in place before taking the first step toward Lean Software Development.

Source Code Management

Source code management (SCM) is a shared repository for all artifacts needed to build the project from scratch, including source code, build scripts, and tests. SCM maintains the latest source code so developers and build systems have up-to-date code.


Figure 1: Centralized Repository

Source code management is the first practice described because it is the foundation for a practical development environment, and it should be implemented before going any further.

  • Select an appropriate SCM system. Subversion is a popular open source option. Git is a newer distributed SCM system useful for large projects and distributed teams.
  • Put everything needed to build the product from scratch into the SCM system so critical knowledge isn't held only by specific individuals.

Scripted Builds

Scripted builds automate a build process by executing a set of commands (a script) that creates the final product from the source code stored in SCM. Scripts may be simple command files, make files, or complex builds within a tool such as Maven or Ant.

Scripted builds eliminate the potential errors of manual builds by executing the same way each time. They complete the basic development cycle of making changes, updating the SCM repository, and rebuilding to verify there are no errors.

  • Select an appropriate build tool for your project. Integrated development environments like Visual Studio or Eclipse have build managers or integrate with 3rd-party build managers.
  • Create a script that builds the product from scratch, starting with source code from SCM.
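
As an illustration, a minimal scripted build might look like the following Ant build file (the project name and the src/build directory layout are assumptions, not part of the original text):

```xml
<!-- build.xml: minimal scripted build (hypothetical project layout: src/, build/) -->
<project name="myproduct" default="dist">
  <!-- remove all previous build output so every build starts from scratch -->
  <target name="clean">
    <delete dir="build"/>
  </target>
  <!-- compile all sources checked out from SCM -->
  <target name="compile" depends="clean">
    <mkdir dir="build/classes"/>
    <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
  </target>
  <!-- package the compiled classes into the deliverable -->
  <target name="dist" depends="compile">
    <jar destfile="build/myproduct.jar" basedir="build/classes"/>
  </target>
</project>
```

Running `ant` from the project root then produces build/myproduct.jar from a clean tree, the same way every time.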


Figure 2: Zero Practices

Lean Principles

  • Create Knowledge: SCM consolidates project knowledge in a single place.
  • Eliminate Waste: Manual work is eliminated by automating builds.
  • Build Quality In: Automating builds eliminates a source of errors.


Daily Standup Meetings

Daily standup meetings allow each team member to provide status and point out problems or issues. The meetings are short and not intended to resolve problems; rather, they serve to make all team members aware of the state of the development effort.

Borrowing from the Scrum methodology, standups are conducted by having each member of the team answer three questions:

What did I do yesterday?
What will I do today?
What problems do I have?

Effective daily standups result from adhering to several simple rules:

  • Require all team members to attend. Anyone who cannot attend must submit their status via a proxy (another team member, email, etc.).
  • Keep the meeting short, typically less than 15 minutes. Time-boxing the meeting keeps the focus on the three questions.
  • Hold the meeting in the same place at the same time, every time.
  • Avoid long discussions. Issues needing further discussion are addressed outside the meeting so only the required team members are impacted.

Hot Tip

The Japanese word tsune, roughly translated, means “daily habits”. It refers to things, such as taking a shower, that are so ingrained into a daily routine that skipping them leaves one feeling that something is missing [2]. Make the daily standup part of the team's tsune so that a day without a standup feels incomplete.

Lean Principles

  • Respect People: Standups foster a team-oriented attitude; team members know what other members are doing and can get or give help as needed to move the project forward.
  • Create Knowledge: Sharing information regularly creates group knowledge from individual knowledge.


Automated Testing

Automated testing is the execution of tests using a single command. A test framework injects pre-defined inputs, validates outputs against expected results, and reports the pass/fail status of the tests without the intervention of a human tester. Automated testing ensures tests are run the same way every time and are not subject to the errors and variations introduced by testers.

While automated testing can be applied to all types of testing from unit and integration tests to user acceptance and performance/load tests, unit and integration testing is the best place to start.

  • Identify an appropriate test framework for the language in use. JUnit (for Java) and NUnit (for Microsoft .NET languages) are common frameworks.
  • Require all new code modules to have a unit test suite before being included in the build.
  • Retrofit unit test suites to existing legacy code only when the code is modified (writing unit tests for code which is already written and functional usually is not cost effective).
  • Develop integration tests by combining code modules and testing the modules together.
  • Use stubs and mock objects to stand in for code which has not yet been developed.
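
A minimal sketch of the stub technique from the last bullet is shown below. All names are hypothetical, and plain assertions stand in for a framework such as JUnit, which would express the same checks as @Test methods:

```java
// Sketch: unit-testing a class against a stub for a not-yet-developed
// collaborator. All names here are illustrative, not from the Refcard.

interface PaymentGateway {            // collaborator that doesn't exist yet
    boolean charge(String account, double amount);
}

class StubGateway implements PaymentGateway {
    boolean wasCalled = false;
    public boolean charge(String account, double amount) {
        wasCalled = true;             // record that the interaction happened
        return true;                  // canned success response
    }
}

class OrderProcessor {                // the code under test
    private final PaymentGateway gateway;
    OrderProcessor(PaymentGateway gateway) { this.gateway = gateway; }
    String process(String account, double amount) {
        return gateway.charge(account, amount) ? "SHIPPED" : "REJECTED";
    }
}

public class OrderProcessorTest {
    public static void main(String[] args) {
        StubGateway stub = new StubGateway();
        OrderProcessor processor = new OrderProcessor(stub);
        String status = processor.process("acct-1", 10.0);
        if (!"SHIPPED".equals(status)) throw new AssertionError(status);
        if (!stub.wasCalled) throw new AssertionError("gateway not called");
        System.out.println("all tests passed");
    }
}
```

Because the stub records the interaction, the test verifies both the result and that the collaborator was actually exercised, without waiting for the real gateway to be written.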

Hot Tip

Developing automated tests alongside production code may be a paradigm shift for many developers. One way to help developers adjust is to define testing standards calling out both what to test and how to do it. Adherence to the standards will create a culture where automated tests are the norm and will pay off in higher quality software.


Figure 3: Automated Testing

Test Execution

Each developer runs unit tests on individual code modules prior to adding them to the source code repository, ensuring all code within the repository is functional. An automated build runs both unit and integration tests to ensure changes do not introduce errors. The next practice, continuous integration, will make use of the build scripts and test suites to test the entire system automatically each time changes are checked into the repository.

Lean Principles

  • Build Quality In: Automated tests executed regularly and in a consistent manner prevent defects.
  • Eliminate Waste: Defects detected early are easier to correct and don't propagate.
  • Create Knowledge: Tests are an effective way to document how the code functions.


Continuous Integration

Continuous integration (CI) is the frequent integration of small changes during implementation. It seeks to reduce, or even eliminate, the long, drawn-out integration phase traditionally following implementation. Integrating small changes doesn't just spread the effort out over the whole cycle; it reduces total integration time, because small changes are easier to integrate and aid debugging by isolating defects to small areas of code.

CI systems use a source code repository, scripted builds, and automated tests to retrieve source code, build software, execute tests, and report results each time a change is made.

  • Use a dedicated build machine to host the CI system. Refer to the Continuous Integration: Servers and Tools Refcard (#87) for details on setting up a CI system.
  • Check code changes into the repository a minimum of once a day (per developer); once an hour or more is even better.
  • Immediately address any failures in the build. Fixing the build takes precedence over further implementation.
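
The check-build-test-report cycle these steps describe can be sketched in miniature. The toy Java program below simulates the loop with in-process stand-ins for the repository, the scripted build, and the test suite; a real CI server runs the same sequence on the build machine:

```java
// Toy in-process sketch of a CI cycle: detect a new revision, run the
// scripted build, run the automated tests, report. The repository, build,
// and tests are all simulated here (revision 3 deliberately fails tests).
import java.util.List;

public class ToyCiLoop {
    static int lastBuiltRevision = 0;

    static boolean build(int revision)    { return true; }          // scripted build
    static boolean runTests(int revision) { return revision != 3; } // automated tests

    // One polling pass: build and test only if the repository changed.
    static String checkOnce(int headRevision) {
        if (headRevision == lastBuiltRevision) return "idle";
        lastBuiltRevision = headRevision;
        if (!build(headRevision))    return "BUILD BROKEN r" + headRevision;
        if (!runTests(headRevision)) return "TESTS FAILED r" + headRevision;
        return "ok r" + headRevision;
    }

    public static void main(String[] args) {
        for (int rev : List.of(1, 1, 2, 3)) {
            System.out.println(checkOnce(rev));  // ok r1, idle, ok r2, TESTS FAILED r3
        }
    }
}
```

The "TESTS FAILED r3" report is the signal that, per the rule above, takes precedence over further implementation until the build is fixed.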

Hot Tip

While the use of a dedicated computer, or build machine, to host the CI system may seem obvious for a large project, it provides advantages on small projects as well:

  • Dedicated machines don't compete for resources, so builds are quicker and the results get back to the developers sooner.
  • Dedicated machines have a stable, well-known configuration. Builds don't fail because a new version of a library was loaded or the runtime environment was changed.

A CI system can also check coding standards, analyze code coverage, create documentation, create deployment packages, and deploy the packages. Anything that can be automated can be included in a CI system.

Figure 4: Continuous Integration

Lean Principles

  • Build Quality In: Continuous build and test ensures code is always functional.
  • Eliminate Waste: Frequent, small integrations are more efficient than an extended integration phase.


Less Code

Less code is not about writing less software; it's about implementing the required functionality with a minimum amount of code. Large code bases mean more implementation, integration, and debugging time, as well as higher long-term maintenance costs. All of these are non-value-added work (i.e., waste) when the code base contains unneeded or inefficient code.

All aspects of software development can affect the code base size. Requirements analysis resulting in features with little likelihood of use and overly generic, all-encompassing designs generate extra code. Scope creep and unnecessary features increase the amount of code. Even testing can generate unnecessary code if the code under test is itself unnecessary.

Minimizing code base size requires two actions: identify and eliminate unnecessary code, and write efficient code. Minimizing code base size is not unlike a fitness program: diet to eliminate the excess, and exercise to shape up what's left.

Eliminate Unnecessary Code

Eliminating unnecessary code means identifying the code, or the forces that create it, and removing it.

  • Adopt a fierce, minimalist approach. Every bit of code added to the code base must be justifiable. Remove excessive requirements, simplify designs, and eliminate scope creep.
  • Reuse code and employ libraries to reduce the amount of new code that must be written.
  • Prioritize requirements so developers implement important features first. As customers adjust the priorities over the course of development, they drive development of only useful features; unused features never get implemented.
  • Develop only for the current iteration. Working too far ahead risks doing work that will be thrown away as requirements and design change over time.

Improve Code Efficiency

Code efficiency doesn't refer to creating small, compact code by using arcane tricks and shortcuts. In fact, the opposite is true; efficient code uses coding standards and best practices.

  • Use coding standards to write readable and understandable code. Use best practices and proven techniques that are well understood by other developers.
  • Develop flexible, extensible code. Design and implement code with design patterns, refactoring, and emergent design.
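
As a small illustration of the reuse bullet above, the hypothetical example below implements the same behavior twice: once by hand and once with the standard library. Both are correct; the library version simply leaves fewer lines to debug and maintain:

```java
// Illustration of shrinking a code base by leaning on the standard library.
// The example is hypothetical; any utility duplicated in-house is a candidate.
import java.util.List;

public class LessCodeExample {
    // Hand-rolled version: more lines, more places for defects to hide.
    static String joinManual(List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.size(); i++) {
            if (i > 0) sb.append(", ");
            sb.append(parts.get(i));
        }
        return sb.toString();
    }

    // Library version: same behavior, one line to maintain.
    static String joinWithLibrary(List<String> parts) {
        return String.join(", ", parts);
    }

    public static void main(String[] args) {
        List<String> parts = List.of("a", "b", "c");
        System.out.println(joinManual(parts));       // a, b, c
        System.out.println(joinWithLibrary(parts));  // a, b, c
    }
}
```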

Hot Tip

The “big design up front”, or BDUF, approach to design can lead to overdesign and unused code. The opposite approach, sometimes referred to as “you ain't gonna need it” or YAGNI, creates only what is needed at the moment, but it can lead to brittle designs and inefficient code. A compromise that creates only what currently is necessary, but tempers that with some thought for the future, is a better approach. Scott Bain's book Emergent Design [3] describes such an approach.

Lean Principles

  • Eliminate Waste: Unnecessary code is waste; a smaller code base means less implementation, integration, and maintenance effort.
  • Build Quality In: Less code means fewer opportunities for defects.


Short Iterations

Iterations are complete development cycles resulting in the release of functional software. Traditional development methodologies often have iterations of six months to a year, but Lean Software Development uses much shorter iterations, typically 2 to 4 weeks. Short iterations generate customer feedback, and more feedback means more chances to adjust the course of development.

Feedback and Course Corrections

Feedback from the customer is the best way to discover what's valuable to them. Each delivery of new functionality creates a new opportunity for feedback, which in turn drives course corrections due to clarification of the customer's intent or actual changes to the requirements. Short iterations produce more feedback opportunities and allow more course corrections, so developers can home in on what the customer wants.


Figure 5: Course Corrections

Using Short Iterations

Several techniques aid in the implementation of a process using short iterations:

  • Work requirements in priority order. High priority requirements typically are well-defined, easiest to implement, and provide the most functionality in a short period of time.
  • Define a non-negotiable end date for the iteration; sacrifice functionality to keep the team on schedule. End dates focus the team on delivering the required functionality. Even if some features are not completed, delivering those that are ensures customers get new functionality on a regular basis.
  • Mark the end of the iteration with a demo and an official handoff to the customer. Demos foster pride in the product by allowing the team to show off its work.
  • Deliver the product to the customer, whether it's a ready-for-deployment application or an interim release. Getting the product in the customer's hands for in-depth evaluation is the best way to generate feedback.

Hot Tip

Teams struggling to complete iterations successfully are often tempted to lengthen their iterations; however, longer iterations tend to hide problems. Instead, struggling teams should reduce the iteration length, which reduces the scope, focuses the team on a smaller goal, and brings impediments to the surface more quickly so they can be resolved.

Lean Principles

  • Eliminate Waste: Frequent feedback and course corrections keep effort from being spent on features the customer doesn't want.
  • Deliver Fast: New, functional software is delivered to the customer in closely-spaced intervals.


Customer Participation

Customer participation in traditional projects typically is limited to requirements specification at the beginning of the project and acceptance testing at the end. Collaboration between customers and developers in the intervening time is limited, typically consisting of status reports and the occasional design review.

Lean Software Development approaches customer participation as an on-going activity spanning the entire development effort. Customers write requirements and developers produce functional software from those requirements. Customers provide feedback on the software and developers act on that feedback, ensuring developers are producing what the customer really wants.

Involve the Customer

Key to establishing effective customer collaboration is involving the customer in the entire development process, not just at the beginning and end. Engaging the customer, reporting status, and providing a feedback path all help keep the customer involved.

  • Engage the customer by having them write and prioritize the requirements. Customers get a sense of ownership, and they can direct the course of development.
  • Have the customers write the acceptance tests (or at least specify their content), and, if possible, run the tests as well. Involvement in testing the product allows customers to specify exactly what it means to satisfy a requirement.
  • Provide useful, easily accessible status, such as a list of the requirements in progress and the status of each. Include status on problems affecting development to avoid surprises.
  • Provide access to the product so the customer can see for themselves how it works, and provide a simple, direct feedback path so customers can input feedback easily.


Collaborating directly with the customer is necessary for developers to refine the requirements and understand exactly what the customer wants.

  • Designate a customer representative. The representative writes and/or collects requirements and prioritizes them. The representative clarifies requirements for developers.
  • Schedule face-to-face time with the customer. At the very least, include a demo at the end of each iteration.

Hot Tip

Actual customers make the best customer representatives, but when customer representatives are not available a customer proxy can fill the role. A customer proxy should be from the development team's organization and must have a good understanding of the customer's needs and business environment.

Lean Principles

  • Create Knowledge: Through collaboration, requirements are discovered and refined over time.
  • Defer Commitment: Involving customers throughout the process eliminates the need to make decisions up front.


Conclusion

Most discussions of Lean Software Development don't define specific practices for implementing the process, and the large number of Agile methodologies to choose from can leave newcomers confused and uncertain where to start. The specific practices outlined here provide a step-by-step approach to implementing a Lean Software Development process. Adopt one, several, or all of the practices and take your first step into the world of Lean Software Development.


1. Implementing Lean Software Development: From Concept to Cash, Poppendieck/Poppendieck, Addison-Wesley Professional, 2006.

2. Moving Toward Stillness, Lowry, Tuttle Publishing, 2000.

3. Emergent Design: The Evolutionary Nature of Professional Software Development, Bain, Addison-Wesley Professional, 2008.

Some of the concepts and material in this Refcard were adapted from The Art of Lean Software Development, Hibbs/Jewett/Sullivan, O'Reilly Media, 2009.

About The Authors


Curt Hibbs

Curt Hibbs co-leads the Boeing team responsible for the adoption of Lean and Agile software engineering practices across Boeing's Defense, Space & Security business unit. He has been a software engineer for 30+ years, and during that time he has done just about everything related to developing software products, from working for WordStar, Hewlett Packard, the C.I.A., and more, to being the CTO of several startups. He has worked for Boeing since 2003.


Steve Jewett

Steve Jewett is a software developer with the Boeing Company, where he is involved in the development of cognitive decision support systems. Over a 25-year career he has developed software for automated test equipment, weapon/aircraft integration, embedded systems, and desktop and web applications. He currently leads an agile software development team and works to promote Lean-Agile software development at Boeing.


Mike Sullivan

Mike Sullivan has over 6 years of experience teaching at the university level, and has spent the last 5+ years working with software teams in small companies and large corporations to drive valuable solutions and improve team dynamics. He is currently working in a small research team within a large corporation, implementing Lean techniques to improve the software his team delivers.

Recommended Book

Art of Lean Software Development

This succinct book explains how you can apply the practices of Lean software development to dramatically increase productivity and quality. The Art of Lean Software Development is ideal for busy people who want to improve the development process but can't afford the disruption of a sudden and complete transformation. The Lean approach has been yielding dramatic results for decades, and with this book, you can make incremental changes that will produce immediate benefits.


Eclipse Plug-in Development

By James Sugrue

19,139 Downloads · Refcard 70 of 204


The Essential Eclipse Plug-in Cheat Sheet

Eclipse consists of many plug-ins, which are bundles of code that provide some functionality to the entire system. Plug-ins contribute functionality to the system by implementing pre-defined extension points. You can provide extension points in your own plug-in to allow other plug-ins to extend your functionality. This DZone Refcard takes the user through the process of Eclipse-based plug-in development, from plug-in basics to the OSGi manifest, the plug-in manifest, hot tips, and more.

About Eclipse Plug-ins

The Eclipse platform consists of many plug-ins, which are bundles of code that provide some functionality to the entire system. Plug-ins contribute functionality to the system by implementing pre-defined extension points. You can provide extension points in your own plug-in to allow other plug-ins to extend your functionality.

Hot Tip

Eclipse has a dedicated perspective for the development of plug-ins, the PDE (Plug-in Development Environment). You can download Eclipse for RCP/Plug-in Developers, with all you need to get started, from http://www.eclipse.org.

How Plug-ins Work

A plug-in describes itself to the system using an OSGi manifest (MANIFEST.MF) file and a plug-in manifest (plugin.xml) file. The Eclipse platform maintains a registry of installed plug-ins and the function they provide. As Equinox, the OSGi runtime, is at the core of Eclipse, you can think of a plug-in as an OSGi bundle. The main difference between plug-ins and bundles is that plug-ins use extension points for interaction between bundles.

Plug-ins take a lazy-loading approach, where they can be installed and available on the registry but will not be activated until the user requests some functionality residing in the plug-in.

The OSGi Manifest

MANIFEST.MF, usually located in the META-INF directory, deals with the runtime details for your plug-in. Editing of the manifest can be done through the editor provided, or directly in the MANIFEST.MF tab. The following is an example of one such manifest for a simple plug-in:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Myplugin
Bundle-SymbolicName: com.dzone.tests.myplugin
Bundle-Version: 1.0.0.qualifier
Bundle-Activator: com.dzone.tests.myplugin.Activator
Require-Bundle: org.eclipse.ui, org.eclipse.core.runtime
Bundle-ActivationPolicy: lazy
Bundle-RequiredExecutionEnvironment: JavaSE-1.6

The Eclipse OSGi Framework implements the complete OSGi R4.1 Framework specification and all of the Core Framework services. Here we list the most common manifest headers and directives.

  • Manifest-Version: Manifest versioning information for your own records. Example: 1.0
  • Bundle-ManifestVersion: The version of the manifest syntax in which the bundle is written. If using syntax from OSGi Release 4 or later, you must specify a bundle manifest version; the version defined by OSGi Release 4 is '2'. Example: 2
  • Bundle-Name: Human-readable name for the plug-in. Example: MyPlugin
  • Bundle-SymbolicName: A unique name for this plug-in, usually following the package naming convention. Example: com.dzone.tests.myplugin
  • Bundle-Version: The version of this plug-in. This should follow the typical three-number versioning format of <major version>.<minor version>.<revision>, optionally appended with an alphanumeric qualifier. Example: 1.0.1.alpha
  • Bundle-Activator: The activator, or plug-in class, that controls this plug-in. Example: com.dzone.tests.myplugin.Activator
  • Bundle-Vendor: Human-readable string for the plug-in provider. Example: DZone
  • Bundle-ClassPath: A comma-separated list of directories and JAR files used to extend this bundle's functionality. Example: lib/junit.jar,lib/xerces.jar
  • Require-Bundle: A comma-separated list of symbolic names of other bundles required by this plug-in. Example: org.eclipse.ui,org.eclipse.core.runtime
  • Bundle-ActivationPolicy: Identifies the bundle's activation policy. This replaces the deprecated Eclipse-LazyStart header. Example: lazy
  • Bundle-RequiredExecutionEnvironment: Identifies the required execution environment for the bundle. The platform may run the bundle if any of the execution environments named in this header match one of the environments it implements. Example: JavaSE-1.6
  • Export-Package: A list of the packages that this bundle provides for export to other plug-ins. Example: com.dzone.tests.api

Plug-in Runtime

The Require-Bundle manifest header has some extra functionality to help you manage your runtime dependencies. Bundles can be marked as optional dependencies by annotating the entry with ;resolution:=optional.

You can also manage which version of a bundle you depend on using the ;bundle-version="<range>" attribute, where the range can specify minimum and maximum versions. The syntax of this range value is illustrated through these examples:

Example Meaning
3.5 Version 3.5 or greater (a single version is a minimum, not an exact match)
[3.5, 3.5.1] At least version 3.5, up to and including 3.5.1
[3.0, 4.0) Version 3.0 or greater, but less than 4.0

Additional Eclipse Bundle Headers

Eclipse provides a number of additional bundle headers and directives. These extra headers are not part of the OSGi R4.1 specification, but allow developers to use additional Eclipse OSGi Framework functionality.

  • x-internal: A directive, applied to Export-Package entries, for managing the access restriction of exported packages. The default value is false; when an internal package is exported with x-internal set to true, the Eclipse PDE discourages its use by other plug-ins. Example (illustrative): Export-Package: com.dzone.tests.internal;x-internal:=true
  • x-friends: Similar to x-internal, but allows the bundles it names to use the exported packages; other bundles are discouraged. The x-internal option takes precedence over x-friends. Example (illustrative): Export-Package: com.dzone.tests.internal;x-friends:="com.dzone.tests.other"
  • Eclipse-PlatformFilter: Allows you to set particular rules for your bundle before it can start, matching on osgi.nl (language), osgi.os (operating system), osgi.arch (processor architecture), and osgi.ws (windowing system). Example: Eclipse-PlatformFilter: (& (osgi.ws=win32) (osgi.os=win32) (osgi.arch=x86))

All entries in the manifest can be internationalized by moving them to a separate plugin.properties file.

The Plug-in Manifest

With the MANIFEST.MF file looking after the runtime dependencies, plugin.xml deals with the plug-in extensions and extension points.

An extension allows you to extend the functionality of another plug-in in your system. An extension can be added through the plug-in editor's Extensions tab, or to your plugin.xml.

<extension point='org.eclipse.ui.preferencePages'>
   <!-- the id and class values below are illustrative -->
   <page
         id='com.dzone.tests.myplugin.preferencePage'
         class='com.dzone.tests.myplugin.SamplePreferencePage'
         name='Sample Preferences'>
   </page>
</extension>

Each extension point has an XML schema which specifies the elements and attributes that make up the extension. As you can see in the listing above, each extension point has a unique identifier. The <page> element above is specified in the XML schema for the org.eclipse.ui.preferencePages extension point.

Hot Tip

Plug-ins and extension points are expected to have unique identifiers that follow the Java package naming pattern.

You can also define your own extension points, and we will detail that process in a later section.

Plug-in Model

The plug-in class is a representation of your plug-in running in the Eclipse platform. A plug-in class in Eclipse must extend org.eclipse.core.runtime.Plugin, which is an abstract class that provides generic facilities for managing plug-ins. When using the project wizard in the PDE, this class typically gets assigned Activator as its default name. Whatever name you assign to this plug-in class, it must be the same as that given in the Bundle-Activator header of your MANIFEST.MF.

The class has start and stop methods that refer to the BundleContext and are provided by the BundleActivator interface. These methods allow you to deal with the plug-in's lifecycle, so that you can do both initialization and cleanup activities at the appropriate times. When overriding these methods, be sure to always call the superclass implementations.
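As a concrete sketch of this wiring (the bundle and class names are illustrative, not a required naming scheme), the relevant MANIFEST.MF entries look something like:

```
Bundle-SymbolicName: com.myplugin;singleton:=true
Bundle-Activator: com.myplugin.Activator
Bundle-ActivationPolicy: lazy
```

The Bundle-Activator value must be the fully qualified name of your plug-in class.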

Hot Tip

Plug-ins that contribute to the UI will have activators that extend AbstractUIPlugin, while non-UI plug-ins will extend Plugin.

Bundle Context

A BundleContext is associated with your plug-in when it is started. As well as providing information about the plug-in, the BundleContext can provide information about other plug-ins in the system. By providing a listener to BundleEvent, you can monitor the lifecycle of any other plug-in.


The terms Bundle and Plug-in may be used interchangeably when discussing Eclipse. The Bundle class provides the OSGi unit of modularity. There are six states associated with bundles:

State Meaning
UNINSTALLED The bundle is uninstalled and not available.
INSTALLED A bundle is in the INSTALLED state when it has been installed in the Framework but is not, or cannot be, resolved.
RESOLVED A bundle is in the RESOLVED state when the Framework has successfully resolved the bundle's code dependencies. Before a plug-in can be started, it must first be in the RESOLVED state.
STARTING A bundle is in the STARTING state when its start method is active. If the bundle has a lazy activation policy, the bundle may remain in this state until activation is triggered.
STOPPING A bundle is in the STOPPING state when its stop method is active. When the BundleActivator.stop method completes, the bundle is stopped and moves to the RESOLVED state.
ACTIVE A bundle is in the ACTIVE state when it has been successfully started and activated.

Lazy Loading

Plug-ins are normally set to load lazily, so that the code isn't loaded into memory until it is required. This is normally a good thing as you don't want to affect the startup time of Eclipse. If you do require your plug-in to start up and load when Eclipse launches, you can use the org.eclipse.ui.startup extension point.

<extension point="org.eclipse.ui.startup">
   <startup class="com.myplugin.StartupClass"/>
</extension>

The startup class listed above must implement the org.eclipse.ui.IStartup interface, which provides an earlyStartup() method. The method is called in a separate thread after the workbench initializes.

Extension Points

The Eclipse platform provides a number of extension points that you can hook into to provide additional functionality. The concept behind an extension point is that a class provides some extendable behavior and publishes this behavior as an extension point. In order to run this code, the extension requires a host plug-in, in this case your own plug-in.

In your plugin.xml you take this extension point and provide extra information to help it run. You will usually need to provide some class that implements a particular interface in order to do this.

Here we will run through some useful extension points in the Eclipse platform. Note that, to make some of these available to your plug-in, you will usually need to add dependencies.

Extension Point Meaning
org.eclipse.core.runtime.preferences Allows plug-ins to use the Eclipse preferences mechanism, including the setting of default preference values.
org.eclipse.core.runtime.applications A plug-in that wishes to use the platform but control all aspects of its execution is an application.
org.eclipse.core.resources.builders Useful for IDE builders who wish to provide an incremental project builder that processes a set of resource changes.
org.eclipse.core.resources.markers Markers are used to tag resources with information; a marker can then be surfaced in the Problems view.
org.eclipse.ui.activities The activity extension point allows plug-in contributions to be filtered from users until they wish to use them.
org.eclipse.ui.editors Allows the addition of new editors to the workbench, which can be tied to particular file extension types.
org.eclipse.ui.intro When Eclipse is first started, the welcome page, or intro, is displayed. This extension point allows contributions to the welcome page.
org.eclipse.ui.menus Allows custom menus to be added to the workbench, either in the main menu, toolbar or popup menus, through the locationURI attribute.
org.eclipse.ui.perspectives Allows the addition of a perspective factory to the workbench, defining a particular layout of views and editors.
org.eclipse.ui.propertyPages Adds a property page for objects of a given type.
org.eclipse.ui.themes Allows customization of the user interface, overriding the default colors and fonts.
org.eclipse.ui.views Provides the ability to add views to the workbench.
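As a concrete illustration, a minimal contribution to the org.eclipse.ui.views extension point might look like the following (the id, name, and class are illustrative):

```xml
<extension point="org.eclipse.ui.views">
   <view id="com.myplugin.views.sampleView"
         name="Sample View"
         class="com.myplugin.views.SampleView"/>
</extension>
```

The class named here would implement the view's behavior; the workbench instantiates it when the view is opened.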

Creating your own extension points

As well as being a user of extension points, a plug-in can provide its own extensions for other plug-ins. Extension points allow loose coupling of functionality: your plug-in exposes a set of interfaces and an extension point definition for others to use.

Extension Point Definition

You can create your extension point through the plugin.xml file, or through the Add button in the Extension Points tab of the plug-in editor.

For identifying your extension point, you need to provide a unique identifier and a human-readable name. At this point you can also point to a schema file and edit it afterwards. An extension point schema must have .exsd as its suffix.
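For instance, an extension point declaration in plugin.xml might look like this (the identifier, name, and schema path are illustrative):

```xml
<extension-point id="sampleExtension"
                 name="Sample Extension"
                 schema="schema/sampleExtension.exsd"/>
```

Other plug-ins would then contribute to it with an <extension point="..."> element whose point attribute is your plug-in id plus this id.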

Figure 1: The New Extension Point Wizard

Defining an Extension Point Schema

The PDE provides an editor for defining your .exsd file, consisting of three tabs. First, the Overview tab allows you to provide documentation and examples for your extension point. This is an essential step if you want your extension point to be adopted. Next, the Definition tab presents a graphical way to define your schema, while the Source tab allows editing of the .exsd XML definition.


Figure 2: The Extension Point editor

When creating your extension point, you will first want to create one or more elements with attributes that will be used. Each extension point attribute has a number of associated properties:

Attribute Use
Name The name of the extension point attribute.
Deprecated Whether the attribute is deprecated or not.
Use Whether the attribute is optional, required or default. Default allows you to specify a value for the attribute if it hasn't been used.
Type The available types are Boolean, String, Java, Resource and Identifier. While Boolean and String are self-explanatory, Resource should be used if the attribute is a file. Identifier provides a reference id for the extension point.
Extends If the type is Java, this must be the name of the class that the attribute must extend.
Implements If the type is Java, this must be the name of the interface that the attribute must implement.
Translatable If the type is String, this Boolean value indicates whether the attribute should be translated.
Restrictions If the type is String, this can be used to limit the choice of value to a list of strings.
Description Attribute documentation.
References If the type is Identifier, this provides the id of the extension point that you want to reference. This allows implementers of the extension point to easily find the id, without having to look through the plug-in registry.

Once you have created your elements and attributes for the extension point, the element can be added to a sequence for this extension. You can control the multiplicity of your extension here.

The mapping of XML to the extension point declaration is simple: for users of your extension point, an XML element will always appear in the left-hand tree as part of the extension declaration, while XML attributes appear as extension point attributes.

The Code Behind an Extension Point

With the extension point defined, its producer needs to provide an implementation that makes use of any extension point contributions.

To get a list of all the implementers of your extension point, you can query the extension registry as follows, providing your extension point identifier as the parameter (the identifier shown is illustrative):

IConfigurationElement[] config = Platform.getExtensionRegistry()
    .getConfigurationElementsFor("com.myplugin.sampleExtension");

To use a contributed implementation, you can create the executable extension from its IConfigurationElement:

final Object o = config[i].createExecutableExtension("class");

Useful Tools

The PDE plug-in editor provides a number of useful utilities for working with your plug-ins. The Dependencies tab in particular is essential for organizing your runtime.

The Dependencies Tab

From here you can investigate the plug-in dependency hierarchy, starting with your plug-in as the root. You can also see which plug-ins depend on your own, as well as find any unused dependencies. This can be useful if you previously added a dependency to use an extension point but have found that it is no longer required. Finally, and most importantly, the tab provides a utility for detecting cyclic dependencies.

Another useful tool for plug-in development is the Plug-in Registry view, which can be accessed from Window > Show View > Other... > Plug-in Development. This view displays all the plug-ins that are currently available in your Eclipse installation.

Figure 4: Plug-in Registry

Hot Tip

When launching an application containing your plug-in, use the -consoleLog program argument from the Run Configurations dialog to see output to the system console.


It is recommended to log to a file rather than using System.out. The Activator or plug-in class provides access to the plug-in logging mechanism through the getLog() method, returning the org.eclipse.core.runtime.ILog interface.

Each log entry using this framework is of type IStatus. Any CoreExceptions thrown in Eclipse have an associated IStatus object. An implementation of this interface, Status, is available for use. There is also a MultiStatus class which allows multiple statuses to be logged at once.

Distributing Your Plug-in

Since Eclipse 3.4, p2 has been the mechanism used to provision your application with new or updated plug-ins. For build managers who have used the Update Site mechanism before, nothing needs to change.

To create an update site you can use the wizard provided to create a new site.xml file. Using the Software Updates menu, users can point to your update site on the web and download the plug-in.

By adding some extra functionality over this simple implementation, you can leverage p2 to add extra metadata to your update site, which will make the installation experience faster for end users.

p2 Update Site Publisher

The UpdateSitePublisher application is provided by p2 to generate the artifacts.xml and content.xml files for a standard update site. You can run this application in headless mode using org.eclipse.equinox.p2.publisher.UpdateSitePublisher. The following example, taken from the p2 wiki, shows how to run the application.

java -jar <targetProductFolder>/plugins/org.eclipse.equinox.launcher_<version>.jar
 -application org.eclipse.equinox.p2.publisher.UpdateSitePublisher
 -metadataRepository file:/<some location>/repository
 -artifactRepository file:/<some location>/repository
 -source /<location with a site.xml>
 -configs gtk.linux.x86

Read more about p2 at http://wiki.eclipse.org/Equinox/p2

Enhancing Your Plug-in

When developing your plug-in, you should be aware of the wide variety of projects in the Eclipse ecosystem that can make your development easier and faster. This section gives an overview of just a few of the useful projects that exist and explains how they can be used in your project.

Eclipse Modeling Project

The Eclipse Modeling Project provides a large set of tools for model-driven development. The most popular part of this project is the Eclipse Modeling Framework (EMF). Using this technology, you can define a model in the Ecore format and generate Java code to represent, serialize and de-serialize the model. Other tools within the Modeling Project utilize EMF to provide more specialized frameworks for developers.

The Connected Data Objects (CDO) project provides a three-tier architecture for distributed and shared models.

The Graphical Modeling Framework (GMF) allows you to generate graphical editors for your model based on EMF and the Graphical Editing Framework (GEF). For developers who want to provide a textual editor for their own language or DSL, Xtext provides an EBNF grammar language and generates a parser, meta-model and Eclipse text editor from this input.

Eclipse Communication Framework

If your plug-in requires any communication functionality, the ECF project is the first place to look. ECF consists of a number of bundles that expose various communication APIs, ranging from instant messaging, dynamic service discovery and file transfer to remote services and distributed OSGi. Real-time shared editing functionality is also available in the framework, allowing you to collaborate remotely on anything that you are editing within your plug-in's environment.

Business Intelligence and Reporting Tools

BIRT is an open source reporting system based on Eclipse. BIRT provides both programmatic access to report creation, as well as functionality to create your own report template within the Eclipse IDE. While BIRT allows you to generate reports in file formats such as PDF, it is also possible to use BIRT on an application server to serve reports through a web browser.


As we have described in this card, Equinox is the Eclipse implementation of the OSGi R4 core framework specification, and provides the actual runtime for all your plug-ins. As well as running your plug-ins on the desktop in an instance of Eclipse, you can run Equinox on a server, making your plug-ins available to browsers as well as the desktop.

Rich Ajax Platform

With the emergence of the web as a real platform for rich applications, the Rich Ajax Platform allows you to take a standard RCP project, and with some minor modifications, make it deployable to the web. This idea of single-sourcing is key to the RAP project, and reduces the burden for developers to make an application ready for either the desktop or the web.

The same programming model is used, while qooxdoo is used for the client side presentation of your SWT and JFace widgets.

About The Author


James Sugrue

James Sugrue is a software architect at Pilz Ireland, a company using many Eclipse technologies. James is also editor at both EclipseZone and Javalobby. Currently he is working on TweetHub, a Twitter client based on RCP and ECF. James has also written previous Refcardz covering EMF and Eclipse RCP.

Zone Leader: EclipseZone, Javalobby

Twitter: @dzonejames

Recommended Books


This book presents detailed, practical coverage of every aspect of plug-in development, with specific solutions for the challenges you're most likely to encounter.

Eclipse Rich Client Platform

In Eclipse Rich Client Platform, two leaders of the Eclipse RCP project show exactly how to leverage Eclipse for rapid, efficient, cross-platform desktop development.


Getting Started with ASP.NET MVC 1.0

By Simone Chiaretta and Keyvan Nayyeri

17,078 Downloads · Refcard 69 of 204 (see them all)


The Essential ASP.NET MVC Cheat Sheet

ASP.NET MVC is Microsoft's framework for building Web applications, and this DZone Refcard will help you to get started with it. First off, you'll learn how to set up your environment to work with ASP.NET MVC and how to create a web application. Then, you'll go deeper into detail and learn about the components of the framework along with the structure of the main API. The Refcard concludes with a sample of a standard operation that developers can do with ASP.NET MVC.


ASP.NET MVC is a new framework for building Web applications, developed by Microsoft. The traditional WebForm abstraction, designed in 2000 to bring a "desktop-like" development experience to the Web, was sometimes getting in the way: it could not provide proper separation of concerns, which made applications difficult to test. A new, alternative framework was therefore built to address the changing requirements of developers, with testability, extensibility and freedom in mind.

This Refcard will first explain how to setup your environment to work with ASP.NET MVC and how to create an ASP.NET MVC Web application. Then it will go deeper in details explaining the various components of the framework and showing the structure of the main API. Finally, it will show a sample of standard operation that developers can do with ASP.NET MVC.


ASP.NET MVC is a new framework, but it is based on the core ASP.NET API: in order to understand and use it, you have to know the basic concepts of ASP.NET. Furthermore, since it doesn't abstract away the "Web" as the traditional WebForm paradigm does, you need to know HTML, CSS and JavaScript to take full advantage of the framework.


To develop a Web site with ASP.NET MVC, all you need is Visual Studio 2008 and the .NET Framework 3.5 SP1. If you are a hobbyist developer, you can use Visual Web Developer 2008 Express Edition, which can be downloaded for free at the URL: http://www.microsoft.com/express/vwd/.

You also need to install the ASP.NET MVC library, which can be downloaded from the official ASP.NET Web site at http://www.asp.net/mvc/download.

You can also download everything you need, the IDE, the library, and also a free version of SQL Server (Express Edition) through the Web Platform Installer, available at: http://www.microsoft.com/web/.

The MVC pattern

As you probably have already guessed from the name, the framework implements the Model View Controller (MVC) pattern.


The UI layer of an application is made up of three components:

MVC Component Description
Model The component responsible for interacting with the data storage system (typically a database) and for implementing the main business logic.
View The component responsible for displaying the data passed to it by the Controller; it renders the user interface of the site.
Controller The component that acts as a bridge between the Model and the View: it loads data based on the request and passes it to the View, or passes data input by the user to the Model.

And the flow of an operation is depicted in the diagram:

  1. The request hits the Controller.
  2. The Controller delegates the execution of “main” operation to the Model.
  3. The Model sends the results back to the Controller.
  4. The Controller formats the data and sends them to the View.
  5. The View takes the data, renders the HTML page, and sends it to the browser that requested it.
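The flow described above is not specific to ASP.NET. As a framework-neutral illustration (all class names here are invented for the example, not ASP.NET MVC API), a minimal sketch in plain Java wires the three components together:

```java
import java.util.List;

// Model: owns data access and business logic
class PostModel {
    List<String> findRecent() { return List.of("Post A", "Post B"); }
}

// View: turns data into markup
class PostView {
    String render(List<String> posts) {
        StringBuilder html = new StringBuilder("<ul>");
        for (String p : posts) html.append("<li>").append(p).append("</li>");
        return html.append("</ul>").toString();
    }
}

// Controller: bridges the two (steps 1-5 of the flow above)
class PostController {
    private final PostModel model = new PostModel();
    private final PostView view = new PostView();

    String handleRequest() {
        List<String> posts = model.findRecent(); // steps 2-3: delegate to the Model
        return view.render(posts);               // steps 4-5: hand data to the View
    }
}

public class MvcSketch {
    public static void main(String[] args) {
        // prints <ul><li>Post A</li><li>Post B</li></ul>
        System.out.println(new PostController().handleRequest());
    }
}
```

The point of the pattern is that each class can be tested in isolation, which is exactly the testability motivation described earlier.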

Build your first application

Starting the development of an ASP.NET MVC application is easy. From Visual Studio, use the "File > New Project" menu command and select the ASP.NET MVC Project template (as shown in the following figure).


Type in the name of the project and press the "OK" button. It will ask you whether you want to create a test project (I suggest choosing Yes), and then it will automatically create a stub ASP.NET MVC Web site with the correct folder structure, which you can later customize for your needs.


As you can see, the components of the application are well-separated in different folders.

Folder Name Contains
/Content Static contents for your site, like CSS and images
/Controllers All the Controllers of the application, one per file
/Models The classes that encapsulate the interaction with the Model
/Scripts The JavaScript files used by your application (by default it contains jQuery)
/Views All the views of the application, in sub-folders that are related one to one with the controllers

The Fundamentals of ASP.NET MVC

One of the main design principles of ASP.NET MVC is “convention over configuration”, which allows components to fit nicely together based on their naming conventions and location inside the project structure.

The following diagram shows how all the pieces of an ASP.NET MVC application fit together based on their naming conventions:



The routing engine is not part of the ASP.NET MVC framework; it is a general component introduced with .NET 3.5 SP1. It is the first component hit by a request coming from the browser. Its purpose is to route all incoming requests to the correct handler and to extract from the URL a set of data that the handler (which, in the case of an ASP.NET MVC Web application, is always the MvcHandler) will use to respond to the request.


To accomplish its task, the routing engine must be configured with rules that tell it how to parse the URL and how to get data out of it. This configuration is specified inside the RegisterRoutes method of the Global.asax file, which is in the root of the ASP.NET MVC Web application.

public static void RegisterRoutes(RouteCollection routes)
{
    routes.MapRoute(
        "Default",                    // Route name
        "{controller}/{action}/{id}", // Route format
        new { controller = "Home", action = "Index", id = "" } // Defaults
    );
}

The snippet above shows the default mapping rule for every ASP.NET MVC application: every URL is mapped to this route, and the first three parts are used to create the data dictionary sent to the handler. The last parameter contains the default values to use when some of the URL tokens cannot be populated. This is required because, by the default convention, the data dictionary sent to the MvcHandler must always contain the controller and action keys.
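The token-matching idea can be mimicked in framework-neutral terms. This plain-Java sketch (not the actual ASP.NET routing engine) splits a URL against the format string and falls back to the defaults for missing tokens:

```java
import java.util.HashMap;
import java.util.Map;

public class RouteSketch {
    // Match a URL against a "{controller}/{action}/{id}" style format,
    // filling in defaults for any tokens the URL does not supply.
    static Map<String, String> match(String format, String url,
                                     Map<String, String> defaults) {
        Map<String, String> data = new HashMap<>(defaults);
        String[] tokens = format.split("/");
        String[] parts = url.split("/");
        for (int i = 0; i < tokens.length && i < parts.length; i++) {
            if (parts[i].isEmpty()) continue; // empty segment: keep the default
            data.put(tokens[i].replaceAll("[{}]", ""), parts[i]);
        }
        return data;
    }

    public static void main(String[] args) {
        Map<String, String> defaults =
            Map.of("controller", "Home", "action", "Index", "id", "");
        // "Posts/Show/5" -> controller=Posts, action=Show, id=5
        System.out.println(match("{controller}/{action}/{id}", "Posts/Show/5", defaults));
        // "" -> all defaults: controller=Home, action=Index, id=""
        System.out.println(match("{controller}/{action}/{id}", "", defaults));
    }
}
```

The real routing engine also supports literal segments and constraints, but the default-fallback behavior is the essential mechanism.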

Examples of other possible route rules:

URL Rule Data Dictionary

URL: /Posts/Show/5
Rule: Format: "{controller}/{action}/{id}"; Defaults: new { controller = "Home", action = "Index", id = "" }
Data Dictionary: Controller = Posts, Action = Show, Id = 5

URL: /archive/2009-10-02/My post
Rule: Format: "/archive/{date}/{title}"; Defaults: new { controller = "Posts", action = "Show" }
Data Dictionary: Controller = Posts, Action = Show, Date = 2009-10-02, Title = My post


ASP.NET MVC, unlike other MVC-based frameworks such as Ruby on Rails (RoR), doesn't enforce a convention for the Model. In this framework, the Model is just the name of the folder where you place all the classes and objects used to interact with the Business Logic and the Data Access Layer. It can be whatever you prefer: proxies for Web services, ADO.NET Entity Framework, NHibernate, or anything that returns the data you have to render through the views.


The controller is the first component of the MVC pattern that comes into action. A controller is simply a class that inherits from the Controller base class, whose name ends with "Controller", and which is located in the Controllers folder of the application folder structure. Using that naming convention, the framework automatically calls the specified controller based on the parameter extracted from the URL.

namespace MyMvcApp.Controllers
{
    public class PageController : Controller
    {
        //Controller contents.
    }
}

The real work, however, is not done by the class itself, but by the methods that live inside it. These are called Action Methods.

Action Method

An action method is nothing but a public method inside a Controller class. It usually returns a result of type ActionResult and accepts an arbitrary number of parameters that contain the data retrieved from the HTTP request. Here is what an action method looks like:

public ActionResult Show(int id)
{
    //Do stuff
    return View();
}

The ViewData is a hash-table used to store the variables that need to be rendered by the view: this object is automatically passed to the view through the ActionResult object returned by the action. Alternatively, you can create your own view model and supply it to the view.

public ActionResult Show(int id)
{
    //Do stuff
    return View(myValue);
}

This second approach is better because it allows you to work with strongly-typed classes instead of hash-tables indexed with string values. This brings compile-time error checking and Intellisense.

Once you have populated the ViewData or your own custom view model with the data needed, you have to instruct the framework on how to send the response back to the client. This is done with the return value of the action, which is an object that is a subclass of ActionResult. There are various types of ActionResult, each with its specific way to return it from the action.

ActionResult Type Method Purpose
ViewResult View() Renders a view whose path is inferred from the current controller and action: /Views/controllerName/actionName.aspx
ViewResult View(viewName) Renders a view whose name is specified by the parameter: /Views/controllerName/viewName.aspx
ViewResult View(model) Renders the view using the default path, also passing a custom View Model that contains the data that needs to be rendered by the view.
PartialViewResult PartialView() Same as View(), but doesn't return a complete HTML page, only a portion of it. Looks for the file at the following path: /Views/controllerName/actionName.ascx
PartialViewResult PartialView(viewName) Renders a partial view whose name is specified by the parameter: /Views/controllerName/viewName.ascx
PartialViewResult PartialView(model) Renders a partial view using the default path, also passing a custom View Model that contains the data that needs to be rendered by the partial view.
RedirectResult Redirect(url) Redirects the client to the URL specified.
RedirectToRouteResult RedirectToAction(actionName) Redirects the client to the action specified. Optionally you can also specify the controller name and an additional list of parameters.
RedirectToRouteResult RedirectToRoute(routeName) Redirects the client to the route specified. Optionally you can specify an additional list of parameters.
ContentResult Content(content) Sends the content specified directly to the client. Optionally you can specify the content type and encoding.
JsonResult Json(data) Serializes the supplied data in JSON format and sends the JSON string to the client.
FileResult File(filename, contentType) Sends the specified file directly to the client. Optionally you can provide a stream or a byte array instead of a physical path.
JavaScriptResult JavaScript(script) Sends the script provided as an external JavaScript file.
EmptyResult new EmptyResult() Doesn't do anything: use this in case you handle the result directly inside the action (not recommended).

Model Binder

Using the ActionResults and the ViewData object (or your custom view model), you can pass data from the action to the view. But how can you pass data from the view (or from the URL) to the action? This is done through the ModelBinder, a component that retrieves values from the request (URL parameters, query string parameters, and form fields) and converts them to action method parameters.

As with everything in ASP.NET MVC, it's driven by conventions: if the action takes an input parameter named Title, the default Model Binder will look for a variable named Title in the URL parameters, in the query string, and among the values supplied as form fields.


The Model Binder works not only with simple values (strings and numbers), but also with composite types, like your own objects (for example, the ubiquitous User object). In this scenario, when the Model Binder sees that an object is composed of other sub-objects, it looks for variables whose names match the names of the properties of the custom type. Here it's worth taking a look at a diagram to make things clear:
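The name-matching convention can be sketched in a framework-neutral way. The following plain-Java sketch (not the actual ASP.NET MVC ModelBinder; the Contact type and bind helper are hypothetical) matches request values to public fields by name:

```java
import java.lang.reflect.Field;
import java.util.Map;

// Illustrative target type, standing in for an action method parameter
class Contact {
    public String Name;
    public String Email;
}

public class BinderSketch {
    // Match request values to public fields by name, as a default binder might.
    static <T> T bind(Class<T> type, Map<String, String> request) throws Exception {
        T instance = type.getDeclaredConstructor().newInstance();
        for (Field f : type.getFields()) {
            String value = request.get(f.getName()); // convention: same name
            if (value != null) f.set(instance, value);
        }
        return instance;
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> form = Map.of("Name", "Ada", "Email", "ada@example.org");
        Contact c = bind(Contact.class, form);
        // prints Ada <ada@example.org>
        System.out.println(c.Name + " <" + c.Email + ">");
    }
}
```

For composite types the real binder recurses with prefixed names (e.g. User.Name), but the matching rule is the same.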



The next and last component is the view. When using the default ViewEngine (the WebFormViewEngine), a view is just an aspx file without code-behind and with a different base class.

Views that are going to render data passed only through the ViewData dictionary have to start with the following Page directive:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
Inherits="System.Web.Mvc.ViewPage" %>

If the view is also going to render the data that has been passed via the custom view model, the Page directive is a bit different, and it also specifies the type of the view model:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
Inherits="System.Web.Mvc.ViewPage<PageViewModel>" %>

You might have noticed that, as with all normal aspx files, you can include a view inside a master page. But unlike traditional Web forms, you cannot use user controls to write your HTML markup: you have to write everything manually. However, this is not entirely true: the framework comes with a set of helper methods to assist with the process of writing HTML markup. You’ll see more in the next section.

Hot Tip

Another thing you have to handle by yourself is the state of the application: there is no ViewState and no Postback.

HTML helper

You probably don't want to go back to writing HTML manually, and neither does Microsoft want you to. To help you write HTML markup, and also to help you easily bind the data passed from the controller to the view, the ASP.NET MVC framework comes with a set of helper methods collectively called HtmlHelpers. They are all methods attached to the Html property of the ViewPage. For example, if you want to write the HTML markup for a textbox, you just need to write:

<%= Html.TextBox("propertyName") %>

This renders an HTML input text tag, using the value of the specified property as the value of the textbox. When looking for the value to write in the textbox, the helper takes into account both ways of sending data to a view: it first looks inside the ViewData hash-table for a key with the specified name, and then inside the custom view model for a property with the given name. This way you don't have to bother assigning values to input fields, which can be a big productivity boost, especially if you have big views with many fields.

Let’s see the HtmlHelpers that you can use in your views:

Helper Purpose
Html.ActionLink(linkText, actionName, …) Renders an HTML link with the text specified, pointing to the URL that represents the action and the other optional parameters specified (controller and parameters). If no optional parameters are specified, the link points to the specified action in the current controller.
Html.RouteLink(linkText, routeValues, …) Renders an HTML link like ActionLink, but using the route values, and optionally the route name, as input.
Html.BeginForm(actionName, …) Renders the opening HTML form tag, setting the action of the form to the URL of the action specified. URL creation works exactly as in the ActionLink method.
Html.EndForm() Renders the form closing tag.
Html.TextBox(name) Renders a form input text box, populating it with the value retrieved from the ViewData or custom view model object. Optionally you can specify a different value for the field, or additional HTML attributes.
Html.TextArea(name, rows, cols, …) Same as TextBox, but renders a textarea of the specified row and column size.
Html.CheckBox(name) Renders a checkbox.
Html.RadioButton(name, value) Renders a radio button with the given name and value, optionally specifying the checked state.
Html.Hidden(name) Renders a form input field of type hidden.
Html.DropDownList(name, selectList) Renders a select HTML element, reading the options from the selectList variable, which is a list of name-value pairs.
Html.ListBox(name, selectList) Same as the DropDownList method, but enables selecting multiple options.
Html.ValidationMessage(modelName, …) Displays a validation message if the specified field contains an error (handled via the ModelState).
Html.ValidationSummary(…) Displays a summary of all the validation messages for the view.
Html.RenderPartial(partialViewName, …) Renders the contents of the specified partial view into the view.

As an alternative to calling the Html.BeginForm and Html.EndForm methods, you can write an HTML form by placing all its elements inside a using block:

<% using(Html.BeginForm("Save")) { %>
<!-- all form elements here -->
<% } %>

To give you a better idea of what a view that includes an editing form looks like, here is a sample of a complete view for editing an address book entry:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
Inherits="System.Web.Mvc.ViewPage<EditContactViewModel>" %>
<% using(Html.BeginForm("Save")) { %>
Name: <%= Html.TextBox("Name") %> <br/>
Surname: <%= Html.TextBox("Surname") %> <br/>
Email: <%= Html.TextBox("Email") %> <br/>
Note: <%= Html.TextArea("Notes", 80, 7, null) %> <br/>
Private <%= Html.CheckBox("IsPrivate") %> <br/>
<input type="submit" value="Save">
<% } %>

T4 Templates

But there is more: bundled with Visual Studio is a template engine (named T4, as in Text Template Transformation Toolkit) that helps automatically generate the HTML of your views based on the ViewModel you want to pass to the view.

The "Add View" dialog allows you to choose the template to use and the class on which the generated view will be based.

Template Name Purpose
Create Generates a form to create a new instance of the item you selected
Details Generates a view that shows all the properties of the item you selected
Edit Generates a form to edit an instance of the item you selected
Empty Generates an empty view, containing only the declaration of the class it's based on
List Generates a view with a list of the items you selected

Add View dialog

These templates mainly iterate over all the properties of the ViewModel class and generate the same code you would probably have written yourself, using the HtmlHelper methods for the input fields and the validation messages.

For example, if you have a view model class with two properties, Title and Description, and you choose the Edit template, the resulting view will be:

<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
Inherits="System.Web.Mvc.ViewPage<IssueTracking.Models.Issue>" %>
<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
  <%= Html.ValidationSummary("Edit was unsuccessful. Please correct the errors and try again.") %>
  <% using (Html.BeginForm()) {%>
    <label for="Title">Title:</label>
    <%= Html.TextBox("Title", Model.Title) %>
    <%= Html.ValidationMessage("Title", "*") %>
    <label for="Description">Description:</label>
    <%= Html.TextArea("Description", Model.Description) %>
    <%= Html.ValidationMessage("Description", "*") %>
    <input type="submit" value="Save" />
  <% } %>
  <%= Html.ActionLink("Back to List", "Index") %>
</asp:Content>


AJAX

The last part of ASP.NET MVC that is important to understand is AJAX, but it's also one of the easiest aspects of the framework.

First, you have to include the script references at the top of the page where you want to enable AJAX (or in a master page if you want to enable it for the whole site):

<script src="/Scripts/MicrosoftAjax.js" type="text/javascript"></script>
<script src="/Scripts/MicrosoftMvcAjax.js" type="text/javascript"></script>

Then you can use the only two methods available in the AjaxHelper: ActionLink and BeginForm.

They do exactly the same thing as their HtmlHelper counterparts, just asynchronously and without reloading the page. To make the AJAX features possible, a new parameter is added to configure how the request and the result should be handled. It's called AjaxOptions and is a class with the following properties:

Parameter Name Purpose
UpdateTargetId The id of the HTML element that will be updated
InsertionMode Where the new content will be inserted:
  • Replace: new content replaces the old content
  • InsertAfter: new content is placed after the current content
  • InsertBefore: new content is placed before the current content
Confirm The question asked to the user to confirm that they want to proceed
OnBegin Name of the JavaScript function to be called before the request starts
OnSuccess Name of the JavaScript function to be called when the request completes successfully
OnFailure Name of the JavaScript function to be called when the request fails
OnComplete Name of the JavaScript function to be called when the request is complete, whether it succeeded or failed
Url The URL to send the request to, if you want to override the URL calculated via the usual actionName and controllerName parameters
LoadingElementId The id of the HTML element that will be made visible during the execution of the request

For example, here is a short snippet of code that shows how to update a list of items using the AJAX flavor of the BeginForm method:

<ul id="types">
<% foreach (var item in Model) { %>
  <li><%= item.Name %></li>
<% } %>
</ul>
<% using(Ajax.BeginForm("Add", "IssueTypes", new AjaxOptions() {
    InsertionMode = InsertionMode.InsertAfter,
    UpdateTargetId = "types",
    OnSuccess = "myJsFunc"
  })) { %>
Type Name: <%= Html.TextBox("Name") %>
<input type="submit" value="Add type" />
<% } %>

The AJAX call is sent to the Add action inside the IssueTypes controller. Once the request succeeds, the result sent by the controller is added after the list items inside the types element, and then the myJsFunc function is executed.

But the ASP.NET MVC library only enables these two methods: if you want more complex interactions, you have to use either the ASP.NET AJAX library or jQuery, which ships as part of the ASP.NET MVC library.

If you want to use the ASP.NET AJAX library, you don't have to do anything, because you already referenced it in order to use the BeginForm method; but if you want to use jQuery, you have to reference it as well:

<script src="/Scripts/jquery-1.3.2.js" type="text/javascript"></script>

One benefit of having the jQuery library as part of the ASP.NET MVC project template is that you gain full IntelliSense support. But there is an extra step to enable it: you have to reference the jQuery script both with the absolute URL (as above), needed by the application, and with a relative URL, needed by the IntelliSense resolution engine. So, in the end, if you want to use jQuery and enable IntelliSense for it, you have to add the following snippet:

<script src="/Scripts/jquery-1.3.2.js" type="text/javascript"></script>
<% if(false) { %>
<script src="../../Scripts/jquery-1.3.2.js" type="text/javascript"></script>
<% } %>

About The Author


Simone Chiaretta

Simone Chiaretta is a software architect and developer who enjoys sharing his development experience and more than 10 years' worth of knowledge on Web development with ASP.NET and other Web technologies. He is currently working as a senior solution developer for Avanade, an international consulting company. He is an ASPInsider, a Microsoft MVP in ASP.NET, a core member of Subtext (a popular open source blogging platform), an active member of the Italian .NET User Group, co-founder of the Italian ALT.NET user group, and a frequent speaker at community events throughout Italy.


Keyvan Nayyeri

Keyvan Nayyeri is a software architect and developer who has a bachelor of science degree in applied mathematics. He was born in Kermanshah, Kurdistan, in 1984. Keyvan's main focus is on Microsoft development technologies and their related technologies. Keyvan has a serious passion for community activities and open source software. He is also a team leader and developer of some prominent .NET Open Source projects, where he tries to learn many things through writing code for special purposes. Keyvan also has received a number of awards and recognition from Microsoft, its partners, and online communities. Some major highlights include Microsoft VSX Insider and Telligent Community Server MVP.

Recommended Book


If you have a background in .NET and ASP.NET and are seeking to learn ASP.NET MVC, then this is the book for you. Relying heavily on MVC concepts, ASP.NET MVC principles, and code to demonstrate the main content, this valuable resource walks you through the necessary components to solve real-world problems.


Getting Started with Oracle Berkeley DB

By Masoud Kalali

11,796 Downloads · Refcard 68 of 204


The Essential Berkeley DB Cheat Sheet

The Oracle Berkeley DB (BDB) family consists of three open source data persistence products that provide developers with fast, reliable, high performance, enterprise ready local databases implemented in the ANSI C and Java programming languages. This DZone Refcard provides a brief introduction to the Oracle Berkeley DB family. The author, Masoud Kalali, then moves on to discuss in depth the Oracle Berkeley DB Java Edition, including the following topics: Transaction Support, Performance Tuning, Backup, and Recovery. This DZone Refcard is perfect for anyone interested in learning more about the Oracle Berkeley DB family and its capabilities.

Getting Started with Oracle Berkeley DB

By Masoud Kalali

About Oracle Berkeley DB

The Oracle Berkeley DB (BDB) family consists of three open source data persistence products which provide developers with fast, reliable, high performance, enterprise ready local databases implemented in the ANSI C and Java programming languages. The BDB family typically stores key-value pairs but is also flexible enough to store complex data models. BDB and BDB Java Edition share the same base API, making it possible to easily switch between the two.

We will briefly review the most important aspects of the Oracle BDB family, then dig deeper into Oracle BDB Java Edition and its exclusive features. On the development side, we discuss the Base API and the Direct Persistence Layer (DPL) API, managing transactions with both the DPL and the Base API, and persisting complex object graphs using the DPL. On the administration side, we discuss backup, recovery, tuning, and the data migration utilities used to move data between different editions and installations.

The BDB Family

Oracle BDB Core Edition

Berkeley DB is written in ANSI C and can be used as a library to access persisted information from within the parent application's address space. Oracle BDB provides interfaces for several programming languages, including ANSI C, Java (through JNI), Perl, PHP, and Python.

Oracle BDB XML Edition

Built on top of the BDB, the BDB XML edition allows us to easily store and retrieve indexed XML documents and to use XQuery to access stored XML documents. It also supports accessing data through the same channels that BDB supports.

BDB Java Edition

BDB Java Edition is a pure Java, high performance, and flexible embeddable database for storing data in a key-value format. It supports transactions, direct persistence of Java objects using EJB 3.0-style annotations, and provides a low level key-value retrieval API as well as an “access as collection” API.

Key Features

Each of the BDB family members supports different feature sets. BDB XML edition enjoys a similar set of base features as the Core BDB. BDB Java edition on the other hand is implemented in a completely different environment with an entirely different set of features and characteristics (See Table 4). The base feature sets are shown in Table 1.

Table 1: Family Feature Sets
Feature Set Description
Data Store (DS) Single writer, multiple reader
Concurrent Data Store (CDS) Multiple writers, multiple snapshot readers
Transactional Data Store (TDS) Full ACID support on top of CDS
High Availability (HA) Replication for fault tolerance; failover recovery support

Table 2 shows how these features are distributed between the different BDB family members.

Table 2: Different Editions' Feature Sets
Edition DS CDS TDS HA
BDB Core Edition ✓ ✓ ✓ ✓
BDB XML Edition ✓ ✓ ✓ ✓
BDB Java Edition ✓ — ✓ ✓
Additional Features

The BDB family of products has several special features and offers a range of unique benefits which are listed in Table 3.

Table 3: Family Features and Benefits
Feature Benefit
Locking High concurrency
Data stored in application-native format Performance, no translation required
Programmatic API, no SQL Performance, flexibility/control
In process, not client-server Performance, no IPC required
Zero administration Low cost of ownership
ACID transactions and recovery Reliability, data integrity
Dual License Open/Closed source distributions
In memory or on disk operation Transacted caching / persisted data store
Similar data access API Easy switch between JE and BDB
Just a set of libraries Easy to deploy and use
Very large databases Virtually no limit on database size

Features unique to BDB Java Edition are listed in Table 4.

Table 4: BDB Java Edition Exclusive Features
Feature Benefit
Fast, indexed BTree Ultra-fast data retrieval
Java EE JTA and JCA support Integration with Java EE application servers
Efficient Direct Persistence Layer EJB 3.0-like annotations to store Java object graphs
Easy Java Collections API Transactional manipulation of the Base API through enhanced Java Collections
Low Level Base API Works with dynamic data schemas
JMX Support Monitorable from within the parent application

These features, along with a common set of features, make the Java edition a potential candidate for use cases such as caching, application data repositories, POJO persistence, queuing/buffering, Web services, SOA, and integration.

Introducing Berkeley DB Java Edition


You can download BDB JE from http://bit.ly/APfJ5. After extracting the archive you'll see several directories with self-describing names. The only file required to be on the class path to compile and run the included code snippets is je-3.3.75.jar (the exact file name may vary), which is located inside the lib directory. Note that BDB JE requires J2SE JDK version 1.5.0_10 or later.

Hot Tip

All editions of Berkeley DB are freely available for download and can be used in open source products which are not distributed to third parties. A commercial license is necessary for using any of the BDB editions in a closed source and packaged product. For more information about licensing visit: http://bit.ly/17pMwZ

Access APIs

BDB JE provides three APIs for accessing persisted data. The Base API provides a simple key-value model for storing and retrieving data. The Direct Persistence Layer (DPL) API lets you persist any Java class with a default constructor into the database and retrieve it using a rich set of data retrieval APIs. Finally, the Collections API extends the well-known Java Collections API with data persistence and transaction support over data access.

Base API sample

The Base API is the simplest way to access data. It stores a key and a value which can be any serializable Java object.

EnvironmentConfig envConfig = new EnvironmentConfig();
envConfig.setAllowCreate(true);
Environment dbEnv = new Environment(new File("/home/masoud/dben"), envConfig);
DatabaseConfig dbconf = new DatabaseConfig();
dbconf.setAllowCreate(true);
dbconf.setSortedDuplicates(false); // no duplicate keys, so a put overwrites (updates)
Database db = dbEnv.openDatabase(null, "SampleDB", dbconf);
DatabaseEntry searchEntry = new DatabaseEntry();
DatabaseEntry dataValue = new DatabaseEntry("data content".getBytes("UTF-8"));
DatabaseEntry keyValue = new DatabaseEntry("key content".getBytes("UTF-8"));
db.put(null, keyValue, dataValue); // inserting an entry

db.get(null, keyValue, searchEntry, LockMode.DEFAULT); // retrieving it
String foundData = new String(searchEntry.getData(), "UTF-8");
dataValue = new DatabaseEntry("updated data content".getBytes("UTF-8"));
db.put(null, keyValue, dataValue); // updating the entry
db.delete(null, keyValue); // delete operation

There are multiple overrides for the Database.put method to prevent duplicate records from being inserted and to prevent record overwrites.

DPL Sample

The DPL sample consists of two parts: the entity class and the entity management class, which handles CRUD operations on the entity class.

Entity Class

@Entity
public class Employee {
  @PrimaryKey
  public String empID;
  public String lastname;
  @SecondaryKey(relate = Relationship.MANY_TO_MANY,
    relatedEntity = Project.class,
    onRelatedEntityDelete = DeleteAction.NULLIFY) // nullify the key when a related Project is deleted
  public Set<Long> projects;
  public Employee() { }
  public Employee(String empID, String lastname, Set<Long> projects) {
    this.empID = empID;
    this.lastname = lastname;
    this.projects = projects;
  }
}

This is a simple POJO with a few annotations that mark it as an entity with a String primary key. For now, ignore the @SecondaryKey annotation; we will discuss it later.

The Data Management Class

EnvironmentConfig envConfig = new EnvironmentConfig();
envConfig.setAllowCreate(true);
Environment dbEnv = new Environment(new File("/home/masoud/dbendpl"), envConfig);
StoreConfig stConf = new StoreConfig();
stConf.setAllowCreate(true);
EntityStore store = new EntityStore(dbEnv, "DPLSample", stConf);
PrimaryIndex<String, Employee> userIndex;
userIndex = store.getPrimaryIndex(String.class, Employee.class);
userIndex.putNoReturn(new Employee("u180", "Doe", null)); // insert
Employee user = userIndex.get("u180"); // retrieve
userIndex.putNoReturn(new Employee("u180", "Locke", null)); // update

These two code snippets show the simplest form of performing CRUD operations without using transactions or complex object relationships.

Sample code description

An Environment provides a unit of encapsulation for one or more databases. Environments correspond to a directory on disk. The Environment is also used to manage and configure resources such as transactions. EnvironmentConfig is used to configure the Environment with options such as transaction configuration, locking, and caching, and to access different types of statistics, including database, lock, and transaction statistics.

One level closer to our application is DatabaseConfig and Database object when we use Base API. When we use DPL these objects are replaced by StoreConfig and EntityStore.

In Base API DatabaseConfig and Database objects provide access to the database and how the database can be accessed.

Configurations like read-only access, record duplication handling, creating in-memory databases, transaction support, etc. are provided through DatabaseConfig.

In DPL StoreConfig and EntityStore objects provide access to object storage and how the object storage can be accessed. Configurations such as read only access, data model mutation, creating in-memory databases, transaction support, etc. are provided through StoreConfig.

The PrimaryIndex class provides the primary storage and access methods for the instances of a particular entity class. There are multiple overrides for the PrimaryIndex.put method to prevent duplicate entity insertion and provide entity overwrite prevention.

Hot Tip

When closing an Environment or Database, or when committing a Transaction in a multi-threaded application, we should ensure that no thread still has in-progress tasks.

BDB Java edition environment anatomy

A BDB JE database consists of one or more log files which are placed inside the environment directory.

The log files are named NNNNNNNN.jdb, where NNNNNNNN is an 8-digit hexadecimal number that increases by 1 (starting from 00000000) for each log file written to disk. BDB JE rolls over to the next file when the current file reaches a predefined, configurable size, which defaults to 10MB.
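As a quick illustration of that naming scheme (a plain-Java sketch, not JE code), the name of the Nth log file is just the sequence number rendered as 8 zero-padded hexadecimal digits:

```java
public class LogFileNames {
    // Produce a JE-style log file name for the given sequence number,
    // e.g. 0 -> "00000000.jdb", 255 -> "000000ff.jdb".
    static String logFileName(long fileNumber) {
        return String.format("%08x.jdb", fileNumber);
    }

    public static void main(String[] args) {
        System.out.println(logFileName(0));   // 00000000.jdb
        System.out.println(logFileName(255)); // 000000ff.jdb
    }
}
```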

A BDB database can be thought of as analogous to a table in an RDBMS. With the Base API we directly use the database we need to access, while with the DPL we use an EntityStore, which may interact with multiple databases under the hood.

Each BDB environment can contain tens of databases, and all of these databases are stored in a single set of log files (there are no separate log files per database). Figure 1 shows the concept visually.

Hot Tip

To create an in-memory database we can use DatabaseConfig.setTemporary(true) or StoreConfig.setTemporary(true) to get an instance with no data persisted beyond the current session.


Figure 1: BDB JE environment and log files.

Hot Tip

The environment path should point to an already existing directory; otherwise the application will face an exception. When we create an Environment object for the first time, the necessary files are created inside that directory.

Choosing an Access API

Table 5 shows the Base API's characteristics and benefits. The key-value access model provides the most flexibility.

Table 5: Base API Features
Key-value storage and retrieval; the value can be anything
Cursor API to traverse a dataset forward and backward
JCA (Java EE Connector Architecture) support
JMX (Java Management eXtensions) support
Table 6: DPL API Features
Type-safe access to persisted objects
Updating classes by adding new fields is supported
Persistent class fields can be private, package-private, protected, or public
Automatic and extendable data binding between objects and the underlying storage
Index fields can be accessed using standard java.util collections
Java annotations are used to define metadata such as relations between objects
Field refactoring is supported without changing the stored data (called mutation)

Tables 5 and 6 list the features that mostly determine when we should use which API, and Table 7 lists possible use cases for each API.

Table 7: Which API is suitable for your case
Use case characteristics Suitable API
Data model is highly dynamic and changing Base API
Data model has complex object model and relationships DPL
Need application portability between Java Edition and Core Edition DPL, Base API

Transaction Support

Transaction support is an inseparable part of enterprise software development. BDB JE supports transactions and provides concurrency and record-level locking. To add transaction support to our DPL sample code, we can introduce the following changes:

TransactionConfig txConf = new TransactionConfig();
Transaction tx = dbEnv.beginTransaction(null, txConf);
userIndex.putNoReturn(tx, new Employee("u180", "Doe", null)); // transactional insert
tx.commit();

The simplicity of BDB JE transaction support makes it very suitable for transactional cache systems. The isolation level, deferred or manual synchronization of transactional data to disk (durability), the replication policy, and the transaction lock request and lifetime timeouts can all be configured using the Transaction and TransactionConfig objects.

Hot Tip

Environment, Database, and EntityStore are thread-safe, meaning that we can use them from multiple threads without manual synchronization.

Transaction support in Base API is a bit different, as in Base API we directly deal with databases while in DPL we deal with environment and EntityStore objects. The following changes will allow transaction support in Base API.

TransactionConfig txConf = new TransactionConfig();
Transaction tx = dbEnv.beginTransaction(null, txConf);
Database db = dbEnv.openDatabase(tx, "SampleDB", dbconf);
db.put(tx, keyValue, dataValue); // inserting an entry
tx.commit();

More transaction-related features are demonstrated in this snippet. The Environment, Database, EntityStore, etc. configuration is omitted from these snippets for the sake of simplicity.

Hot Tip

Once a transaction is committed, the transaction handle is no longer valid and a new transaction object is required for further transactional activities.

Persisting Complex Object Graph using DPL

For this section we leave the Base API alone and focus on using the DPL for complex object graphs, introducing secondary indices and many-to-many mappings.

Let’s look at some important annotations that we have for defining the object model.

Table 8: BDB JE annotations
Annotation Description
@Entity Declares an entity class: a class with a primary index and, optionally, one or more secondary indices.
@PrimaryKey Defines the class's primary key; must be used exactly once in every entity class.
@SecondaryKey Declares a specific data member in an entity class to be a secondary key for that object. This annotation is optional and can be used multiple times in an entity class.
@Persistent Declares a persistent class, that is, a non-entity class used by an entity class.
@NotTransient Defines a field as persistent even when it is declared with the transient keyword.
@NotPersistent Defines a field as non-persistent even when it is not declared with the transient keyword.
@KeyField Indicates the sort position of a key field in a composite key class when the Comparable interface is not implemented. The KeyField integer element specifies the sort order of this field within the set of fields in the composite key.
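To see why the sort position matters, here is a plain-Java sketch (not JE code) of a two-part composite key whose ordering mirrors what `@KeyField(1)` and `@KeyField(2)` would declare; the `department` and `employeeId` fields are hypothetical examples:

```java
public class CompositeKey implements Comparable<CompositeKey> {
    // In the DPL these fields would be annotated @KeyField(1) and @KeyField(2);
    // here we encode the same ordering by hand in compareTo.
    final String department; // sorts first
    final long employeeId;   // sorts second

    CompositeKey(String department, long employeeId) {
        this.department = department;
        this.employeeId = employeeId;
    }

    @Override
    public int compareTo(CompositeKey other) {
        int byDept = this.department.compareTo(other.department);
        if (byDept != 0) {
            return byDept; // the field in position 1 dominates the ordering
        }
        return Long.compare(this.employeeId, other.employeeId);
    }
}
```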

We used two of these annotations in practice and you saw @SecondaryKey in the Employee class. Now we are going to see how the @SecondaryKey annotation can be used. Let’s create the Project entity which the Employee class has a many-to-many relation with.

@Entity
public class Project {
  public String projName;
  @PrimaryKey(sequence = "ID")
  public long projID;
  public Project() { }
  public Project(String projName) {
    this.projName = projName;
  }
}

The @PrimaryKey annotation has a string element to define the name of a sequence from which we can assign primary key values automatically. The primary key field type must be numerical and a named sequence can be used for multiple entities.

Now let’s see how we can store and retrieve an employee with its related project objects.

PrimaryIndex<String, Employee> empByID;
PrimaryIndex<Long, Project> projByID;
empByID = store.getPrimaryIndex(String.class, Employee.class);
projByID = store.getPrimaryIndex(Long.class, Project.class);
SecondaryIndex<Long, String, Employee> empsByProject;
empsByProject = store.getSecondaryIndex(empByID, Long.class, "projects");
Set<Long> projects = new HashSet<Long>();
Project proj = new Project("Develop FX");
projByID.putNoReturn(proj); // the sequence assigns proj.projID
projects.add(proj.projID);
proj = new Project("Develop WS");
projByID.putNoReturn(proj);
projects.add(proj.projID);
empByID.putNoReturn(new Employee("u146", "Shephard", projects)); // insert
empByID.putNoReturn(new Employee("u144", "Locke", projects)); // insert
EntityIndex<String, Employee> projs = empsByProject.subIndex(proj.projID);
EntityCursor<Employee> pcur = projs.entities();
for (Employee entity : pcur) {
   // process the employees related to this project
}
pcur.close();
EntityCursor<Employee> emplRange = empByID.entities("u146", true, "u148", true);
for (Employee entity : emplRange) {
   // process the employees in this primary key range
}
emplRange.close();

The Environment and EntityStore definitions are omitted. The SecondaryIndex provides the primary methods for retrieving the objects related to a particular secondary key. It can be used to retrieve related objects through a traversable cursor, and we can also query for a specific range of objects by primary key.

Table 9: Supported Object Relation
Relation Description
ONE_TO_ONE A single entity is related to a single secondary key value.
ONE_TO_MANY A single entity is related to one or more secondary key values.
MANY_TO_ONE One or more entities are related to a single secondary key value.
MANY_TO_MANY One or more entities are related to one or more secondary key values.

A SecondaryIndex can be used to traverse over the collection of secondary key’s values to retrieve the secondary objects.

Hot Tip

Multiple processes can open a database as long as only one process opens it in read-write mode and the other processes open it in read-only mode. The read-only processes get an open-time snapshot of the database and won't see any changes coming from other processes.

BDB JE Collections API

The only difference between the BDB JE Collections API and the classic collections is that with the BDB JE Collections API we access persisted objects instead of the in-memory objects we usually access through the classic collection APIs.

The Collections API Characteristics
Provides implementations of Map, SortedMap, Set, SortedSet, and Iterator.
To stay compatible with Java Collections, transactions are supported using TransactionWorker and TransactionRunner: the former is the interface we implement to execute our code in a transaction, and the latter runs the transaction.
Keys and values are represented as Java objects. Custom bindings can be defined to bind the stored bytes to any type or format, XML for example.
Data bindings must be defined to instruct the Collections API how keys and values are represented as stored data and how stored data is converted to and from Java objects. We can use one of the two default data bindings (SerialBinding, TupleBinding) or a custom data binding.
Environment, EnvironmentConfig, Database, and DatabaseConfig stay the same as they were for the Base API.
The Collections API extends Java serialization to store class descriptions separately, making data records much more compact.
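Conceptually, a binding is just a pair of functions between a Java object and its stored byte form. This stdlib-only sketch shows the idea for Integer keys; JE's real TupleBinding and SerialBinding classes are richer, and the method names here merely echo their entry-conversion roles:

```java
import java.nio.ByteBuffer;

public class IntBinding {
    // Convert an Integer key to its stored byte representation (4 bytes, big-endian).
    static byte[] objectToEntry(int key) {
        return ByteBuffer.allocate(4).putInt(key).array();
    }

    // Convert the stored bytes back into the Integer key.
    static int entryToObject(byte[] entry) {
        return ByteBuffer.wrap(entry).getInt();
    }

    public static void main(String[] args) {
        byte[] stored = objectToEntry(42);
        System.out.println(entryToObject(stored)); // 42
    }
}
```

A real binding pair must be inverse functions of each other, which is exactly the contract the Collections API relies on when it reads records back.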

To get a real sense of the BDB JE Collections API, think of it as persisting and retrieving objects through a collection class such as SortedMap, using methods like tailMap, subMap, put, putAll, and get.

But before we use the SortedMap object to access the stored data, we need to initialize the base objects like Database and Environment, create the ClassCatalog object, and finally define bindings for our key and value types.

Collections API sample

Now let's see how we can store and retrieve our objects using the Collections API. In this sample we persist Integer keys and String values using a SortedMap.

First, let's analyze the TransactionWorker implementation.

public class TransWorker implements TransactionWorker {
	private ClassCatalog catalog;
	private Database db;
	private SortedMap map;

	public TransWorker(Environment env) throws Exception {
	   DatabaseConfig dbConfig = new DatabaseConfig();
	   dbConfig.setAllowCreate(true);
	   dbConfig.setTransactional(true);
	   Database catalogDb = env.openDatabase(null, "catalog", dbConfig);
	   catalog = new StoredClassCatalog(catalogDb);
	   // use Integer tuple binding for key entries
	   TupleBinding keyBinding = TupleBinding.getPrimitiveBinding(Integer.class);
	   // use String serial binding for data entries
	   SerialBinding dataBinding = new SerialBinding(catalog, String.class);
	   db = env.openDatabase(null, "dben-col", dbConfig);
	   map = new StoredSortedMap(db, keyBinding, dataBinding, true);
	}

	/** Performs work within a transaction. */
	public void doWork() throws Exception {
	   // check for existing data before writing
	   Integer key = new Integer(0);
	   String val = (String) map.get(key);
	   if (val == null) {
	      map.put(key, "First"); // sample value; the original entry was lost
	      map.put(new Integer(10), "Second");
	   }
	   // reading the data back
	   Iterator iter = map.entrySet().iterator();
	   while (iter.hasNext()) {
	      Map.Entry entry = (Map.Entry) iter.next();
	      // process the entry
	   }
	}
}

TransWorker implements TransactionWorker, which makes it necessary to implement the doWork method. This method is called by TransactionRunner when we pass a TransWorker object to its run method. The TransWorker constructor simply receives an Environment object and constructs the other required objects: it opens the database in Collections mode, creates the required bindings for the keys and values we want to store in the database, and finally creates the SortedMap object we can use to store and retrieve objects. Now let's see the driver code that puts this class into action.

public class CollectionSample {
	public static void main(String[] argv) throws Exception {
		// creating the environment
		EnvironmentConfig envConfig = new EnvironmentConfig();
		envConfig.setAllowCreate(true);
		envConfig.setTransactional(true);
		Environment dbEnv = new Environment(new File("/home/masoud/dben-col"), envConfig);
		// creating an instance of our TransactionWorker
		TransWorker worker = new TransWorker(dbEnv);
		TransactionRunner runner = new TransactionRunner(dbEnv);
		// run the worker's doWork method inside a transaction
		runner.run(worker);
	}
}

The steps demonstrated in CollectionSample are self-describing. The only new object in this snippet is the TransactionRunner, which we use to run the TransWorker object. Many of the defensive programming portions are omitted to keep the code simple and concise; we need exception handling and proper closure of all BDB JE objects to ensure data integrity.

BDB JE Backup/Recovery and Tuning

Backup and Recovery

We can back up BDB JE databases simply by creating an operating-system-level copy of all .jdb files. When required, we can put the archived files back into the environment directory to restore the database to the state it was in. To get a consistent backup, make sure all transactions and write processes have finished before copying.

BDB JE provides a helper class, com.sleepycat.je.util.DbBackup, to perform the backup process from within a Java application. This utility class can create an incremental backup of a database and can later restore from that backup. The helper class freezes BDB JE log-file activity during the backup to ensure that the created backup exactly represents the database state when the backup process started.


Tuning

Berkeley DB JE has three daemon threads, and configuring these threads affects overall application performance and behavior. The three threads are as follows:

Cleaner Thread Responsible for cleaning and deleting unused log files. This thread runs only if the environment is opened for write access.
Checkpointer Thread Keeps the BTree shape consistent. The Checkpointer thread is triggered when the environment opens, when it closes, and when the database log file grows by a certain amount.
Compressor Thread Cleans unused nodes out of the BTree structure.

These threads can be configured through a properties file named je.properties or by using the EnvironmentConfig and EnvironmentMutableConfig objects. The je.properties file, a simple key-value file, should be placed inside the environment directory; it overrides any configuration we make using EnvironmentConfig and EnvironmentMutableConfig in Java code.
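For example, a minimal je.properties using the parameters discussed below might look like this (the values shown are illustrative, not recommendations):

```properties
# Illustrative values only; see the EnvironmentConfig Javadoc for defaults
je.cleaner.minUtilization=50
je.cleaner.expunge=true
je.checkpointer.bytesInterval=20000000
je.maxMemoryPercent=60
```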

The other factor with a large performance effect is cache size. For on-disk instances, the cache size determines how often the application must go to permanent storage to retrieve a data bucket. For in-memory instances, the cache size determines whether our database information will be paged into swap space or stay in main memory.

je.cleaner.minUtilization Ensures that a minimum percentage of the log is occupied by live records by removing obsolete records. The default is 50%.
je.cleaner.expunge Determines the cleaner's behavior when it is able to remove an entire log file. If "true" the log file is deleted; otherwise it is renamed to nnnnnnnn.del.
je.checkpointer.bytesInterval Determines how often the Checkpointer runs. Performing the checks little by little ensures a faster application startup but consumes more resources, especially I/O.
je.maxMemoryPercent Determines what percentage of the JVM maximum memory size can be used for the BDB JE cache. To determine the ideal cache size, put the application in the production environment and monitor its behavior.

A complete list of all configurable properties, with explanations, is available in the EnvironmentConfig Javadoc. The list is comprehensive and allows us to configure BDB JE at a granular level.

All of these parameters can also be set from Java code using the EnvironmentConfig object; the properties file overrides any values set that way.

Helper Utilities

Three command line utilities are provided to facilitate dumping the databases from one environment, verifying the database structure, and loading the dump into another environment.

DbDump Dumps a database to a user-readable format.
DbLoad Loads a database from the DbDump output.
DbVerify Verifies the structure of a database.

To run each of these utilities, change to the lib directory of the BDB JE installation and execute a command like the following:

java -cp je-3.3.75.jar com.sleepycat.je.util.DbVerify

The JAR file name may differ depending on your version of BDB JE. These commands can also be used to port a BDB JE database to BDB Core Edition.

Hot Tip

A very good set of tutorials for the different BDB JE APIs is available inside the docs folder of the BDB JE package, and examples of the different functionalities are provided in the examples directory.

About The Author


Masoud Kalali

Masoud Kalali holds a software engineering degree and has been working on software development projects since 1998. He has experience with a variety of technologies (.NET, J2EE, CORBA, and COM+) on diverse platforms (Solaris, Linux, and Windows). His expertise is in software architecture, design, and server-side development. Masoud has published several articles on Java.net and is one of the founding members of the NetBeans Dream Team. His main areas of research and interest include Web Services and Service-Oriented Architecture, along with the development and deployment of large-scale, high-throughput systems.

Blog: http://weblogs.java.net/blog/kalali/

Contact: Kalali@gmail.com

Recommended Book

Oracle Berkeley DB

The Berkeley DB Book is a practical guide to the intricacies of the Berkeley DB. This book covers in-depth the complex design issues that are mostly only touched on in terse footnotes within the dense Berkeley DB reference manual. It explains the technology at a higher level and also covers the internals, providing generous code and design examples.


Getting Started with Selenium

By Frank Cohen

24,604 Downloads · Refcard 67 of 204 (see them all)



About Selenium

Selenium is a portable software testing framework for Web applications. Selenium works well for QA testers needing record/playback authoring of tests and for software developers needing to author tests in Java, Ruby, Python, PHP, and several other languages using the Selenium API. The Selenium architecture runs tests directly in most modern Web browsers, including MS IE, Firefox, Opera, Safari, and Chrome. Selenium deploys on Windows, Linux, and Macintosh platforms.

Selenium was developed by a team of programmers and testers at ThoughtWorks. Selenium is open source software, released under the Apache 2.0 license and can be downloaded and used without royalty to the originators.

Architecture in a Nutshell

Selenium Browserbot is a JavaScript class that runs within a hidden frame within a browser window. The Browserbot runs your Web application within a sub-frame. The Browserbot receives commands to operate against your Web application, including commands to open a page, type characters into form fields, and click buttons.

Selenium architecture offers several ways to play a test.

Selenium architecture

Functional testing (Type 1) uses the Selenium IDE add-on to Firefox to record and play back Selenium tests in Firefox. Functional testing (Type 2) uses Selenium Grid to run tests in a farm of browsers and operating environments. For example, install Selenium Grid on 3 operating environments (for example, Windows Vista, Windows XP, and Ubuntu) and on each install 2 browsers (for example, Microsoft Internet Explorer and Firefox) to smoke test, integration test, and functional test your application on 6 combinations of operating environment and browser. Many more combinations of operating environment and browser are possible. An option for functional testing (Type 2) is to use the PushToTest TestMaker/TestNode open source project. It uses Selenium RC to provide Selenium Grid-like capability with the added advantage of providing data-driven Selenium tests, results analysis charts and graphs, and better stability of the test operations.

The PushToTest open-source project provides Selenium data-driven testing, load testing, service monitoring, and reporting. TestMaker runs load and performance tests (Type 3) in a PushToTest TestNode using the PushToTest SeleniumHTMLUnit library and the HTMLUnit Web browser (with the Rhino JavaScript engine).

Hot Tip

HTMLUnit runs Selenium tests faster than a real browser and requires much less memory and CPU resources.

Installing Selenium

Selenium IDE installs as a Firefox add-on. Below are the steps to download and install Selenium IDE:

  1. Download selenium-ide-1.0.2.xpi (or similar) from http://seleniumhq.org.
  2. From Firefox open the .xpi file. Follow the Firefox instructions.
  3. Note: Selenium Grid runs as an Ant task. You need JDK 1.6, Ant 1.7, and the Selenium Grid 1.0 binary distribution. Additional directions can be found at http://selenium-grid.seleniumhq.org/get_started.html
  4. See http://www.pushtotest.com/products for TestMaker installation instructions.

Record/Playback Using Selenium IDE

Hot Tip

Selenium IDE is a Firefox add-on that records clicks, typing, and other actions to make a test, which you can play back in the Firefox browser. Open Selenium IDE from the Firefox Tools drop-down menu, Selenium IDE command.

Selenium IDE

Selenium IDE preferences

Selenium IDE records interactions with the Web application, with one command per line. Clicking a recorded command highlights the command, displays a reference page, and displays the command in a command form editor. Click the command form entry down-triangle to see a list of all the Selenium commands.

Run the current test by clicking the Run Test Case icon in the icon bar. Right click a test command to choose the Set Breakpoint command. Selenium IDE runs the test to a breakpoint and then pauses. The icon bar Step icon continues executing the test one command at a time.

With Selenium IDE open, the menu bar context changes to provide access to Selenium commands: Open/Close Test Case and Test Suite. Test Suites contain one or more Test Cases.

Use the Options dropdown menu, Options command to set general preferences for Selenium IDE.

Selenium IDE provides an extensibility API set called User Extensions. You can implement custom functions and modify Selenium IDE behavior by writing JavaScript functions. We do not recommend writing User Extensions as the Selenium project makes no guarantees to be backwardly compatible from one version to the next.

Selenium Context Menu provides quick commands to insert new Selenium commands, evaluate XPath expressions within the live Web page, and to show all available Selenium commands. Right click on commands in Selenium IDE, and right-click on elements in the browser page to view the Selenium Context Menu commands.

Selenese Table Format

Selenium IDE is meant to be a lightweight record/playback tool to facilitate getting started with Selenium. It is not designed to be a full test development environment. Selenium IDE records tests in an HTML table format (named Selenese), but the table format only handles simple procedural test use cases. The Selenese table format does not provide operational test data support, conditionals, branching, or looping. For these you must export Selenese files into Java, Ruby, or other supported languages.

Selenium Command Reference

Selenium comes with commands to: control Selenium test operations, browser and cookie operations, pop-up, button, list, edit field, keyboard, mouse, and form operations. Selenium also provides access operations to examine the Web application (details are at http://release.seleniumhq.org/selenium-core/0.8.0/reference.html).

Command Value, Target, Wait Command
Selenium Control
setTimeout milliseconds
setMouseSpeed number of pixels
setSpeed milliseconds
addLocationStrategy strategyName
allowNativeXpath boolean
ignoreAttributesWithoutValue boolean
assignId locator
captureEntirePageScreenShot filename, kwargs
echo message
pause milliseconds
runScript javascript
waitForCondition javascript
waitForPageToLoad milliseconds
waitForPopUp windowID
fireEvent locator
Browser Operations
open url
openWindow url
goBack goBackAndWait
refresh refreshAndWait
deleteCookie name
deleteAllVisibleCookies deleteAllVisibleCookiesAndWait
setBrowserLogLevel logLevel
Cookie Operations
createCookie nameValuePair
deleteCookie name
deleteAllVisibleCookies deleteAllVisibleCookiesAndWait
Popup Box Operations
answerOnNextPrompt answer
chooseCancelOnNextConfirmation chooseCancelOnNextConfirmationAndWait
chooseOkOnNextConfirmation chooseOkOnNextConfirmationAndWait
Checkbox & Radio Buttons
check locator
uncheck locator
Lists & Dropdowns
addSelection locator
removeSelection removeSelectionAndWait
removeAllSelections removeAllSelectionsAndWait
Edit Fields
type locator
typeKeys locator
setCursorPosition locator
Keyboard Operations
keyDown locator
keyPress locator
keyUp locator
altKeyDown altKeyDownAndWait
altKeyUp altKeyUpAndWait
controlKeyDown controlKeyDownAndWait
controlKeyUp controlKeyUpAndWait
metaKeyDown metaKeyDownAndWait
metaKeyUp metaKeyUpAndWait
shiftKeyDown shiftKeyDownAndWait
shiftKeyUp shiftKeyUpAndWait
Mouse Operations
click locator
clickAt locator
doubleClick locator
doubleClickAt locator
contextMenu locator
contextMenuAt locator
mouseDown locator
mouseDownAt locator
mouseMove locator
mouseMoveAt locator
mouseOut locator
mouseOver locator
mouseUp locator
mouseUpAt locator
dragAndDrop locator
dragAndDropToObject sourceLocator
Form Operations
submit formLocator
Windows/Element Selection
select locator
selectFrame locator
selectWindow windowID
focus locator
highlight locator
windowFocus windowFocusAndWait
windowMaximize windowMaximizeAndWait

Element Locators

Selenium commands identify elements within a Web page using:

identifier=id Select the element with the specified @id attribute. If no match is found, select the first element whose @name attribute is id.
name=name Select the first element with the specified @name attribute. The name may optionally be followed by one or more element-filters, separated from the name by whitespace. If the filterType is not specified, value is assumed. For example: name=style value=carol
dom=javascriptExpression Find an element using JavaScript traversal of the HTML Document Object Model. DOM locators must begin with "document." For example: dom=document.forms['form1'].myList dom=document.images[1]
xpath=xpathExpression Locate an element using an XPath expression. Here are a few examples:

xpath=//img[@alt='The image alt text']

link=textPattern Select the link (anchor) element which contains text matching the specified pattern.
css=cssSelectorSyntax Select the element using css selectors. For example:

css=span#firstChild + span

The Selenium 1.0 css selector locator supports all css1, css2, and css3 selectors except namespaces in css3, some pseudo-classes (:nth-of-type, :nth-last-of-type, :first-of-type, :last-of-type, :only-of-type, :visited, :hover, :active, :focus, :indeterminate) and pseudo-elements (::first-line, ::first-letter, ::selection, ::before, ::after). Without an explicit locator prefix, Selenium uses the following default strategies:

dom, for locators starting with "document."
xpath, for locators starting with "//"
identifier, otherwise
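These default rules amount to a simple prefix check; a minimal Python sketch (illustrative, not Selenium's actual implementation) might look like:

```python
def default_locator_strategy(locator):
    """Pick the default strategy for a locator without an explicit
    prefix such as 'xpath=' or 'css=' (illustrative sketch)."""
    if locator.startswith("document."):
        return "dom"         # JavaScript DOM traversal
    if locator.startswith("//"):
        return "xpath"       # XPath expression
    return "identifier"      # match @id, then fall back to @name
```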

Your choice of element locator type has an impact on the test playback performance. The following table compares performance of Selenium element locators using Firefox 3 and Internet Explorer 7.

Locator used Type Firefox 3 Internet Explorer 7
q Locator 47 ms 798 ms
//input[@name='q'] XPath 32 ms 563 ms
//html[1]/body[1]//form[1]//input[2] XPath 47 ms 859 ms
//input[2] XPath 31 ms 564 ms
document.forms[0].elements[1] DOM Index 31 ms 125 ms

Additional details on Selenium performance can be found at: http://www.pushtotest.com/docs/thecohenblog/symposium

Script-Driven Testing

Selenium implements a domain-specific language (DSL) for testing. Some applications do not lend themselves to record/playback: 1) the test flow changes depending on the results of a step in the test, 2) the input data changes depending on the state of the application, and 3) the test requires asynchronously operating test flows. For these conditions, consider using the Selenium DSL in a script-driven test. Selenium provides support for Java, Python, Ruby, Groovy, PHP, and C#.

Selenium IDE helps get a script-driven test started by exporting to a unit test format. For example, consider the following test in the Selenese table format:

Selenese table format

Use the Selenium IDE File menu, Export, Python Selenium RC command to export the test to a jUnit-style TestCase written in Python. The following shows the exported Python source code:

from selenium import selenium
import unittest, time, re

class franktest(unittest.TestCase):
	def setUp(self):
		self.verificationErrors = []
		# Base URL below is a placeholder; the exported test uses the
		# URL the test was recorded against
		self.selenium = selenium("localhost", 4444, "*chrome",
			"http://www.example.com/")
		self.selenium.start()
	def test_franktest(self):
		sel = self.selenium
		sel.open("/")
		sel.type("q", "sock puppet")
	def tearDown(self):
		self.selenium.stop()
		self.assertEqual([], self.verificationErrors)
if __name__ == "__main__":
	unittest.main()
An exported test like the one above has access to all of Python's functions, including conditionals, looping and branching, reusable object libraries, inheritance, collections, and dynamically typed data formats.

Selenium provides a Selenium RC client package for Java, Python, C#, Ruby, Groovy, PHP, and Perl. The client object identifies the Selenium RC service in its constructor:

self.selenium = selenium("localhost", 4444, "*iexplore", "http://www.example.com/")  # base URL is a placeholder

The above code identifies the Selenium RC service running on the localhost machine at port 4444. This client will run the test in Microsoft Internet Explorer. The fourth parameter identifies the base URL from which the recorded test will operate.

Selenium RC service

Calling the selenium.start() command initializes and starts the Selenium RC service. The Selenium RC client module (import selenium in Python) provides methods to operate the Selenium DSL commands (click, type, etc.) in the Browserbot running in the browser. For example, selenium.click("open") tells the Browserbot to send a click command to the element with an id tag equal to "open". The browser responds to the click command and communicates with the Web application.

At the end of the test the selenium.stop() command ends the Selenium RC service.

Selenium and Ajax

Ajax uses asynchronous JavaScript functions to manipulate the browser's DOM representation of the Web page. Many Selenium commands are not compatible with Ajax. For example, clickAndWait will time out waiting for the browser to load the Web page, because Ajax functions that manipulate the current Web page in response to a click event do not reload the page. We recommend using Selenium commands that poll the DOM until the Ajax methods complete their tasks. For example, waitForElementPresent polls the DOM until the JavaScript function adds the desired element to the page before continuing with the rest of the Selenium script.
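The poll-until-true idea behind the waitFor* commands can be sketched in plain Python (the helper name and defaults are illustrative, not part of any Selenium API):

```python
import time

def wait_for(condition, timeout=30.0, poll_interval=0.5):
    """Poll a zero-argument callable until it returns a truthy value,
    mirroring the semantics of Selenium's waitFor* commands."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    # Like a Selenium waitFor* command, give up after the timeout
    raise TimeoutError("condition not met within %.1f seconds" % timeout)
```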

Consider the following checklist when using Selenium with Ajax applications:


Your Selenium tests may require a large number of extra commands to ensure the test stays in synchronization with the Ajax application. Consider an Ajax application that requires a log-in, then displays a selection list of items, then presents an order form. Ajax-enabled applications often deliver multiple steps of function on a single page and show and hide elements as you work with the application. Some even disable form submit buttons and other user interface elements until you enter enough valid information. For an application like this you will need a combination of Selenium commands. Consider the following Selenium test:

waitForElementPresent pauses the test until the Ajax application adds the requisite element to the page. waitForCondition pauses the test until the JavaScript function evaluates to true.


Some Ajax applications use lazy-loading techniques to improve user interaction with the application. For example, a stock market application provides a list of 10 stock quotes asynchronously after the user clicks the submit button. The list may take 10 to 50 seconds to completely update on the screen. Using waitForXPathCount pauses the test until the page contains the number of nodes that match the specified XPath expression.


Many Ajax applications use dynamic element id tags. The Ajax application that named the Log-out button app_6 may later rename the button to app_182. We recommend using DOM element locator techniques, or XPath techniques if needed, to find elements dynamically by position or by other attributes.
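The idea of locating an element by a stable attribute rather than a generated id can be illustrated with Python's ElementTree XPath support (the markup and the ids in it are hypothetical):

```python
import xml.etree.ElementTree as ET

# Hypothetical page fragment whose button id ("app_182") is generated
# by the Ajax framework and may change between releases
page = ("<html><body><form>"
        "<input type='submit' id='app_182' value='Logout'/>"
        "</form></body></html>")
root = ET.fromstring(page)

# Locate the button by its stable @value attribute, not its volatile @id
button = root.find(".//input[@value='Logout']")
```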

Command window

Working with TinyMCE and Ajax Objects

Ajax is about moving functions off the server and into the browser. The Selenium architecture supports innovative new browser-based functions because Selenium's Browserbot is itself a JavaScript class. The Browserbot even lets Selenium tests operate JavaScript functions as part of the test. For example, TinyMCE (http://tinymce.moxiecode.com) is a graphical text editor component for embedding in Web pages. TinyMCE supports styled text and what-you-see-is-what-you-get editing. Testing TinyMCE can be challenging. Selenium offers click and type functions that interact with TinyMCE but no direct commands for TinyMCE's more advanced functions. For example, imagine testing TinyMCE's ability to stylize text. The test needs to insert text, move the insertion point, select a sentence, bold the text, and drag the sentence to another paragraph. This is beyond Selenium's DSL. Instead, the Selenium test may include JavaScript commands that interact with TinyMCE's published API (http://tinymce.moxiecode.com/documentation.php).

Here is an example of using the TinyMCE API from a Selenium test context:

tinyMCE.execCommand('mceInsertContent',false,'<b>Hello world!!</b>');

Run the above JavaScript function from within a Selenium test using the assertEval command.

assertEval javascript:this.browserbot.getCurrentWindow().tinyMCE.execCommand('mceInsertContent',false,'<b>Hello world!!</b>');

Data Production

Selenium offers no operational test data production capability itself. For example, a Selenium test of a sign-in page usually needs sign-in name and sign-in password operational test data to operate. Two options are available: 1) use the data access features in Java, Ruby, or one of the other supported languages; 2) use PushToTest TestMaker's Selenium Script Runner to inject data from comma-separated value (CSV) files, relational databases, objects, and Web services. See http://tinyurl.com/btxvn4 for details.

Create a Comma-Separated-Value file. Use your favorite text editor or spreadsheet program. Name the file data.csv. The contents must be in the following form.

Comma-Separated-Value file

The first row of the data file contains column names. These are used to map values into the Selenium test. Change the Selenium test to refer to the mapping names. PushToTest maps the data from the named column in the CSV data file to the Selenium test data using the first-row definitions.
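Option 1 above, using the language's own data access features, can be sketched in Python with the standard csv module (the file contents and column names here are hypothetical):

```python
import csv, io

# Stand-in for the contents of data.csv; the first row holds column names
csv_text = "username,password\nalice,secret\nbob,hunter2\n"

def load_test_data(text):
    """Map each data row to a dict keyed by the first-row column names."""
    return list(csv.DictReader(io.StringIO(text)))

rows = load_test_data(csv_text)
# Each mapped row can then drive one iteration of a Selenium test, e.g.
# sel.type("username", row["username"])
```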

Connect the Data Production Library (DPL) to the Selenium test in a TestMaker TestScenario. Begin by defining a HashDPL. This DPL reads from CSV data files and provides the data to the test.

	<dpl name="mydpl" type="HashDPL">
		<argument name="file" dpl="rsc" value="getDataByIndex" index="0"/>
	</dpl>

Next, tell the TestScenario to send the data.csv and Selenium test files to the TestNodes that will operate the test.

	<data path="data.csv"/>
	<selenese path="CalendarTest.selenium"/>

Then tell the Selenium ScriptRunner to use the DPL provided data when running the Selenium test.

<run name="CalendarTest" testclass="CalendarTest.selenium"
	method="runSeleneseFile" langtype="selenium">
	<argument dpl="mydpl" name="DPL_Properties" value="getNextData"/>
</run>

The getNextData operation gets the next row of data from the CSV file. The Selenium ScriptRunner injects the data into the Selenium test.

Browser Sandbox, Redirect, and Proxy Issues

Selenium RC launches the browser with itself as the proxy server to inject the JavaScript of the Browserbot and your test. This architecture makes it possible to run the same test on multiple browsers. However, some browsers will warn the user of possible security threats when the proxy starts and when the test requests functions or pages outside of the originating domain. The browser takes control and stops the Browserbot operations to display the warning message. When this happens, the test stops until a user dismisses the warning. There are no reliable cross-browser workarounds.

Some Web applications redirect from http to https URLs. The browser will often issue a warning that stops the Selenium test.

Selenium does not support a test moving across domains. For example, a test that started with a baseurl of www.mydomain.com may not open a page on www.seconddomain.com.

Selenium RC Browser Profiles

Selenium Remote Control (RC) enables test operation on multiple real browsers. A browser profile attribute may be any of the following installed browsers: chrome, firefox, konqueror, piiexplore, iehta, mock, opera, pifirefox, safari, iexplore, and custom. Append the path to the real browser after the browser profile if your system path does not include the path to the browser. For example:

*firefox /Applications/Firefox.app/Contents/MacOS/firefox

Component Approach Example

Many organizations pursue a "Test and Trash" methodology to achieve agile software development lifecycles. For example, an organization pursuing agile techniques may change up to 30% of an application over an 8-week application lifecycle, and without further thought up to 30% of their recorded tests break!

Sample test

We recommend a component approach to building tests. Test components perform specific test operations. We write or record tests as individual components of test function. For example, one component operates the sign-in function of a private Web application. When the sign-in portion of the application changes, we only need to change the sign-in test and the rest of the test continues to perform normally.

Selenium supports the component approach in three ways: 1) Selenium IDE supports Test Suites and Test Cases; 2) exporting Selenium tests to dynamic languages (Java, Ruby, Perl, etc.) creates reusable software classes; and 3) PushToTest TestMaker supports multiple use cases with parameterized test use cases.

In Selenium IDE, the File menu enables tests to be saved as test cases or test suites. Record a test, then use File -> Save Test Case. Create a second Test Case by choosing File -> New Test Case and record the second test use case. Save the Test Suite for these two test use cases by choosing File -> Save TestSuite. Click the "Run entire test suite" icon in the Selenium IDE tool bar.

TestMaker defines test use cases using a simple XML notation:

	<usecase name="MailerCheck_usecase">
		<run name="LogIn" testclass="Login.selenium" instance="myinst"
			method="runSeleneseFile" langtype="selenium"/>
		<run name="OrderProduct" testclass="OrderProduct.selenium" instance="myinst"
			method="runSeleneseFile" langtype="selenium"/>
	</usecase>

Reporting Options

Selenium offers no results reporting capability of its own. Two options are available: 1) Write your tests as a set of JUnit tests and use JUnit Report (http://ant.apache.org/manual/OptionalTasks/junitreport.html) to plot success/failure charts, 2) Use PushToTest TestMaker Results Analysis Engine to produce more than 300 charts from the transaction and step time tracking of Selenium tests.

For example, TestMaker tracks Selenium command duration in a test suite or test case. Consider the following chart. This shows the "Step" time it takes to process each Selenium command in a test use case over 10 equal periods of time that the test took to operate.

Step contribution

Selenium Biosphere

TestMaker allows repurposing Selenium tests as load tests and service monitors. http://www.pushtotest.com

BrowserMob facilitates low-cost Selenium load testing. http://browsermob.com/load-testing

SauceLabs provides a farm of Selenium RC servers for testing. http://saucelabs.com/

ThoughtWorks Twist can be used for test authoring and management. http://studios.thoughtworks.com/twist-agile-test-automation

Running a Selenium test as a functional test in TestMaker. TestMaker displays the success/failure of each command in the test and the duration in milliseconds of each step.

The Future: Selenium 2.0 (AKA WebDriver)

The Selenium Project started the WebDriver project, to be delivered as Selenium 2.0. WebDriver is a new architecture that plays Selenium tests by driving the browser through its native interface. This solves the test playback stability issue in Selenium 1.0 but requires the Selenium project to maintain individual API drivers for all the supported browsers. While there is no release date for Selenium 2.0, the WebDriver code is already functional and available for download at http://code.google.com/p/webdriver.

Available Training

SkillsMatter.com, Think88.com, PushToTest.com, RTTSWeb.com, and Scott Bellware (http://blog.scottbellware.com) offer training courses for Selenium. PushToTest offers free Open Source Test Workshops (http://workshop.pushtotest.com) as a meet-up for Selenium and other open source test tool users.

About the Name Selenium

Selenium lore has it that the originators chose the name Selenium after learning that selenium is the antidote to mercury poisoning. There appears to be no love between the Selenium team and HP Mercury, but perhaps a bit of envy.


Getting Started with ServiceMix 4.0


9,587 Downloads · Refcard of 204 (see them all)


The Essential Getting Started with ServiceMix 4.0 Cheat Sheet


Getting Started with Eclipse RCP

By James Sugrue

19,648 Downloads · Refcard 62 of 204 (see them all)



About the Rich Client Platform

The Eclipse Rich Client Platform (RCP) is a platform for building and deploying rich client applications. It includes Equinox, a component framework based on the OSGi standard, the ability to deploy native GUI applications to a variety of desktop operating systems, and an integrated update mechanism for deploying desktop applications from a central server. Using the RCP you can integrate with the Eclipse environment, or can deploy your own standalone rich application.

Introducing the Plug-in Development Environment

To get started in developing your own plug-ins, first download a version of Eclipse including the Plug-in Development Environment (PDE). Eclipse Classic is the best distribution for this.

When developing plug-ins, you should use the Plug-in Development perspective. You'll notice this perspective provides another tab in your Project Navigator listing all the plug-ins available.

To create an RCP application, go to the File menu and select New > Project, which presents the new project wizard. From here, choose Plug-in Project.

New project wizard

Figure 1: The New Project Wizard

The next screen allows you to assign a name to your plug-in. Plug-in names usually follow Java's package naming conventions. RCP applications should be targeted to run on a particular version of Eclipse; here we choose Eclipse 3.5.

RCP Project Settings Page

Figure 2: RCP Project Settings Page

The next page in the project wizard allows you to set some important attributes of your plug-in. This page allows you to specify whether your plug-in will make contributions to the UI. In the case of RCP plug-ins, this will usually be true. You can choose whether to create your own RCP application, or to create a plug-in that can be integrated with existing Eclipse installations.

The following table summarizes other plug-in settings and what they mean for your application. All of these settings can be changed in the generated MANIFEST.MF file for your project at any stage.

Attribute Name Default Value Meaning
id <project name> The identifier for this RCP plug-in
Version 1.0.0.qualifier The plug-in version. Multiple versions of any plug-in are possible in your Eclipse environment provided they have unique version numbers
Name RCP Application The readable name of this plug-in
Provider The second part of your project package name. The provider of this plug-in

Hot Tip

Get started quickly with your first RCP application by using the included RCP Mail Template, available when you choose to create a standalone RCP application.

MANIFEST.MF explained

The generated META-INF/MANIFEST.MF file is the centre of your RCP plug-in. Here you can define the attributes, dependencies, and extension points related to your project. In addition, you may have a plugin.xml file. The contents of both these files are shown in the plug-in manifest editor.

Plug-In manifest editor

Figure 3: The plug-in manifest editor

The Overview tab in this editor allows you to change the settings described earlier in the new project wizard. It also provides a shortcut where you can launch an Eclipse application containing your new RCP plug-in.

The Dependencies tab describes how this plug-in interacts with others in the system. All plug-ins on which your plug-in depends need to be added here.

The Runtime tab allows you to contribute packages from your own plug-in to others to use or extend. You can also add libraries that don't exist as plug-ins to your own project in the Classpath section.

Hot Tip

While you may change your build path through the Dependencies or Runtime tabs, changes to dependent plug-ins made in the Java Build Path of the project properties will not be reflected in the plug-in's manifest.

The Extensions tab is where you go to define how this plug-in builds on the functionality of other plug-ins in the system, such as for adding menus, views or actions. We will describe these extension points in more detail in the relevant sections. The Extension Points tab allows you to define your own extensions for other plug-ins to use.

The Standard Widget Toolkit and JFace

While developing UI code for your RCP application, it is important to understand the Standard Widget Toolkit (SWT). This is a layer that wraps the platform's native controls. JFace provides viewers, in a similar way to Swing, for displaying your data in list, table, tree, and text viewers.

The UI toolkits used in Eclipse applications are a large topic, so we assume that the reader will be aware of how to program widgets in SWT and JFace.

Adding a menu to your plug-in

One of the first things that you will want to do with your RCP plug-in is to provide a menu, establishing its existence with the Eclipse application that it is built into. To do this, as with any additions to our plug-in, we start in the Extensions tab of the plug-in manifest editor.

Up to Eclipse 3.3, Actions were the only API available for dealing with menus, but since then the Commands API has become available, and it is what we will focus on here.

To add a menu in the command API you will need to follow similar steps to these:

Declare a command

To do this we use the org.eclipse.ui.commands extension point. Simply click on the Add... button in the Extensions tab and choose the relevant extension point.

First, you will need to associate this command with a category. Categories are useful for managing large numbers of commands. From the org.eclipse.ui.commands node, select New > Category. The required fields are a unique ID and a readable name.

After this, right-click on the node and choose New>Command. The important attributes for a command are listed below.

Attribute Name Required Use
id Yes A unique id for this command
Name Yes A readable name for the command
Description No A short description for display in the UI
CategoryID No The id of the category for this command (that you described in the previous step).
DefaultHandler No A default handler for this command. Usually you will create your own handler.
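Putting these attributes together, a command and its category declared in plugin.xml might look like the following sketch (the com.example IDs and the Say Hello command are hypothetical):

```xml
<extension point="org.eclipse.ui.commands">
   <!-- Category used to group related commands -->
   <category
         id="com.example.rcp.commands.category"
         name="Example Commands"/>
   <!-- The command itself, bound to the category above -->
   <command
         id="com.example.rcp.commands.sayHello"
         name="Say Hello"
         categoryId="com.example.rcp.commands.category"/>
</extension>
```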

Declare a Menu Contribution for the Command

To create a menu, you will first need to add the org.eclipse.ui.menus extension point. From the created node in the UI, select New>menuContribution. The required attribute for the menu contribution is its locationURI, which specifies where the menu should be placed in the UI. This URI takes the format [scheme]:[id]?[argument-list]

Some of the more useful locationURIs in the Eclipse platform follow:

Location URI Use
menu:org.eclipse.ui.main.menu?after=window Inserts this contribution on the main menu bar after the Window menu
menu:file?after=additions Inserts this contribution in the File menu after the additions group
toolbar:org.eclipse.ui.main.toolbar Inserts this contribution on the main toolbar
popup:org.eclipse.ui.popup.any Adds this contribution to any popup menu in the application

Once the location of your contribution is chosen, click on New>command on this contribution to define the menu. The following attributes exist for each command:

Attribute Required Use
commandId Yes The id of the Command object to bind to this element, typically already defined, as in our earlier step. Click Browse... to find this
Label No The readable label to be displayed for this menu item in the user interface
id No A unique identifier for this item. Further menu contributions can be placed under this menu item by using this id in the locationURI
mnemonic No The Character within the label to be assigned as the mnemonic
icon No Relative path to the icon that will be displayed to the left of the label
tooltip No The tooltip to display for this menu item
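A complete menu contribution combining these attributes might look like this plugin.xml sketch (the com.example IDs and the Say Hello command are hypothetical):

```xml
<extension point="org.eclipse.ui.menus">
   <!-- Contribute a menu to the main menu bar -->
   <menuContribution locationURI="menu:org.eclipse.ui.main.menu?after=additions">
      <menu id="com.example.rcp.menus.exampleMenu" label="Example">
         <!-- Menu item bound to a previously declared command -->
         <command
               commandId="com.example.rcp.commands.sayHello"
               label="Say Hello"
               id="com.example.rcp.menus.sayHello"/>
      </menu>
   </menuContribution>
</extension>
```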

Hot Tip

Defining a toolbar item is a similar process. Based on a menuContribution with the correct locationURI, select New>Toolbar providing a unique id. Create a new command under the toolbar similar to the menu item approach.

Create a Handler for the Command

The final extension point required for the menu is org.eclipse.ui.handlers. A handler has two vital attributes. The first is the commandId, which should be the same as the command id specified at the beginning. As you can see, this is the glue between all three parts of the menu definition.

You will also need to create a concrete class for this handler, which should implement the org.eclipse.core.commands.IHandler interface.
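Tying the handler to the command, the contribution might look like this plugin.xml sketch (the com.example id and class name are hypothetical; the handler class would typically extend org.eclipse.core.commands.AbstractHandler rather than implement IHandler directly):

```xml
<extension point="org.eclipse.ui.handlers">
   <!-- commandId matches the id used in the command declaration -->
   <handler
         commandId="com.example.rcp.commands.sayHello"
         class="com.example.rcp.handlers.SayHelloHandler"/>
</extension>
```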

Hot Tip

Clicking on the class hyperlink on the manifest editor will pop up a New Class Wizard with the fields autofilled for this.

Finally, you will need to define when this command is enabled or active. This can be done programmatically in the isEnabled() and isHandled() methods. While this is easiest, the recommended approach is to use the activeWhen and enabledWhen expressions in the plug-in manifest editor, which avoids unnecessary plug-in loading.

Hot Tip

Once you have added an extension point, plugin.xml will become available. All extension points can be added through the manifest editor, or in XML format through this file.


Views

In an RCP application, Views are used to present information to the user. A view must implement the org.eclipse.ui.IViewPart interface, or subclass org.eclipse.ui.part.ViewPart.

Difference between perspective, editor and view

Figure 4: An illustration of the difference between perspective, editor and view.

To create a View, you will need to add the org.eclipse.ui.views extension point. In order to group your views, it is useful to create a category for them. Select New>Category from the org.eclipse.ui.views node to do this. The required fields are a unique ID and a readable name.

Next, choose New>View from the extension point node and fill in the necessary details.

Attribute Required Use
id Yes The unique id of the View
name Yes A readable name for this view
class Yes The class that implements the IViewPart interface
category No The id of the category that contains this view. This category should be used if you wish to group views together in the Show Views... dialog.
icon No The image to be displayed in the top left hand corner of the view
allowMultiple No Flag indicating whether multiple views can be instantiated. The default value is false.
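A view declaration using these attributes might look like the following plugin.xml sketch (IDs, class name, and icon path are hypothetical):

```xml
<extension point="org.eclipse.ui.views">
   <!-- Category shown in the Show View dialog -->
   <category
         id="com.example.rcp.views.category"
         name="Example Views"/>
   <!-- The view itself; class must implement IViewPart -->
   <view
         id="com.example.rcp.views.sampleView"
         name="Sample View"
         class="com.example.rcp.views.SampleView"
         category="com.example.rcp.views.category"
         icon="icons/sample.gif"/>
</extension>
```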

The code behind the view is in a class that extends org.eclipse.ui.part.ViewPart. All controls are created programmatically in the createPartControl() method of this class.

To facilitate lazy loading, a workbench page only holds IViewReference objects, so that you can list out the views without loading the plug-in that contains the view definition.

When created, you will see your view in the Window>Show View>Other... dialog

Hot Tip

It's good practice to store the view's id as a public constant in your ViewPart implementation, for easy access.

Loose Coupling

To facilitate loose coupling, your ViewPart should implement org.eclipse.ui.ISelectionListener. You will also need to register it as a selection listener for the entire workbench, typically in createPartControl():

getSite().getWorkbenchWindow().getSelectionService().addSelectionListener(this);

This allows your view to react to selections made outside of the view's own context.


Editors

An editor is used in an Eclipse RCP application when you want to create or modify files or other resources. Eclipse already provides some basic text and Java source file editors.

In your plug-in manifest editor, add the org.eclipse.ui.editors extension point and fill in the following details:

Attribute Required Use
id Yes The unique id of this editor
name Yes A readable name for this editor
icon No The image to be displayed in the top left hand corner of the editor when it is open
extensions No A string of comma separated file extensions that are understood by the editor
class No The class that implements the IEditorPart interface
command No A command to run to launch an external editor
launcher No The name of a class that implements IEditorLauncher to launch an external editor
contributorClass No A class that implements IEditorActionBarContributor and adds new actions to the workbench menu and toolbar which reflect the features of the editor type
default No If true this editor will be used as the default for this file type. The default value is false
filenames No A list of filenames understood by the editor. More specific than the extensions attribute
matchingStrategy No An implementation of IEditorMatchingStrategy that allows an editor to determine whether a given editor input should be opened
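An editor declaration using a few of these attributes might look like this plugin.xml sketch (IDs, class name, and the file extension are hypothetical):

```xml
<extension point="org.eclipse.ui.editors">
   <!-- Editor registered for files with the .sample extension -->
   <editor
         id="com.example.rcp.editors.sampleEditor"
         name="Sample Editor"
         class="com.example.rcp.editors.SampleEditor"
         extensions="sample"
         icon="icons/sample.gif"
         default="false"/>
</extension>
```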

Editors implement the org.eclipse.ui.IEditorPart interface, or subclass org.eclipse.ui.part.EditorPart.

Like views, to facilitate lazy loading, a workbench page only holds IEditorReference objects, so that you can list out the editors without loading the plug-in that contains the editor definition.


Perspectives

Perspectives are a way of grouping your views and editors together in a way that makes sense for a particular context, such as debugging. By creating your own perspective, you can hook into the Window>Open Perspective dialog.

To create a perspective, you need to extend the org.eclipse.ui.perspectives extension point.

Attribute Required Use
id Yes The unique id of this perspective
name Yes A readable name for the perspective
class Yes The class that implements the IPerspectiveFactory interface
icon No The image to be displayed related to this perspective
Fixed No Whether this perspective can be closed or not. Default is false

The class driving the perspective implements org.eclipse.ui.IPerspectiveFactory. This class has one method, createInitialLayout(), within which you can use the IPageLayout.addView() method to add views directly to the perspective. To group many views together in a tabbed fashion, rather than side by side, IPageLayout.createFolder() can be used.

Hot Tip

When running your application you need to ensure that you have all required plug-ins included. Do this by checking your Run Configurations: go to the Plug-ins tab and click Validate Plug-ins. If there are errors, click Add Required Plug-ins to fix them.


Preferences

Now that you have created a perspective and a view for your RCP application, you will probably want to provide some preference pages. Your contributed preference pages will appear in the Window>Preferences dialog.

To provide preference pages you will need to add the org.eclipse.ui.preferencePages extension point in the plug-in manifest editor.

Attribute Required Use
id Yes The unique id of this preference page
name Yes A readable name for the preference page
class Yes The class that implements the IWorkbenchPreferencePage interface
category No Path indicating the location of the page in the preferences tree. The path may be defined using the parent preference page id or a sequence of ids separated by "/". If no category is specified, the page will appear at the top level of the preferences tree.
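A preference page contribution might look like this plugin.xml sketch (the id and class name are hypothetical):

```xml
<extension point="org.eclipse.ui.preferencePages">
   <!-- class must implement IWorkbenchPreferencePage -->
   <page
         id="com.example.rcp.preferences.mainPage"
         name="Example Preferences"
         class="com.example.rcp.preferences.MainPreferencePage"/>
</extension>
```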

While the preference page class must implement org.eclipse.ui.IWorkbenchPreferencePage, it is useful to extend org.eclipse.jface.preference.FieldEditorPreferencePage, since then createFieldEditors() and init() are the only methods you need to implement in order to display a standard preference page. A complete list of FieldEditors is provided in the org.eclipse.jface.preference package.

Loading and Storing Preferences

Preferences for a plug-in are stored in an org.eclipse.jface.preference.IPreferenceStore object. You can access a plug-in's preference store through its Activator, which will typically extend org.eclipse.ui.plugin.AbstractUIPlugin. Each preference you add to the store is assigned a key. Preferences are stored as String values, but methods are provided to access them in a number of formats, such as double, int, and boolean.

Property Sheets

While preferences display the overall settings for the plug-in, property sheets display the properties of views, editors, or other resources in the Eclipse environment. By hooking into the Properties API, the properties for your object will appear in the Properties view (usually displayed at the bottom of your Eclipse application).

The Properties view checks whether the selected object in the workspace supports the org.eclipse.ui.views.properties.IPropertySource interface, either through direct implementation or via the object's getAdapter() method. Each property is given a descriptor and a value through the IPropertySource interface.


Help

All good applications should provide some level of user assistance. To add help content to the standard Help>Help Contents window, you can use the org.eclipse.help.toc extension point. Add a number of toc items to this extension point; the only mandatory attribute for each toc entry is the file that contains the table of contents definition.
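The registration itself is a small plugin.xml fragment; a minimal sketch, assuming the table of contents lives in a toc.xml file at the plug-in root:

```xml
<extension point="org.eclipse.help.toc">
   <!-- primary="true" makes this a top-level entry in Help Contents -->
   <toc file="toc.xml" primary="true"/>
</extension>
```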

Hot Tip

To see a quick example of what help content should look like, choose the Help Content item from the Extension Wizards tab when adding to the plug-in's manifest.

<toc label="Getting Started" link_to="toc.xml#gettingstarted">
   <topic label="Main Topic" href="html/gettingstarted/maintopic.html">
      <topic label="Sub Topic" href="html/gettingstarted/subtopic.html"/>
   </topic>
   <topic label="Main Topic 2">
      <topic label="Sub Topic 2" href="html/gettingstarted/subtopic2.html"/>
   </topic>
</toc>

Each topic entry should have a link to an HTML file with the full content for that topic. The XML extract above, from a table of contents file, illustrates this. You can also use the table of contents editor, which opens by default in Eclipse when you choose a toc file.

Cheat Sheets

Another user assistance mechanism used in Eclipse is a cheat sheet, which guides the user through a series of steps to achieve a task. To create your initial cheat sheet content, use the New>Other...>User Assistance>Cheat Sheet wizard. This presents you with an editor to add an Intro and a series of items, with the option to hook in commands to automate the execution of the task.

To add this cheat sheet to your plug-in manifest, the cheat sheet editor has a Register this cheat sheet link in the top right corner. When registering the cheat sheet you will need to provide a category and a description.

Cheat sheet registration dialog

Figure 5: Cheat sheet registration dialog

Clicking Finish on this dialog will add the org.eclipse.ui.cheatsheets.cheatSheetContent extension point to your manifest. You can modify the details of the cheat sheet from here if necessary.


Features

You can help the user load your plug-in(s) as a single unit by combining them into one feature. Eclipse provides a wizard to create a feature through New Project>Plug-in Development>Feature Project.

This wizard generates a feature.xml file, which has an editor, similar to the plug-in manifest editor, where you can change the details of your feature.

The most important section is the Plug-ins tab, which lists the plug-ins required for your feature. The Included Features tab allows you to specify sub-features to include as part of your feature. On the Dependencies tab, you can gather all the plug-ins or features that you depend on by clicking the Compute button.

A simple feature.xml may look as follows:

<?xml version="1.0" encoding="UTF-8"?>
<feature id="[feature id]" label="[Feature Label]" version="1.0.0">
   <description url="http://www.example.com/description">
      [Enter Feature Description here.]
   </description>
   <copyright url="http://www.example.com/copyright">
      [Enter Copyright Description here.]
   </copyright>
   <license url="http://www.example.com/license">
      [Enter License Description here.]
   </license>
   <requires>
      <import plugin="org.eclipse.ui"/>
      <import plugin="org.eclipse.core.runtime"/>
   </requires>
</feature>


Feature Branding

The feature also provides a single location where you can define all the branding for your application. In the Overview tab, you can assign a Branding Plug-in to the feature.

The branding plug-in needs to contain the following artefacts:

Item Purpose
about.html An HTML file that will be displayed in the Plug-in Details>More Info dialog
about.ini This file contains most of the branding information for the feature described below
about.properties Used for localisation of the strings from the about.ini file. The values are referenced using the %key notation


The about.ini file supports the following properties:

Property Purpose
aboutText Multiline description containing name, version number and copyright information. Will appear in the About>Feature Details>About Features dialog.
featureImage A 32x32 pixel icon representation of the feature to be used across the relevant About dialogs

All of the icons and files referenced by the about.ini file should be placed in this plug-in also.
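As a sketch, an about.ini in the branding plug-in might contain entries like these (the text and icon path are hypothetical placeholders; about.ini follows Java properties syntax, so \n marks line breaks in aboutText):

```ini
# Hypothetical about.ini sketch for a branding plug-in
aboutText=Example Feature\nVersion: 1.0.0\n(c) Example, Inc.
featureImage=icons/feature32.png
```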

Product Branding

A product is an entire distribution of an RCP application, rather than a feature intended to be part of an existing distribution. As such, products have additional branding requirements. To specify these extra parameters, a contribution to the org.eclipse.core.runtime.products extension point is required.

The product must be assigned the application to run, the name of the product (for the title bar) and a description. Further properties are added as name/value pairs underneath the product.
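A product contribution following this pattern might look like the plugin.xml sketch below (the application id, product name, and icon paths are hypothetical):

```xml
<extension point="org.eclipse.core.runtime.products" id="product">
   <!-- Binds the product to the application it runs -->
   <product
         application="com.example.rcp.application"
         name="Example RCP Product">
      <!-- Branding details supplied as name/value properties -->
      <property name="aboutText" value="Example RCP Product 1.0"/>
      <property name="windowImages" value="icons/window16.gif,icons/window32.gif"/>
   </product>
</extension>
```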

Hot Tip

An application can be provided by using the org.eclipse.core.runtime.applications extension point.

Property Purpose
windowImages The images used for this application in windows and dialogs, listed as the 16x16 pixel image followed by the 32x32 pixel image
aboutImage Larger image to be placed in the About dialog
aboutText Multiline description containing name, version number and copyright information. Will appear in the About>Feature Details>About Features dialog.

You can also provide most of these details in the Branding tab of the generated .product file.

Splash Screen

The .product file that is generated while creating your product includes a Splash tab. Here you can specify the plug-in that contains the splash.bmp file for your Splash screen. Typically, this should reside in your branding plug-in. The splash screen can also be customized with templates, and can include a progress bar with messages.


Getting Started with Drupal 7

By Cindy McCourt

23,459 Downloads · Refcard 59 of 204 (see them all)


The Essential Getting Started with Drupal 7 Cheat Sheet

This is an update to DZone's Drupal Refcard, and includes a discussion of Drupal pages, from types to content nodes, as well as instructions for installation. Readers will also find a selection of brief descriptions and useful instructions for managing content, installing modules, and configuring themes.

What is Drupal?

Though Drupal could be described as an open-source content management system, in reality it is much more: a framework upon which web developers can build many types of online experiences, including database-driven websites, document repositories, interactive applications, and more.

A Drupal Page

The word page is often used inconsistently in the Drupal community, but for this guide, I'll use webpage to refer to that which is displayed via a URL path such as /about or /contact-us. By understanding how a webpage is created in Drupal by default, you will better understand the tutorials to follow.


Drupal pages are created by the theme (barring the use of page layout modules) and consist primarily of the following:

  • A header displaying the logo, site name, site slogan, and more.
  • A main menu that includes links to the section landing pages.
  • Multiple regions where you can place blocks of content and other functionality.
  • A content region displaying the page that defines the webpage URL.

Webpage Header

Starting at the top, the content in the header area is managed in multiple locations. The site name and slogan are entered in Site Information, the logo is uploaded in the theme settings, and if you want other features to show in this area, your theme can be customized to accommodate them.

Drupal Menus

Drupal 7 comes with four menus:

  • The Main Menu is typically used to display links to the main sections of your site.
  • The Navigation Menu provides links to create content and might include other links generated by modules.
  • The Management Menu is the black admin bar you will see across the top of your screen.
  • The User Menu includes the My Account and Logout links.

You can create your own menu as well. All menus are available as blocks and can be displayed in page regions.

Page Regions

The quantity, placement, behavior, and style of webpage regions are defined by your theme. The diagrams on this page show sample theme regions. There are countless region configurations if you create your own theme.

Regions can hold one or more blocks. If a block is not present, the region will collapse and become undetectable to the user.


Types of Drupal Pages

For this guide, the word page refers to that which appears in the content region (see diagrams). Pages generate URL paths that, in turn, deliver the webpage. The most common page is the node. The node is your article, your blog post, your event page, and so on. Other types of pages include (but are not limited to) the following.

Default teaser pages. By default, Drupal's homepage displays a teaser list of nodes. It also has a term teaser page, which displays all nodes tagged with a specific taxonomy term.

Views pages. These pages help you query the database and display bits of information about your nodes or other data. Views pages don't come by default with Drupal, but they're usually added when developing a site.

Panel pages. Provided by bundles of page layout modules, Panel pages offer a means of creating custom pages and custom node page layouts.

Webform pages. Webforms are online forms used to collect data from users and export that data for use elsewhere. Examples include a survey, contact form, sign up form, and so on.

Module pages. Module pages vary based on the module. If a module displays content or a form, it will likely come with a URL path for you to use. For example, the search module produces a search results page.

Administrative pages. In order to configure Drupal, you need to use Drupal's administrative pages. If you develop a module, you might create an admin page to configure the module.

Types of Content Nodes

Nodes are created using content types. Content types are Drupal forms used to collect, relate, and display data in the content region of the webpage. Content types have at least five things in common: a title, a body (most of the time), an author, a post date, and a URL path. But you can add more. Two content types come enabled by default: Basic Page and Article. Four content types come preconfigured in core modules and are available to be enabled: Blog Entry, Forum Topic, Book Page, and Poll. You can also create your own content type.

Content Type Description Unique Feature
Basic Page Used when the type of content does not repeat such as About Us and Directions. NA
Article Used when type of content will be added on a regular basis. Includes an image and a tags field. Promoted to the default teaser homepage.
Blog Entry Alternative to Article. Comes with multiple teaser pages and links to user's blog.
Book Used for relating nodes, similar to a table of contents. Unique book navigation on the node and a book navigation block. The Outline feature that creates the relationship works on all content types.
Forum Topic Used to post a discussion topic in a forum. Comes with multiple forum list pages. Engages Drupal Taxonomy system to organize topics into forums.
Poll Ask your visitors a single, multiple-choice question. Choose the number of options for the question and length of availability. Comes with a list page and with most recent poll question displayed.

Getting Started

Install Drupal

After you've planned your site, the next step is to install Drupal on a web server. For detailed documentation on Drupal installation, please visit http://drupal.org/documentation/install. These instructions focus on installing Drupal on an actual web server. If you aren't ready to commit to a hosted solution, you can learn Drupal via a Drupal site installed on your personal computer.

One way to set up a site on your computer is to use Acquia's Dev Desktop. This package will convert your computer into a local web environment and set up your Drupal site in a matter of minutes. Go to https://www.acquia.com/downloads and download and install Dev Desktop just as you would any other software package.

Configure Site Information

1. Click on Configuration in the black admin menu at the top of your screen.

2. Click on Site Information and configure any of the following:

  • Your site name
  • Site slogan (make sure your theme supports slogans)
  • Default email address for the site
  • Number of nodes to display in the homepage teaser
  • Set alternate pages for errors

Create Content

To create either a Basic Page or an Article in Drupal 7:

1. Click Add content in the grey shortcut menu at the top of your screen.

2. Click the type of content you want to create.

3. Complete the form.

4. Save.

Find Content

1. Click on Find Content in the grey shortcut menu.

2. Use the filters to find a specific type of content.

3. Update one or more nodes with the update options.

4. Use the operations to edit or delete a node.

Create a Content Type

In order to create your own content in Drupal 7:

1. Click on Structure in the black admin menu.

2. Click on Content Types.

3. Click on Add Content Type.

4. Complete the form. Note: review each option carefully. There are too many to convey here.

5. Save your content type or save and add fields.

Hot Tip

The configuration options apply to the nodes that the content type will create. When creating the node, you can override many of these settings node-by-node if you have the Administer Content permission.

Add Fields to Content Types

You can add fields to all contents types, even those content types created using a module. The steps to add a field will vary, depending on the field you're adding.

Add New Field

You have two options: 1. Use a field type that comes with Drupal, or 2. Use a field module downloaded from Drupal.org (see the Modules section of this card).

1. Assuming you aren't already in the Manage Fields interface, click on Structure and then Content Types.

2. Click Manage Fields for the content type to which the field will be added.

3. Locate the Add New Field option.



4. Enter a label for your field, type of data, and form element.



Before you save, you have the option to edit the field name.

5. Save.

6. Read the screen that follows, respond accordingly, and click Save Field Settings to be taken to the field configuration interface.

7. Read the configuration options, respond accordingly, and click Save Settings.

Reuse an Existing Field

Drupal comes with an image and term reference field (Tags) for the Article content type. If you want to add an image field to an existing content type or your custom content type, you can reuse the existing image field.


Should you reuse a field? Here are some factors to consider.

  • Will you want to change the field settings from one content type to another? Click edit next to a field and observe the features in the Settings box and the Field settings box. You can change Settings from one content type to another without affecting how that field is used elsewhere. If you change the Field settings options (for example, Number of values), that change will affect all content types that use that field.

  • Will you want to manage access to the field differently between content types? For example, will anonymous users be able to see an image on Article but not on a Book page? Field visibility and permissions are part of the Field settings, are unique to the field, and are provided via the Field Permissions module.

Add a Vocabulary to a Content Type

What is Drupal's Taxonomy?

From a content perspective, Drupal's taxonomy system provides a way to assign descriptive tags to nodes, making it easier for your site's visitors to locate related content.

From a technical perspective, each Drupal site has one taxonomy made up of multiple vocabularies. Each vocabulary contains multiple terms (or tags). Terms can have parent-child relationships with other terms. One term cannot be assigned to two vocabularies.

Create a Vocabulary

1. Click on Structure and then on Taxonomy.

2. Click on Add Vocabulary.

3. Complete the form and save.

4. Add terms to the vocabulary. Note: You can add fields via Manage Fields on the add/edit vocabulary screen.

Add Vocabulary Field

In Drupal 7, a vocabulary is attached to a content type via a field whose type is a term reference. Use the steps covered in the "Add Fields to Content Types" section to add a vocabulary.

The form element options support both single- and multi-select configurations across three types of form elements.


The Autocomplete term widget (tagging) is a free-tagging option. This means you can add terms to your vocabulary as you create your content, rather than being required to use a predefined list of terms.

Alternative to Term Reference Field

Traditional select lists (via the List (text) field) are still an option in Drupal, but the term reference field can be a viable alternative. The List (text) field offers both select list and check box/radio button options.

Which strategy is best for you? Consider the following scenario. You have a data table with multiple columns and sorting enabled on them. You click a column header to sort by the field that holds the vocabulary term. The table rows are reordered based on the alphanumeric values of the term in that field.

But … what if an alphabetical sort isn't what you need? What if you need to control the order in which table rows are displayed? The List (text) field sorts on the value you store in the database. If you allow a person to select 'Three months prior' from the select list but assign the value 1 in the database (1|Three months prior), then rows with this value will appear first. A selection of 'Two months prior' stored as 2 (2|Two months prior) would appear next. And so on.
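Concretely, the allowed values list for such a List (text) field pairs each stored key with its label, one per line (these keys and labels are illustrative):

```text
1|Three months prior
2|Two months prior
3|One month prior
```

Sorting on this field then orders rows by the stored keys 1, 2, 3 rather than alphabetically by label.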

Hot Tip

Once you have used the List (text) field, you cannot edit a value that has been stored in the database. If the value and label are the same, consider the term reference field.

Configure URL Aliases

When a node is created in Drupal, the path is ?q=node/23, where 23 is the NID (node ID). When clean URLs are enabled on the site and available on your server, the default path for a node is /node/23. Neither option means much to the user or to search engines, and neither is very friendly when it comes to defining rules for when a block will appear on a page. Hence, we have URL aliases, which turn /node/23 into something like /my-favorite-hobby.

Drupal comes with the Path module. This module allows you to manually add a URL alias each time you create a node.

The Pathauto module provides a way for you to create aliases via content path patterns. For instance, all article paths might look like this: article/title-of-node, where the title is provided with the [node:title] token. The screen shot below provides some examples.
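For example, a Pathauto pattern for Article content might look like this (the pattern follows from the text above; the node title and resulting alias are illustrative):

```text
Content type: Article
Pattern:      article/[node:title]
Result:       /article/my-favorite-hobby   (for a node titled "My Favorite Hobby";
              Pathauto cleans the title per its punctuation and case settings)
```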


Manage Display of Fields

When you are creating a node, you fill in the fields that are part of the content type form. The order in which those fields appear in the add/edit form is controlled in the same place you add fields: Manage fields.

The order in which the fields appear when the node is viewed by the public is controlled using the content type's Manage Display feature.


You can perform the following tasks.

  • Reorder the fields. Click+hold+drag the + sign next to a field to reposition it.

  • Manage the label display: Above, Inline, or Hidden.

  • Choose a format for the data to be displayed. Not all fields have format options.

  • Refine formatting. Some fields allow you to take formatting a little further. For example, you can choose the image style you want for your Image field (see "Image Styles" later in this card).

Create a Menu

Sometimes you need a list of menu links to appear in a section of your site. For example, a menu called About might include a link to Partners, History, Methodology, and so on. The About menu would be a block that you can place in a region and set to show only on the pages in the About site section.

1. Click on Structure and then on Menus.

2. Click on Add Menu.

3. Provide a menu title (a name for your menu).

4. Save.

5. Add menu items (links to pages) to your new menu.
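Putting the About example together, the finished menu might look like this (the page paths are hypothetical):

```text
About                (menu; rendered as a block you can place in a region)
├── Partners      →  /about/partners
├── History       →  /about/history
└── Methodology   →  /about/methodology
```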

Add a Page to a Menu

Add an Item via Menu Admin

1. Click on Structure and then on Menus.

2. Click Add Link next to the menu of choice.

3. Complete the form. Note: the URL path needs to exist first in order to add it to the menu.

4. Save.

Add an Item via a Module

When you create a node, you can add that node to a menu via the menu tab. You can also add nodes to menus via the menu admin interface.

The Views module creates page displays and a way for you to assign that page to a menu without having to go to the Menu admin pages. The Panels module provides the same type of option.

Some modules will add a page to a menu for you. The Forum module adds the Forums link to the Navigation menu when you enable that module. You can move the link to another menu if you wish by editing the menu item.

Manage Text Formats

Drupal 7 comes with three text formats: Plain text, Filtered HTML, and Full HTML. All roles on the site have access to plain text. You can select which roles utilize the HTML format options.


When text and HTML markup are added to a field configured to allow HTML, Drupal collects that content and stores it. When the node is viewed, Drupal looks at the text format assigned to the node and sends only the markup that the text format allows. For example, the Filtered HTML format does not include the table tags among its approved HTML tags. If a user creates a node that includes a table, the table will not show. You have two options:

1. Add the table tags to the list of allowed HTML tags by configuring the Filtered HTML format.

2. Give your users permission to choose Full HTML. Be careful with this, as it gives your users a lot of power.
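For option 1, editing the Filtered HTML format amounts to extending its allowed-tags list. The default Drupal 7 list is similar to the following; appending the table tags is the change (verify the exact list against your own site's configuration):

```text
Allowed HTML tags:
<a> <em> <strong> <cite> <blockquote> <code> <ul> <ol> <li> <dl> <dt> <dd>
<table> <thead> <tbody> <tr> <th> <td>
```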

To manage text formats, go to Configuration > Text formats.

Add an HTML Editor

By default, Drupal does not come with an HTML editor installed. You can add one in either of two ways:

1. Use the WYSIWYG module and add an editor library such as CKEditor, TinyMCE, or others. You can add as many as you like. Assign the editor to a text format.

2. Use a module dedicated to one particular editor such as the CKEditor module.

The editor's buttons should match the tags permitted by the text format to which the editor is assigned.

Adding Images

There are many ways to put an image on a page in Drupal. Below are four options to get you started.

Image Field

The advantage of this approach is the ability to reuse the images you upload. For example, you can create a page listing your nodes and include a thumbnail of the image associated with the node. The original image would be used but styled to fit using an image style.

Image field. Upload an image via a field and display the image from the field.

Image field and Insert. Upload an image via a field, hide the image from displaying via the field, and use the Insert module to insert the image into the body of the node.

HTML Editor Image Editor

The advantage of this approach is usability. If you are not going to reuse the image, sometimes it's easier for users to click the image icon on an editor bar.

HTML Editor. Install an HTML editor and use the Image editor button to add a link to an image that will be embedded in the body. Images can be uploaded manually to the server, or you can link to an image via a URL.

HTML Editor and IMCE. Install an HTML Editor and the IMCE module. IMCE adds a browse option allowing you to upload an image to the server via the Image editor button.

Create an Image Style

When a user uploads an image with the Image field, you can't always predict the size of the image. With an image style, you can manage the display of the image field such that all images have a specific width and/or height.

1. Click on Configuration and then on Image Styles.

2. Click on Add Style.

3. Give the style a name. Hint: Names like icon, thumbnail or small don't convey much about the style. A name like scale_100w says this style will scale the image (versus resize) and it will keep the image from distorting because the height is allowed to be whatever it needs to be after the width is set to 100 pixels.

4. Add one or more effects to the style to match the style name.

5. Update the style.

This style is now available for use in Manage Display and other modules that use image styles such as Insert, Views, and Panels.
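As a sketch, the scale_100w style described above would be defined roughly like this (the effect settings shown are illustrative):

```text
Image style: scale_100w
Effect:      Scale
             width:   100  (pixels)
             height:  (left blank, so it is calculated to preserve the aspect ratio)
             upscale: off
```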


Find and Choose Modules

Modules are bundles of code that add new functionality to Drupal or modify existing functionality. Contributed modules can be found at http://drupal.org/project/modules. A couple of things to know about modules:

Dependencies. Modules can be dependent on other modules. Therefore you might need to install multiple modules to get the functionality you want.

Status. Module project pages have status information to help you decide if the module is supported. There are too many indicators to discuss here.

Install a Module

The process of installing a module will depend on your server environment. Each module contains a readme.txt and/or install.txt file with instructions on how to install that particular module. The general steps are:

1. Download the module tar.gz or zip file.

2. Upload the file to /sites/all/modules. You might need to create the 'all' and 'modules' directories.

3. Unpack or unzip the file. Note: If you are using FTP, you might need to reverse steps 2 and 3.
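If you have shell access, the same steps can be sketched as commands. The release file name below (pathauto-7.x-1.2.tar.gz) is a placeholder for whatever module you download from drupal.org:

```shell
# From the Drupal root: create the contributed-modules directory if needed.
mkdir -p sites/all/modules

# Download the release from the module's drupal.org project page, then
# unpack it into place (commented out because the file name is a placeholder):
# tar -xzf pathauto-7.x-1.2.tar.gz -C sites/all/modules

# The unpacked module directory should now sit alongside any others:
ls sites/all
```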

Enable a Module

You can enable a core, contributed, or custom module (such as Blog, Forum, or Poll) with the following steps:

1. Click on Modules in the black admin menu bar.

2. Check the box for the module(s) you want.

3. Save the configuration.

Hot Tip

Drupal will remind you if you didn't enable a module's dependencies. When it does, click Continue and Drupal will do the rest.

Configure a Module

Modules vary significantly, so it's impossible to convey one process for module configuration. To learn more about a particular module's configuration, consider the following:

  • Review the readme.txt file that comes with the module.
  • Check the module's project page on Drupal.org.
  • Search for online tutorials for the module.
  • Ask for support in the module's issue queue.


Find a Theme

You learned at the start of the reference card that the theme controls your page layout. Go to http://drupal.org/project/themes to locate a theme you want to use (assuming you aren't building your own theme).

Dependencies. There are standalone themes, base themes, and subthemes. Subthemes depend on base themes.

Status. Theme project pages have status information to help you decide if the theme is supported. There are too many indicators to discuss here.

Install a Theme

The process of installing a theme will depend on your server environment. Each theme contains a readme.txt and/or install.txt file with instructions on how to install that particular theme. The general steps are below.

1. Download the theme tar.gz or zip file.

2. Upload the file to /sites/all/themes. You might need to create the 'all' and 'themes' directories.

3. Unpack or unzip the file. Note: If you are using FTP, you might need to reverse steps 2 and 3.

Enable a Theme

You can enable a core, contributed, or custom theme such as Garland, Stark, Danland, and so on with the following steps.

1. Click on Appearance in the black admin menu bar.

2. Locate the theme you want under Disabled Themes.

3. Click on Enable and set default. Alternatively, you can enable a theme without setting it as the default, giving you the option to configure a change to your current site without anyone noticing until you are ready.

Configure a Theme

Themes vary, but there are some consistencies regarding configuration. Each theme will have the following.

  • Toggle display options. These options allow you to enable and disable features such as the logo, menus, user pictures, search, and more.
  • Logo image settings. Note: the image will need to be the right size before you upload.
  • Shortcut icon settings. This is also known as the favicon.
  • Some themes will have extensive configuration options available to you. Omega, Fusion, AdaptiveTheme, to name a few, are base themes that add significant configuration options in the admin interface.

Configure Blocks

Once you have your theme enabled, you are ready to start configuring the blocks for your web pages. There are three decisions you need to make when configuring blocks.

1. In which region will the block appear?

2. Where in that region will the block appear?

3. When should the block appear?

Choose a Region

There are two ways to choose a region for your block.

1. Click on Structure and then on Blocks.

2. Locate the block you want to position.

3. Either use the Region dropdown or click Configure.

4. If you click Configure, you will be given the option to set the block for all enabled themes at once.

5. If you use the Region dropdown, remember to save your changes.

Choose a Location in the Region

If you have multiple blocks in one region, in what order should they appear? Place the blocks with the following steps.

1. Click on Structure and then on Blocks.

2. Locate the block you want to position.

3. Click+hold+drag the + next to the block to place the block.

4. Save.

Designate When the Block Should Appear

There are three conditions to choose from: Pages, Content types, and Roles.


Pages. When using the Pages option, the condition is based on the URL path. If you declare that all Article nodes will have a path that begins with article, you can set a block to appear on every article node by setting the Pages visibility condition to the following.

1. Only on the listed pages.

2. article/* Note: This means that all pages whose URL path begins with mysitename.com/article will show this block.

Content types. If you have a block that should appear on nodes of a specific type, regardless of their path, this is the condition you need.

Roles. If you want to control who sees the block, select the role(s). If you don't make any selection, all roles are assumed.

You can use one or more of the conditions at a time. For instance, if Pages is … and Content type is … and Roles is …, then show block.

Managing Roles and Permissions

Throughout this Refcard, I've discussed two roles: anonymous and authenticated. In Drupal 7, there is also a third: administrator.

  • Anonymous refers to those not logged into your site.
  • Authenticated refers to those logged into your site.
  • Administrator can do anything on your site by default.

The first user on a Drupal site (user/1) cannot be stripped of administrative rights; this user is all-powerful. Use this account with care.

Create a Role

1. Click on People, then Permissions, then Roles.

2. Type the name of the role and click Add Role.

Set Permissions for Roles

1. Click on the Permissions tab.

2. Check the permission setting for each role.

3. Save permissions.

Configure User Account Settings

You have several configuration options regarding user accounts.

1. Click on Configuration in the black admin menu and then click on Account settings.

2. Review and edit your options to meet your needs.

Included in the account settings admin page are the emails that the system sends out. You can customize these, but remember these apply to all communications versus those that might be associated with one particular event on your site.

Add a User

1. Click on People.

2. Click on Add User.

3. Complete the form. Note: if you want to collect more information from the user, you can add fields with the Manage Fields tab.

4. Save.

About The Author


Cindy McCourt

Cindy McCourt has written technical papers and instructions on many topics for governmental and private-sector clients. She maintains a close relationship with the Drupal community through her blog at http://idcminnovations.com, by speaking at Drupal events, and by offering Drupal training. She is also a partner at Acquia, the company co-founded by Drupal creator Dries Buytaert.

Recommended Book


Drupal allows you to quickly and easily build a wide variety of web sites, from very simple blog sites to extremely complex sites that integrate with other systems. In order to maximize what Drupal can do for you, you need to plan. Whether you are building with Drupal 6 or 7, this book details the steps necessary to plan your site so you can make informed decisions before you start to build.


JavaServer Faces 2.0

By Cay Horstmann

39,860 Downloads · Refcard 58 of 204


The Essential JavaServer Faces 2.0 Cheat Sheet

JavaServer Faces (JSF) is the “official” component-based view technology in the Java EE web tier. This JSF 2.0 DZone Refcard is perfect for both new and seasoned JSF developers. JSF 2.0, the long-awaited successor to JSF 1.x, brings exciting new features: less boilerplate code, better error handling, built-in Ajax, and more. This Refcard also provides updated summaries of the tags and attributes needed for JSF programming, along with a summary of the JSF expression language and a list of code snippets for common operations.

JSF Overview

JavaServer Faces (JSF) is the “official” component-based view technology in the Java EE web tier. JSF includes a set of predefined UI components, an event-driven programming model, and the ability to add third-party components. JSF is designed to be extensible, easy to use, and toolable. This refcard describes JSF 2.0.

Development Process

A developer specifies JSF components in JSF pages, combining JSF component tags with HTML and CSS for styling. Components are linked with managed beans—Java classes that contain presentation logic and connect to business logic and persistence backends.

JSF framework

In JSF 2.0, it is recommended that you use the Facelets format for your pages:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://java.sun.com/jsf/core">

These common tasks give you a crash course into using JSF.

Text Field

<h:inputText value="#{bean1.luckyNumber}"/>


public class SampleBean {
   public int getLuckyNumber() { ... }
   public void setLuckyNumber(int value) { ... }
}


Press Button


<h:commandButton value="press me" action="#{bean1.login}"/>


public class SampleBean {
   public String login() {
     if (...) return "success"; else return "error";
   }
}

The outcomes success and error can be mapped to pages in faces-config.xml. If no mapping is specified, the page /success.xhtml or /error.xhtml is displayed.
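For reference, such a navigation rule in faces-config.xml maps an outcome to a page. The view IDs below (/login.xhtml, /welcome.xhtml) are illustrative, not taken from the refcard:

```xml
<navigation-rule>
  <from-view-id>/login.xhtml</from-view-id>
  <navigation-case>
    <from-outcome>success</from-outcome>
    <to-view-id>/welcome.xhtml</to-view-id>
  </navigation-case>
</navigation-rule>
```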

Radio Buttons



<h:selectOneRadio value="#{form.condiment}">
  <f:selectItems value="#{form.items}"/>
</h:selectOneRadio>


public class SampleBean {
  private static Map<String, Object> items;
  static {
    items = new LinkedHashMap<String, Object>();
    items.put("Cheese", 1); // label, value
    items.put("Pickle", 2);
  }
  public Map<String, Object> getItems() { return items; }
  public int getCondiment() { ... }
  public void setCondiment(int value) { ... }
}


Validation and Conversion

Page-level validation and conversion:

<h:inputText value="#{bean1.amount}" required="true">
  <f:validateDoubleRange maximum="1000"/>
</h:inputText>
<h:outputText value="#{bean1.amount}">
  <f:convertNumber type="currency"/>
</h:outputText>

The number is displayed with a currency symbol and group separators: $1,000.00

Using the Bean Validation Framework (JSR 303) 2.0

public class SampleBean {
  @Max(1000) private BigDecimal amount;
}

Error Messages


<h:outputText value="Amount"/>
<h:inputText id="amount" label="Amount" value="#{payment.amount}"/>
<h:message for="amount"/>

Resources and Styles


<h:outputStylesheet library="css" name="styles.css" target="head"/>
<h:outputText value="#{msgs.goodbye}!" styleClass="greeting"/>






The corresponding entry in the message bundle (referenced above as msgs):

goodbye=Auf Wiedersehen


.greeting {
  font-style: italic;
  font-size: 1.5em;
  color: #eee;
}

Table with links


<h:dataTable value="#{bean1.entries}" var="row" styleClass="table">
  <h:column>
    <f:facet name="header">
      <h:outputText value="Name"/>
    </f:facet>
    <h:outputText value="#{row.name}"/>
  </h:column>
  <h:column>
    <h:commandLink value="Delete" action="#{bean1.deleteAction}">
      <f:setPropertyActionListener target="#{bean1.idToDelete}"
        value="#{row.id}"/>
    </h:commandLink>
  </h:column>
</h:dataTable>


public class SampleBean {
  private int idToDelete;
  public void setIdToDelete(int value) { idToDelete = value; }
  public String deleteAction() {
    // delete the entry whose id is idToDelete
    return null;
  }
  public List<Entry> getEntries() { ... }
}

Ajax 2.0

<h:commandButton value="Update">
  <f:ajax execute="@form" render="@form"/>
</h:commandButton>



The JSF Expression Language

An EL expression is a sequence of literal strings and expressions of the form base[expr1][expr2]... As in JavaScript, you can write base.identifier instead of base['identifier'] or base["identifier"]. The base is one of the names in the table below or a bean name.

header A Map of HTTP header parameters, containing only the first value for each name
headerValues A Map of HTTP header parameters, yielding a String[] array of all values for a given name
param A Map of HTTP request parameters, containing only the first value for each name
paramValues A Map of HTTP request parameters, yielding a String[] array of all values for a given name
cookie A Map of the cookie names and values of the current request
initParam A Map of the initialization parameters of this web application
applicationScope, sessionScope, requestScope, viewScope 2.0 A Map of all attributes in the given scope
facesContext The FacesContext instance of this request
view The UIViewRoot instance of this request
component 2.0 The current component
cc 2.0 The composite component currently being processed
resource 2.0 Use resource['library:name'] to access a resource

Value expression: a reference to a bean property or an entry in a map, list or array. Examples:

userBean.name calls getName or setName on the userBean object
pizza.choices[var] calls pizza.getChoices().get(var) or pizza.getChoices().put(var, ...)

Method expression: a reference to a method and the object on which it is to be invoked. Example:
userBean.login calls the login method on the userBean object when it is invoked. 2.0: Method expressions can contain parameters: userBean.login('order page')

In JSF, EL expressions are enclosed in #{...} to indicate deferred evaluation. The expression is stored as a string and evaluated when needed. In contrast, JSP uses immediate evaluation, indicated by ${...} delimiters.

2.0: EL expressions can contain JSTL functions

fn:contains(str, substr)
fn:containsIgnoreCase(str, substr)
fn:startsWith(str, substr)
fn:endsWith(str, substr)
fn:join(strArray, separator)
fn:split(str, separator)
fn:substring(str, start, pastEnd)
fn:substringAfter(str, separator)
fn:substringBefore(str, separator)
fn:replace(str, substr, replacement)

JSF Core Tags

Tag Description/Attributes
f:facet Adds a facet to a component - name: the name of this facet
f:attribute Adds an attribute to a component - name, value: the name and value of the attribute to set
f:param Constructs a parameter child component
- name: An optional name for this parameter component
- value: The value stored in this component
f:actionListener, f:valueChangeListener
Adds an action listener or value change listener to a component
- type: The name of the listener class
f:propertyActionListener 1.2
Adds an action listener to a component that sets a bean property
to a given value
- target: The bean property to set when the action event occurs
- value: The value to set it to
f:phaseListener 1.2
Adds a phase listener to this page
- type: The name of the listener class
f:event 2.0
Adds a system event listener to a component
- name: One of preRenderComponent, postAddToView,
  preValidate, postValidate
- listener: A method expression of the type
  void (ComponentSystemEvent) throws AbortProcessingException
f:converter Adds an arbitrary converter to a component
- converterId: The ID of the converter
f:convertDateTime Adds a datetime converter to a component
- type: date (default), time, or both
- dateStyle, timeStyle: default, short, medium, long or full
- pattern: Formatting pattern, as defined in java.text.SimpleDateFormat
- locale: Locale whose preferences are to be used for parsing
  and formatting
- timeZone: Time zone to use for parsing and formatting
  (Default: UTC)
f:convertNumber Adds a number converter to a component
- type: number (default), currency , or percent
- pattern: Formatting pattern, as defined in java.text.DecimalFormat
- minIntegerDigits, maxIntegerDigits,
  minFractionDigits, maxFractionDigits: Minimum,
  maximum number of digits in the integer and fractional part
- integerOnly: True if only the integer part is parsed (default: false)
- groupingUsed: True if grouping separators are used (default: true)
- locale: Locale whose preferences are to be used for parsing
  and formatting
- currencyCode: ISO 4217 currency code to use when
  converting currency values
- currencySymbol: Currency symbol to use when converting
  currency values
f:validator Adds a validator to a component
- validatorId: The ID of the validator
f:validateDoubleRange, f:validateLongRange, f:validateLength
Validates a double or long value, or the length of a string
- minimum, maximum: the minimum and maximum of the valid range
f:validateRequired 2.0 Sets the required attribute of the enclosing component
f:validateBean 2.0
Specify validation groups for the Bean Validation Framework
(JSR 303)
f:loadBundle Loads a resource bundle, stores properties as a Map
- basename: the resource bundle name
- value: The name of the variable that is bound to the bundle map
f:selectItem Specifies an item for a select one or select many component
- binding, id: Basic attributes
- itemDescription: Description used by tools only
- itemDisabled: false (default) to show the value
- itemLabel: Text shown by the item
- itemValue: Item’s value, which is passed to the server as a
  request parameter
- value: Value expression that points to a SelectItem instance
- escape: true (default) if special characters should be converted
  to HTML entities
- noSelectionOption 2.0: true if this item is the "no selection option" item
f:selectItems Specifies items for a select one or select many component
- value: Value expression that points to a SelectItem, an array
  or Collection, or a Map mapping labels to values.
- var 2.0: Variable name used in value expressions when
  traversing an array or collection of non-SelectItem elements
- itemLabel 2.0, itemValue 2.0, itemDescription 2.0,
  itemDisabled 2.0, itemLabelEscaped 2.0: Item label,
  value, description, disabled and escaped flags for each item in
  an array or collection of non-SelectItem elements. Use the
  variable name defined in var.
- noSelectionOption 2.0: Value expression that yields the “no
  selection option” item or string that equals the value of the “no
  selection option” item
f:ajax 2.0
Enables Ajax behavior
- execute, render: Lists of component IDs for processing in the
  “execute” and “render” lifecycle phases
- event: JavaScript event that triggers behavior. Default: click
  for buttons and links, change for input components
- immediate: If true, generated events are broadcast during
  “Apply Request Values” phase instead of “Invoke Application”
- listener: Method binding of type void (AjaxBehaviorEvent)
- onevent, onerror: JavaScript handlers for events/errors
f:viewParam 2.0 Defines a "view parameter" that can be initialized with a request parameter
- name, value: the name of the parameter to set and its value
- binding, converter, id, required, value, validator, valueChangeListener: basic attributes
f:metadata 2.0 Holds view parameters. May hold other metadata in the future


JSF HTML Tags

Tag Description
h:head 2.0, h:body 2.0, h:form HTML head, body, form
h:outputStylesheet 2.0, h:outputScript 2.0 Produces a style sheet or script
h:inputText Single-line text input control
h:inputTextarea Multiline text input control
h:inputSecret Password input control
h:inputHidden Hidden field
h:outputLabel Label for another component, for accessibility
h:outputLink HTML anchor
h:outputFormat Like outputText, but formats compound messages
h:outputText Single-line text output
h:commandButton, h:button 2.0 Button: submit, reset, or pushbutton
h:commandLink, h:link 2.0 Link that acts like a pushbutton
h:message Displays the most recent message for a component
h:messages Displays all messages
h:graphicImage Displays an image
h:selectOneListbox Single-select listbox
h:selectOneMenu Single-select menu
h:selectOneRadio Set of radio buttons
h:selectBooleanCheckbox Checkbox
h:selectManyCheckbox Set of checkboxes
h:selectManyListbox Multiselect listbox
h:selectManyMenu Multiselect menu
h:panelGrid HTML table
h:panelGroup Two or more components that are laid out as one
h:dataTable A feature-rich table component
h:column Column in a data table

Basic Attributes

id Identifier for a component
binding Reference to the component that can be used in a backing bean
rendered A boolean; false suppresses rendering
value A component’s value, typically a value binding
valueChangeListener A method expression of the type void (ValueChangeEvent)
converter, validator Converter or validator class name
required A boolean; if true, requires a value to be entered in the associated field

Attributes for h:body and h:form

Attribute Description
binding, id, rendered Basic attributes
dir, lang, style, styleClass, target, title (h:form only: accept, acceptcharset, enctype) HTML 4.0 attributes (acceptcharset corresponds to HTML accept-charset, styleClass corresponds to HTML class)
onclick, ondblclick, onkeydown, onkeypress, onkeyup, onmousedown, onmousemove, onmouseout, onmouseover, onmouseup (h:body only: onload, onunload; h:form only: onblur, onchange, onfocus, onreset, onsubmit) DHTML events

Attributes for h:inputText, h:inputSecret,
h:inputTextarea, and h:inputHidden

Attribute Description
cols For h:inputTextarea only: number of columns
immediate Process validation early in the life cycle
redisplay For h:inputSecret only: when true, the input field's value is redisplayed when the web page is reloaded
required Require input in the component when the form is submitted
rows For h:inputTextarea only: number of rows
binding, converter, id, rendered, required, value, validator, valueChangeListener Basic attributes
accesskey, alt, dir, disabled, lang, maxlength, readonly, size, style, styleClass, tabindex, title HTML 4.0 pass-through attributes; alt, maxlength, and size do not apply to h:inputTextarea, and none apply to h:inputHidden
onblur, onchange, onclick, ondblclick, onfocus, onkeydown, onkeypress, onkeyup, onmousedown, onmousemove, onmouseout, onmouseover, onselect DHTML events; none apply to h:inputHidden

Attributes for h:outputText and h:outputFormat

Attribute Description
escape If set to true, escapes <, >, and & characters. Default value is true
binding, converter, id, rendered, value Basic attributes
style, title HTML 4.0

Attributes for h:outputLabel

Attribute Description
for The ID of the component to be labeled.
binding, converter, id, rendered, value Basic attributes

Attributes for h:graphicImage

Attribute Description
library 2.0, name 2.0 The resource library (subdirectory of resources) and file name (in that subdirectory)
binding, id, rendered, value Basic attributes
alt, dir, height, ismap, lang, longdesc, style, styleClass, title, url, usemap, width HTML 4.0
onblur, onchange, onclick, ondblclick, onfocus, onkeydown, onkeypress, onkeyup, onmousedown, onmousemove, onmouseout, onmouseover, onmouseup DHTML events

Attributes for h:commandButton and h:commandLink

Attribute Description
action (command tags) Navigation outcome string, or method expression of type String ()
outcome 2.0 (non-command tags) Value expression yielding the navigation outcome
fragment 2.0 (non-command tags) Fragment to be appended to the URL. Don't include the # separator
actionListener Method expression of type void (ActionEvent)
charset For h:commandLink only: the character encoding of the linked reference
image For h:commandButton only: a context-relative path to an image displayed in the button. If you specify this attribute, the HTML input's type will be image
immediate A boolean. If false (the default), actions and action listeners are invoked at the end of the request life cycle; if true, they are invoked at the beginning of the life cycle
type For h:commandButton: the type of the generated input element (button, submit, or reset; the default, unless you specify the image attribute, is submit). For h:commandLink and h:link: the content type of the linked resource, for example text/html, image/gif, or audio/basic
value The label displayed by the button or link
binding, id, rendered Basic attributes
accesskey, dir, disabled (h:commandButton only), lang, readonly, style, styleClass, tabindex, title (link tags only: charset, coords, hreflang, rel, rev, shape, target) HTML 4.0
onblur, onclick, ondblclick, onfocus, onkeydown, onkeypress, onkeyup, onmousedown, onmousemove, onmouseout, onmouseover, onmouseup DHTML events

Attributes for h:outputLink

Attribute Description
accesskey, binding, converter, id, lang, rendered, value Basic attributes
charset, coords, dir, hreflang, lang, rel, rev, shape, style, styleClass, tabindex, target, title, type HTML 4.0
onblur, onchange, onclick, ondblclick, onfocus, onkeydown, onkeypress, onkeyup, onmousedown, onmousemove, onmouseout, onmouseover, onmouseup DHTML events

Attributes for: h:selectBooleanCheckbox,
h:selectManyCheckbox, h:selectOneRadio,
h:selectOneListbox, h:selectManyListbox,
h:selectOneMenu, h:selectManyMenu

Attribute Description
enabledClass, disabledClass CSS class for enabled/disabled elements—h:selectOneRadio and h:selectManyCheckbox only
selectedClass 2.0, unselectedClass 2.0 CSS class for selected/unselected elements—h:selectManyCheckbox only
layout Specification for how elements are laid out: lineDirection (horizontal) or pageDirection (vertical)—h:selectOneRadio and h:selectManyCheckbox only
collectionType 2.0 selectMany tags only: the name of a collection class such as java.util.TreeSet
hideNoSelectionOption 2.0 Hide the item marked as the “no selection option”
binding, converter, id, immediate, required, rendered, validator, value, valueChangeListener Basic attributes
accesskey, border, dir, disabled, lang, readonly, style, styleClass, size, tabindex, title HTML 4.0—border only for h:selectOneRadio and h:selectManyCheckbox, size only for h:selectOneListbox and h:selectManyListbox
onblur, onchange, onclick,
ondblclick, onfocus, onkeydown,
onkeypress, onkeyup, onmousedown,
onmousemove, onmouseout,
onmouseover, onmouseup, onselect DHTML events

Attributes for h:message and h:messages

Attribute Description
for The ID of the component whose message is displayed—applicable only to h:message
errorClass, fatalClass, infoClass, warnClass CSS class applied to error/fatal/information/warning messages
errorStyle, fatalStyle, infoStyle, warnStyle CSS style applied to error/fatal/information/warning messages
globalOnly Instruction to display only global messages—h:messages only. Default: false
layout Specification for message layout: table or list—h:messages only
showDetail A boolean that determines whether message details are shown. Defaults are false for h:messages, true for h:message
showSummary A boolean that determines whether message summaries are shown. Defaults are true for h:messages, false for h:message
tooltip A boolean that determines whether message details are rendered in a tooltip; the tooltip is only rendered if showDetail and showSummary are true
binding, id, rendered Basic attributes
style, styleClass, title HTML 4.0

Attributes for h:panelGrid

Attribute Description
bgcolor Background color for the table
border Width of the table’s border
cellpadding Padding around table cells
cellspacing Spacing between table cells
columns Number of columns in the table
frame Specification for sides of the frame surrounding
the table that are to be drawn; valid values: none,
above, below, hsides, vsides, lhs, rhs, box, border
headerClass, footerClass CSS class for the table header/footer
rowClasses, columnClasses Comma-separated list of CSS classes for rows/columns
rules Specification for lines drawn between cells; valid values: groups, rows, columns, all
summary Summary of the table’s purpose and structure used for non-visual feedback such as speech
binding, id, rendered, value Basic attributes
dir, lang, style, styleClass, title, width HTML 4.0
onclick, ondblclick,
onkeydown, onkeypress,
onkeyup, onmousedown,
onmousemove, onmouseout,
onmouseover, onmouseup
DHTML events

Attributes for h:panelGroup

Attribute Description
binding, id, rendered Basic attributes
style, styleClass HTML 4.0

Attributes for h:dataTable

Attribute Description
bgcolor Background color for the table
border Width of the table’s border
cellpadding Padding around table cells
cellspacing Spacing between table cells
first Index of the first row shown in the table
frame Specification for sides of the frame surrounding the table that are to be drawn; valid values: none, above, below, hsides, vsides, lhs, rhs, box, border
headerClass, footerClass CSS class for the table header/footer
rowClasses, columnClasses Comma-separated list of CSS classes for rows/columns
rules Specification for lines drawn between cells; valid values: groups, rows, columns, all
summary Summary of the table’s purpose and structure used for non-visual feedback such as speech
var The name of the variable created by the data table that represents the current item in the value
binding, id, rendered, value Basic attributes
dir, lang, style, styleClass, title, width HTML 4.0
onclick, ondblclick,
onkeydown, onkeypress,
onkeyup, onmousedown,
onmousemove, onmouseout,
onmouseover, onmouseup
DHTML events

Attributes for h:column

Attribute Description
headerClass 1.2, footerClass 1.2 CSS class for the column’s header/footer
binding, id, rendered Basic attributes

Facelets ui: Tags

Tag Description
ui:define Give a name to content for use in a template
  • name: the name of the content
ui:insert If a name is given, insert the named content if defined, or use the child elements otherwise. If no name is given, insert the content of the tag invoking the template
  • name: the name of the content
ui:composition Produces content from a template after processing child elements (typically ui:define tags). Everything outside the ui:composition tag is ignored
  • template: the template file, relative to the current page
ui:component Like ui:composition, but makes a JSF component
  • binding, id: basic attributes
ui:decorate Like ui:composition, but the content outside the tag is not ignored
ui:fragment Like ui:component, but the content outside the tag is not ignored
ui:include Include plain XHTML, or a file with a ui:composition or ui:component tag
  • src: the file to include, relative to the current page
ui:param Define a parameter to be used in an included file or template
  • name: parameter name
  • value: a value expression (can yield an arbitrary object)
ui:repeat Repeats the enclosed elements
  • value: a List, array, ResultSet, or object
  • offset, step, size: starting index, step size, ending index of the iteration
  • var: variable name to access the current element
  • varStatus: variable name to access the iteration status, with integer properties begin, end, index, step and Boolean properties even, odd, first, last
ui:debug Shows debug info when Ctrl+Shift+hotkey is pressed
  • hotkey: the key to press (default d)
  • rendered: true (default) to activate
ui:remove Do not include the contents (useful for comments or temporarily deactivating a part of a page)

About The Author

Photo of Cay S Horstmann

Cay S. Horstmann

has written many books on C++, Java and object-oriented development, is the series editor for Core Books at Prentice-Hall and a frequent speaker at computer industry conferences. For four years, Cay was VP and CTO of an Internet startup that went from 3 people in a tiny office to a public company. He is now a computer science professor at San Jose State University. He was elected Java Champion in 2005.

Cay Horstmann’s Java Blog


Cay Horstmann’s Website


Recommended Book


Core JavaServer Faces delves into all facets of JSF development, offering systematic best practices for building robust applications and maximizing developer productivity.


Getting Started with Spring-DM

By Craig Walls

22,998 Downloads · Refcard 57 of 204 (see them all)


The Essential Spring-DM Cheat Sheet

Spring is a framework that promotes development of loosely coupled/highly cohesive objects through dependency injection and interface-oriented design. Spring Dynamic Modules (Spring-DM) brings Spring and OSGi together to enable a declarative service model for OSGi that leverages Spring’s power of dependency injection. This DZone Refcard shows you how to use Spring-DM to wire together OSGi services to build highly modular and dynamic applications. You may also enjoy the other Spring DZone Refcardz in our Collection: Spring Configuration and Spring Annotations.
Getting Started with Spring-DM

By Craig Walls

About Spring-DM

Spring is a framework that promotes development of loosely-coupled/highly-cohesive objects through dependency injection and interface-oriented design. OSGi is a framework specification that promotes development of loosely-coupled/highly-cohesive application modules through services and interface-oriented design. Seems like a match made in heaven! Spring Dynamic Modules (Spring-DM) brings Spring and OSGi together to enable a declarative service model for OSGi that leverages Spring’s power of dependency injection. This reference card will be your resource for working with Spring-DM to wire together OSGi services and ultimately build modular applications.

You may be interested to know that Spring-DM is the basis for the SpringSource dm Server, a next-generation application server that embraces modularity through OSGi. What’s more, the upcoming OSGi R4.2 specification includes a component model known as the OSGi Blueprint Services that is heavily influenced by Spring-DM.

Introducing Spring-DM

The star player of Spring-DM is a bundle known as the Spring-DM extender. The Spring-DM extender watches for bundles to be installed and inspects them to see if they are Spring-enabled (that is, if they contain a Spring application context definition file). When it finds a Spring-enabled bundle, the extender creates a Spring application context for the bundle.

Spring Application Contexts

Spring-DM also provides a Spring configuration namespace that enables you to declare and publish Spring beans as OSGi services and to consume OSGi services as if they were just beans in a Spring application context. This declarative model effectively eliminates the need to work with the OSGi API directly.

Installing Spring-DM

One of the nice things about Spring-DM is that you do not need to include it in the classpath of your OSGi bundles or even reference it from those bundles. Installing Spring-DM involves two parts:

  1. Installing the Spring-DM and supporting bundles in your OSGi framework
  2. Adding the Spring-DM configuration namespace to your bundle’s Spring configuration XML files

You can download Spring-DM from
http://www.springframework.org/osgi. The distribution comes complete with everything you need to work with Spring-DM, including the Spring-DM extender bundle and all of its dependency bundles.

Installing the Spring-DM extender bundles

There are several means by which you can install bundles into an OSGi framework, depending on the OSGi framework and any add-ons or tools you may be using. But the most basic way is to use the “install” command that is available in most OSGi framework shells. For example, to install the Spring-DM extender bundle and the supporting Spring-DM bundles (assuming that you’ve unzipped the Spring-DM distribution in /spring-dm-1.2.0):

osgi> install file:///spring-dm-1.2.0/dist/spring-osgi-core-
osgi> install file:///spring-dm-1.2.0/dist/spring-osgi-extender-
osgi> install file:///spring-dm-1.2.0/dist/spring-osgi-io-

Spring-DM depends on the Spring framework, so you’ll also need to install several other Spring bundles:


osgi> install file:///spring-dm-1.2.0/lib/spring-aop-2.5.6.A.jar
osgi> install file:///spring-dm-1.2.0/lib/spring-context-
osgi> install file:///spring-dm-1.2.0/lib/spring-core-2.5.6.A.jar
osgi> install file:///spring-dm-1.2.0/lib/spring-beans-2.5.6.A.jar

Finally, you’ll also need to install several other supporting bundles that Spring and Spring-DM depend on:

osgi> install file:///spring-dm-1.2.0/lib/com.springsource.net.
osgi> install file:///spring-dm-1.2.0/lib/com.springsource.org.
osgi> install file:///spring-dm-1.2.0/lib/com.springsource.slf4j.
osgi> install file:///spring-dm-1.2.0/lib/com.springsource.slf4j.
osgi> install file:///spring-dm-1.2.0/lib/com.springsource.slf4j.
osgi> install file:///spring-dm-1.2.0/lib/log4j.osgi-1.2.15-

Hot Tip

Use tools to help install bundles
Installing bundles using the “install” command should work with almost any OSGi framework, but it is also quite a manual process. Pax Runner (http://paxrunner.ops4j.org) is an OSGi framework launcher that takes a lot of the tedium out of installing bundles. Just use Pax Runner’s “spring dm” profile:

% pax-run.sh --profiles=spring.dm

The Spring-DM configuration namespace

Schema URI:


Schema XSD:


When it comes to declaring services and service consumers in Spring-DM, you’ll use Spring-DM’s core namespace. To do that, you’ll need to include the namespace in the XML file.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"

Spring’s “beans” namespace is the default namespace, but if you know that most or all of the elements in the Spring configuration file will be from the Spring-DM namespace, you can make it the default namespace:

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns="http://www.springframework.org/schema/osgi"
    xmlns:beans="http://www.springframework.org/schema/beans">

Publishing Services

To demonstrate Spring-DM’s capabilities, we’re going to create a few OSGi services that translate English text into some other language. All of these services will implement the following Translator interface:

package com.habuma.translator;

public interface Translator {
  String translate(String text);
}

The first service we’ll work with is one that translates English into Pig Latin. Its implementation looks something like this:

package com.habuma.translator.piglatin;

import com.habuma.translator.Translator;

public class PigLatinTranslator implements Translator {
  private final String VOWELS = "AEIOUaeiou";

  public String translate(String text) {
    // actual implementation left out for brevity
  }
}
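The elided translate() logic can be sketched as follows. This is a hypothetical reconstruction, not the card’s actual code; the consonant-cluster rule is inferred from the sample translation used later in this card (“Did this work” becomes “id-DAY is-thAY ork-wAY”), and the handling of vowel-initial words is an assumption. The Translator interface from above is repeated so the sketch stands alone.

```java
// Hypothetical sketch of the elided translate() logic.
interface Translator {
    String translate(String text);
}

class PigLatinTranslator implements Translator {
    private static final String VOWELS = "AEIOUaeiou";

    public String translate(String text) {
        StringBuilder result = new StringBuilder();
        for (String word : text.split(" ")) {
            // Scan past the leading consonant cluster.
            int i = 0;
            while (i < word.length() && VOWELS.indexOf(word.charAt(i)) < 0) {
                i++;
            }
            if (i == 0 || i == word.length()) {
                // Vowel-initial (or vowel-less) words left unchanged (assumption).
                result.append(word);
            } else {
                // Move the cluster to the end: "this" -> "is-thAY"
                result.append(word.substring(i))
                      .append('-')
                      .append(word.substring(0, i))
                      .append("AY");
            }
            result.append(' ');
        }
        return result.toString().trim();
    }
}
```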

If we were working with basic OSGi (that is, without Spring-DM), we’d publish this service to the OSGi service registry using the OSGi API, perhaps in a bundle activator’s start() method:

public void start(BundleContext context) throws Exception {
  context.registerService(Translator.class.getName(),
      new PigLatinTranslator(), null);
}

Although the native OSGi approach will work fine, it requires us to work programmatically with the OSGi API. Instead, we’ll publish services declaratively using Spring-DM. The first step: Create a Spring context definition file that declares the PigLatinTranslator as a Spring bean:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans">
  <bean id="pigLatinTranslator"
    class="com.habuma.translator.piglatin.PigLatinTranslator" />
</beans>

Hot Tip

Overriding the context configuration location
By default, the Spring-DM extender looks for all XML files located in a bundle’s META-INF/spring folder and assumes that they’re all Spring context definition files that are to be used to create a Spring application context for the bundle. However, if you’d like to put your context definition files elsewhere in the bundle, use the Spring-Context: header in the META-INF/MANIFEST.MF file.
For example, if you’d rather place your Spring configuration files in a directory called “spring-config” at the root of the bundle, add the following entry to your bundle’s manifest:

Spring-Context: spring-config/*.xml

This Spring context file can be named anything, but it should be placed in the Pig Latin translator bundle’s META-INF/spring directory. When the bundle is started in an OSGi framework, the Spring-DM extender will look for Spring context configuration files in that directory and use them to create a Spring application context for the bundle.

Publishing a simple OSGi service

By itself the Spring context we’ve created only creates a bean in the Spring application context. It’s not yet an OSGi service. To publish it as an OSGi service, we’ll create another Spring context definition file that uses the Spring-DM namespace:

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns:beans="http://www.springframework.org/schema/beans"
    xmlns="http://www.springframework.org/schema/osgi">
  <service ref="pigLatinTranslator"
    interface="com.habuma.translator.Translator" />
</beans:beans>

Hot Tip

Don’t mix your Spring and OSGi contexts
Although you can certainly define all of your bundle’s Spring beans and OSGi services in a single Spring context definition file, it’s best to keep them in separate files (all in META-INF/spring). By keeping the OSGi-specific declarations out of the normal Spring context definition, you’ll be able to use the OSGi-free context to do non-OSGi integration tests of your beans.

This new Spring context file uses Spring-DM’s <service> element to publish the bean whose ID is “pigLatinTranslator” in the OSGi service registry. The ref attribute refers to the Spring bean in the other context definition file. The interface attribute identifies the interface under which the service will be available in the OSGi service registry.

Publishing a service under multiple interfaces

Let’s suppose that the PigLatinTranslator class were to implement another interface, perhaps one called TextProcessor. And let’s say that we want to publish the service under both the Translator interface and the TextProcessor interface. In that case, you can use the <interfaces> element to identify the interfaces for the service:

<service ref="pigLatinTranslator">
  <interfaces>
    <beans:value>com.habuma.translator.Translator</beans:value>
    <beans:value>com.habuma.translator.TextProcessor</beans:value>
  </interfaces>
</service>

Auto-selecting service interfaces

Instead of explicitly specifying the interfaces for a service, you can also let Spring-DM figure out which interfaces to use by specifying the auto-export attribute:

<service ref="pigLatinTranslator"
    auto-export="interfaces" />

Setting auto-export to “interfaces” tells Spring-DM to publish the service under all interfaces that the implementation class implements. You can also set auto-export to “all-classes” to publish the service under all interfaces and classes of the service, or to “class-hierarchy” to publish it under the implementation class and its superclasses.
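As a rough illustration of what auto-export="interfaces" amounts to, the set of published interface names corresponds to the interfaces the implementation class declares, which is discoverable via reflection. This is only a sketch of the idea, not Spring-DM’s internals, and the TextProcessor interface is the hypothetical one from the previous section.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Stand-ins for the card's interfaces and service class.
interface Translator { String translate(String text); }
interface TextProcessor { }

class PigLatinTranslator implements Translator, TextProcessor {
    public String translate(String text) { return text; } // stub
}

class AutoExportSketch {
    // The names a service would be registered under with auto-export="interfaces".
    static List<String> exportedInterfaces(Class<?> impl) {
        return Arrays.stream(impl.getInterfaces())
                     .map(Class::getSimpleName)
                     .collect(Collectors.toList());
    }
}
```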

Publishing a service with properties

It’s also possible to publish a service with properties to qualify that service. These properties can later be used to help refine the selection of services available to a consumer. For example, let’s say that we want to qualify Translator services by the language that they translate to:

<service ref="pigLatinTranslator"
    interface="com.habuma.translator.Translator">
  <service-properties>
    <beans:entry key="translator.language" value="Pig Latin" />
  </service-properties>
</service>

The <service-properties> element can contain one or more <entry> elements from the “beans” namespace. In this case, we’ve added a property named “translator.language” with a value of “Pig Latin”. Later, we’ll use this property to help select this particular service from among a selection of services that all implement Translator.

Consuming Services

Now that we’ve seen how to publish a service in the OSGi service registry, let’s look at how we can use Spring-DM to consume that service. To get started, we’ll create a simple client class:

package com.habuma.translator.client;

import java.util.List;

import com.habuma.translator.Translator;

public class TranslatorClient {
  private static String TEXT = "Be very very quiet. I’m hunting";

  private Translator translator;

  public void go() {
    System.out.println("TRANSLATED: " + translator.translate(TEXT));
  }

  public void setTranslator(Translator translator) {
    this.translator = translator;
  }
}

TranslatorClient is a simple POJO that is injected with a Translator and uses that Translator in its go() method to translate some text. We’ll declare it as a Spring bean like this:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans">
  <bean class="com.habuma.translator.client.TranslatorClient"
      init-method="go">
    <property name="translator" ref="translator" />
  </bean>
</beans>

As with the service’s Spring context declaration, this Spring context definition file can be named anything, but it should be placed in the client bundle’s META-INF/spring folder so that the Spring-DM extender will find it.

The bean is declared with the init-method attribute set to call the go() method when the bean is created. And we use the <property> element to inject the bean’s translator property with a reference to a bean whose ID is “translator”.

The big question here is: Where does the “translator” bean come from?

Simple service consumption

Spring-DM’s <reference> element mirrors the <service> element. Rather than publishing a service, however, <reference> retrieves a service from the OSGi service registry. The simplest way to consume a Translator service is as follows:

<reference id="translator"
    interface="com.habuma.translator.Translator" />

When the Spring-DM extender creates a Spring context for the client bundle, it will create a bean with an ID of “translator” that is a proxy to the service it finds in the service registry. With that id attribute and interface, it is quite suitable for wiring into the client bean’s translator property.

Setting a service timeout

In a dynamic environment like OSGi, services can come and go. When the client bundle starts up, there may not be a Translator service available for consumption. If it’s not available, then Spring-DM will wait up to 5 minutes for the service to become available before giving up and throwing an exception.

But it’s possible to override the default timeout using the <reference> element’s timeout attribute. For example, to set the timeout to 1 minute instead of 5 minutes:

<reference id="translator"
    interface="com.habuma.translator.Translator"
    timeout="60000" />

Notice that the timeout attribute is specified in milliseconds, so 60000 indicates 60 seconds or 1 minute.

Optional service references

Another way to deal with the dynamic nature of OSGi services is to specify that a service reference is optional. By default, the cardinality of a reference to a service is “1..1”, meaning that the service must be found within the timeout period or else an exception will be thrown. But you can specify an optional reference by setting the cardinality to “0..1”:

<reference id="translator"
    interface="com.habuma.translator.Translator"
    cardinality="0..1" />

Filtering services

Imagine that we have two or more Translator services published in the OSGi service registry. Let’s say that in addition to the Pig Latin translator there’s also another Translator service that translates text into Elmer Fudd speak. How can we ensure that our client gets the Pig Latin service when other implementations may be available?

Earlier, we saw how to set a property on a service when it’s published. Now we’ll use that property to filter the selection of services found on the consumer side:

<reference id="translator"
    interface="com.habuma.translator.Translator"
    filter="(translator.language=Pig Latin)" />

The filter attribute lets us specify properties that will help refine the selection of matching services. In this case, we’re only interested in a service that has its “translator.language” property set to “Pig Latin”.
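To make the filter semantics concrete, here is a tiny, hypothetical sketch of how an equality filter like "(translator.language=Pig Latin)" is matched against a service’s properties. The real matching is done by the OSGi framework using the RFC 1960 LDAP filter syntax, which supports far more (conjunctions, wildcards, comparisons); this helper exists only for illustration.

```java
import java.util.Map;

// Hypothetical helper illustrating equality-filter matching only.
class SimpleFilter {
    static boolean matches(String filter, Map<String, String> props) {
        // Strip the surrounding parentheses: "(k=v)" -> "k=v"
        String body = filter.substring(1, filter.length() - 1);
        int eq = body.indexOf('=');
        String key = body.substring(0, eq);
        String value = body.substring(eq + 1);
        return value.equals(props.get(key));
    }
}
```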

Consuming multiple services

But why choose? What if we wanted to consume all matching services? Instead of pin-pointing a specific service, we can use Spring-DM’s <list> element to consume a collection of matching services:

<list id="translators"
    interface="com.habuma.translator.Translator" />

The Spring-DM extender will create a list of matching services that can be injected into a client bean collection property such as this one:

private List<Translator> translators;

public void setTranslators(List<Translator> translators) {
  this.translators = translators;
}

Alternatively, you can use Spring-DM’s <set> element to create a set of matching services:

<set id="translators"
    interface="com.habuma.translator.Translator" />

The <list> and <set> elements share many of the same attributes with <reference>. For example, to consume a set of all Translator services filtered by a specific property:

<set id="translators"
    interface="com.habuma.translator.Translator"
    filter="(translator.language=Elmer Fudd)" />

Testing Bundles

Hopefully, you’re in the habit of writing unit tests for your code. If so, that practice should extend to the code that is contained within your OSGi bundles. Because Spring-DM encourages POJO-based OSGi development, you can continue to write unit-tests for the classes that define and consume OSGi services just like you would for any other non-OSGi code.

But it’s also important to write tests that exercise your OSGi services as they’ll be used when deployed in an OSGi container. To accommodate in-OSGi integration testing of bundles, Spring-DM provides AbstractConfigurableBundleCreatorTests, a JUnit 3 base test class from which you can write your bundle tests.

What’s fascinating is how tests based on AbstractConfigurableBundleCreatorTests work. When the test is run, it starts up an OSGi framework implementation (Equinox by default) and installs one or more bundles into the framework. Finally, it wraps itself in an on-the-fly bundle and installs itself into the OSGi framework so that it can test bundles as an insider.

Writing a basic OSGi test

To illustrate, let’s write a simple test that loads our Translator interface bundle and the Pig Latin implementation bundle:

package com.habuma.translator.test;
import org.osgi.framework.ServiceReference;
import org.springframework.osgi.test.AbstractConfigurableBundleCreatorTests;

import com.habuma.translator.Translator;
public class PigLatinTranslatorBundleTest
      extends AbstractConfigurableBundleCreatorTests {

  protected String[] getTestBundlesNames() {
    return new String[] {
      "com.habuma.translator, interface, 1.0.0",
      "com.habuma.translator, pig-latin, 1.0.0"
    };
  }

  public void testOsgiPlatformStarts() {
    assertNotNull(bundleContext);
  }
}
The getTestBundlesNames() method returns an array of Strings where each entry represents a bundle that should be installed into the OSGi framework for the test. The format of each entry is a comma-separated set of values that identify the bundle by its Maven group ID, artifact ID, and version number.
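The entry format can be illustrated with a quick split. This is only a sketch of the format described above; Spring-DM performs its own parsing and artifact resolution internally, and the BundleEntry class is a hypothetical name.

```java
// Splitting one getTestBundlesNames() entry into its Maven coordinates.
class BundleEntry {
    final String groupId, artifactId, version;

    BundleEntry(String entry) {
        // "groupId, artifactId, version" with optional whitespace around commas
        String[] parts = entry.split("\\s*,\\s*");
        groupId = parts[0];
        artifactId = parts[1];
        version = parts[2];
    }
}
```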

So far, our test has a single test method, testOsgiPlatformStarts(). All this method does is test that the OSGi framework has started by asserting that bundleContext (inherited from AbstractConfigurableBundleCreatorTests) is not null.

Testing OSGi service references

A more interesting test we could write is one that uses the bundle context to lookup a reference to the Translator service and assert that it meets our expectations:

public void testServiceReferenceExists() {
  ServiceReference serviceReference = bundleContext
      .getServiceReference("com.habuma.translator.Translator");
  assertNotNull(serviceReference);
  assertEquals("Pig Latin",
      serviceReference.getProperty("translator.language"));
}

Here we retrieve a ServiceReference from the bundle context and assert that it isn’t null. This means that some implementation of the Translator service has been published in the OSGi service registry. Then, it examines the properties of the service reference and asserts that the “translator.language” property has been set to “Pig Latin”, as we’d expect from how we published the service earlier.

Testing OSGi services

One more thing we could test is that the Translator service does what we’d expect it to do. Certainly, this kind of test usually belongs in a unit test. But it’s still good to throw a smoke test its way to make sure that we’re getting the service we’re expecting.

We could use the ServiceReference to look up the service. But we can avoid any additional work with the OSGi API by having the Translator service wired directly into our test class. First, let’s add a Translator property and setter method to our test class:

private Translator translator;

public void setTranslator(Translator translator) {
  this.translator = translator;
}

When the test is run, Spring will attempt to autowire the property with a bean from its own Spring application context. But we haven’t defined a Spring application context for the test yet. Let’s do that now:

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns:beans="http://www.springframework.org/schema/beans"
    xmlns="http://www.springframework.org/schema/osgi">
  <reference id="translator"
    interface="com.habuma.translator.Translator" />
</beans:beans>

You’ll recognize that this Spring context definition looks a lot like the one we created for the service consumer. In fact, our test class will ultimately be a consumer of the Translator service. We just have one more thing to do before we can test the service—we’ll need to override the getConfigLocations() method to tell the test where it can find the context definition file:

protected String[] getConfigLocations() {
  return new String[] { "bundle-test-context.xml" };
}

Now we can write our test method:

public void testTranslatorService() {
  assertNotNull(translator);
  assertEquals("id-DAY is-thAY ork-wAY",
      translator.translate("Did this work"));
}

This method assumes that by the time it is invoked, the translator property has been set. It first asserts that it is not null and then throws a simple test String at it to test that the service does what we expect.

Changing the tested OSGi framework

By default, Spring-DM tests are run within Equinox. But you can change them to run within another OSGi framework implementation such as Apache Felix or Knopflerfish by overriding the getPlatformName() method. For example, to run the test within Apache Felix:

protected String getPlatformName() {
  return Platforms.FELIX;
}

Or for Knopflerfish:

protected String getPlatformName() {
  return Platforms.KNOPFLERFISH;
}

Providing a Custom Manifest

When a Spring-DM test gets wrapped up in an on-the-fly bundle, a manifest is automatically generated for it. But you can provide a custom manifest for the on-the-fly bundle by overriding the getManifestLocation() method. For example:

protected String getManifestLocation() {
  return "classpath:com.habuma.translator.Translator.MF";
}

Be aware, however, that if you provide a custom manifest, you must include a few things in that manifest to make Spring-DM testing work. First, you’ll need to specify a bundle activator:

Bundle-Activator: org.springframework.osgi.test.JUnitTestActivator

And you’ll need to import JUnit and Spring-DM packages:

Import-Package: junit.framework,


Example Source Code:


Spring-DM Homepage:


OSGi Alliance:


Modular Java on Twitter:


Craig’s Modular Java Blog:


Craig’s Spring Blog:



About The Author

Photo of Craig Walls

Craig Walls

is a Texas-based software developer with more than 15 years of experience working in the telecommunications, financial, retail, education, and software industries. He’s a zealous promoter of the Spring Framework, speaking frequently at local user groups and conferences and writing about Spring and OSGi on his blog. When he’s not slinging code, Craig spends as much time as he can with his wife, two daughters, six birds, and two dogs.

Craig’s Publications:

  • Modular Java: Creating Flexible Applications with OSGi and Spring, 2009
  • Spring in Action, 2nd Edition, 2007
  • XDoclet in Action, 2003

Craig’s Blog: http://www.springinaction.com


Recommended Book


Modular Java is filled with tips and tricks that will make you a more proficient OSGi and Spring-DM developer. Equipped with the know-how gained from this book, you’ll be able to develop applications that are more robust and agile.


Getting Started with JavaFX

By Stephen Chin

22,615 Downloads · Refcard 56 of 204 (see them all)


The Essential JavaFX Cheat Sheet

JavaFX makes it easier to build better RIAs with graphics, animation, and media. Built on Java, this platform uses existing Java libraries and is capable of running in your browser, on your desktop, and on your phone. This DZone Refcard is ideal for the beginner, and will help you get started programming with JavaFX Script. It will also serve as a convenient reference once you have mastered the language. The instructions in this Refcard assume that you are using an IDE, but it is also possible to do everything from the command line as well.

Getting Started with JavaFX

By Stephen Chin

About JavaFX

JavaFX is an exciting new platform for building Rich Internet Applications with graphics, animation, and media. It is built on Java technology, so it is interoperable with existing Java libraries, and is designed to be portable across different embedded devices including mobile phones and set-top boxes. This Refcard will help you get started programming with JavaFX Script and also serve as a convenient reference once you have mastered the language.

To get started, you will have to download the latest JavaFX SDK from the JavaFX website: http://javafx.com/.

The instructions in the following tutorial assume that you are using an IDE, such as NetBeans. However, it is possible to do everything from the command line as well.

JFXPoetry, a Simple Example

To illustrate how easy it is to build an application that melds graphics, text, animation, and media, we will start with a simple tutorial. The goal will be to write an application that:

  • Loads and displays an image from the internet
  • Displays and animates a verse of poetry
  • Declaratively mixes in graphic effects
  • Plays media asynchronously

For the JFXPoetry theme, we will use “Pippa’s Song,” a well-known excerpt from Robert Browning’s Pippa Passes.

Loading an Image on the Stage

Stage and Scene are the building blocks of almost every JavaFX program. A Stage can be represented as a Frame for desktop applications, a rectangle for applets, or the entire screen for mobile devices. The visual content of a Stage is called a Scene, which contains a sequence of content Nodes that will be displayed in stacked order. The following program creates a basic Stage and Scene which is used to display an image:

var scene:Scene;
Stage {
  title: "Pippa's Song by Robert Browning"
  scene: scene = Scene {
    content: [
      ImageView {
        image: bind Image {
          height: scene.height
          preserveRatio: true
          url: "http://farm1.static.flickr.com/39/

Notice that the JavaFX syntax makes it simple to express nested UI structures. The curly braces “{}” are used for object instantiation, and allow inline initialization of variables, where the value follows the colon “:”. This is used to instantiate an ImageView with an Image inside that loads its content from the given URL. To ensure the image resizes with the window, we set preserveRatio to true and bind the Image’s height to the scene’s height. Binding is a very powerful facility in JavaFX that makes it easy to update values without heavyweight event handlers. Compiling and running this application will display a picture of a misty morning in Burns Lake, BC, Canada taken by Duane Conlon1, 2, as shown in Figure 1.


Figure 1: A JavaFX Stage containing an image loaded from the network

Displaying Text With Effects

Displaying text in JavaFX is as simple as instantiating a Text Node and setting the content to a String. There are many variables available on Text, but for this example we will set the font, fill color, and also add a Drop Shadow effect to make the text stand out on the background.

1 Creative Commons Attribution 2.0 License: http://creativecommons.org/licenses/by/2.0/

2 Duane Conlon’s Photostream: http://www.flickr.com/photos/duaneconlon/

var text:Text;
Stage {
  ImageView {
  text = Text {
    effect: DropShadow {}
    font: bind Font.font("Serif", FontWeight.BOLD,
                         scene.height / 12.5)
    fill: Color.GOLDENROD
    x: 10
    y: bind scene.height / 6
    content: "The year's at the spring,\n"
             "And day's at the morn;\n"
             "Morning's at seven;\n"
             "The hill-side's dew-pearled;\n"
             "The lark's on the wing;\n"
             "The snail's on the thorn;\n"
             "God's in His heaven--\n"
             "All's right with the world!"

Notice that rather than specifying the whole poem text on one line we have split it across several lines, which will automatically get concatenated. Also, we have used the bind operator to set both the font size and y offset, which will update their values automatically when the scene height changes. Figure 2 shows the updated example with text overlaid on the Image.


Figure 2: Updated example with a Text overlay

JavaFX offers a large set of graphics effects that you can easily apply to Nodes to create rich visual effects. Table 1 lists all the available effects you can choose from.

Table 1. Graphics effects available in JavaFX

Effect Description
Blend Blends two inputs together using a pre-defined BlendMode
Bloom Makes brighter portions of the Node appear to glow
BoxBlur Fast blur with a configurable quality threshold
ColorAdjust Per-pixel adjustments of hue, saturation, brightness, and contrast
DisplacementMap Shifts each pixel by the amount specified in a DisplacementMap
DropShadow Displays an offset shadow underneath the node
Flood Fills a rectangular region with the given Color
GaussianBlur Blurs the Node with a configurable radius
Glow Makes the Node appear to glow with a given intensity level
Identity Passes an image through to a chained effect
InnerShadow Draws a shadow on the inner edges of the Node
InvertMask Returns a mask that is the inverse of the input
Lighting Simulates a light source to give Nodes a 3D effect
MotionBlur Blurs the image at a given angle to create a motion effect
PerspectiveTransform Maps a Node to an arbitrary quadrilateral for a perspective effect
Reflection Displays an inverted view of the Node to create a reflected effect
SepiaTone Creates a sepia tone effect to mimic aged photographs
Shadow Similar to a DropShadow, but without the overlaid image

Animated Transitions

Animations in JavaFX can be accomplished either by setting up a Timeline from scratch, or using one of the pre-fabricated Transitions. To animate the Text rising onto the screen, we will use a TranslateTransition, which adjusts the position of a Node in a straight line for the specified duration:

var animation = TranslateTransition {
  duration: 24s
  node: text
  fromY: scene.height
  toY: 0
  interpolator: Interpolator.EASEOUT
}

By setting an interpolator of EASEOUT, the text will start at full speed and gradually decelerate as it approaches its destination. Animations and Transitions can also be configured to repeat, run at a specific rate, or reverse. To run the transition, all you need to do is call the play() function, which will animate the text as shown in Figure 3.


Figure 3: Animated Text Scrolling Into View

Table 2 lists all of the available transitions that are part of the JavaFX API. To get a feel for how the different transitions work, try adding a FadeTransition that will gradually fade the background in over a 5 second duration.

Table 2. Transitions Supported by JavaFX

Transition Description
FadeTransition Changes the opacity of a node over time
ParallelTransition Plays a sequence of transitions in parallel
PathTransition Animates nodes along a Shape or Path
PauseTransition Executes an action after the specified delay
RotateTransition Changes the rotation of a node over time
ScaleTransition Changes the size of a node over time
SequentialTransition Plays a sequence of transitions in series
TranslateTransition Changes the position of a node over time

Interacting With Controls

The JavaFX 1.2 release features a new library of skinnable controls written in pure JavaFX. Table 3 lists some of the new controls and what they can be used for.

Table 3. Controls Available in JavaFX 1.2

Control Description
Button Button that can contain graphics and text
CheckBox Selectable box that can be checked, unchecked, or undefined
Hyperlink HTML-like clickable text link
Label Text that can be associated with another control
ListView Scrollable list that can contain text or Nodes
ProgressBar Progress bar that can show percentage complete or be indeterminate
RadioButton Selectable button that can belong to a group
ScrollBar Scroll control typically used for paging
Slider Draggable selector of a number or percent
TextBox Text input control

The simplest control to use is a Button, which can easily be scripted to play the animation sequence again from the beginning.

var button:Button;
Stage {
  text = Text {
    button = Button {
      translateX: bind (scene.width - button.width) / 2
      translateY: bind (scene.height - button.height) / 2
      text: "Play Again"
      visible: bind not animation.running
      action: function() {

The bind operator is used both to hide the button while the animation is playing and to center the button in the window. Initially the button is invisible, but we added a new SequentialTransition that plays a FadeTransition to show the button after the translation is complete. Clicking the button shown in Figure 4 will hide it and play the animation from the beginning.


Figure 4: Button Control to Play the Animation Again

Panning With Layouts

JavaFX 1.2 comes with several new layouts that make it easy to design complex UIs. One of these is the ClipView, which we will use to support dragging of the poem text. ClipView takes a single Node as the input and allows the content to be panned using the mouse:

content: [
  ClipView {
    width: bind scene.width
    height: bind scene.height
    override var maxClipX = 0
    node: text = Text {

To ensure the ClipView takes the full window, its width and height are bound to the scene. Also, we have overridden the maxClipX variable with a value of 0 to restrict panning to the vertical direction. The text can now be dragged using the mouse as shown in Figure 5.


Figure 5: Panning the Text using a ClipView

Table 4 lists all of the layouts that JavaFX comes with. HBox and VBox have been around since the 1.0 release, but all the other layouts are new in JavaFX 1.2.

Table 4. Layouts Available in JavaFX 1.2

Layout Description
HBox Lays out its contents in a single, horizontal row
VBox Lays out its contents in a single, vertical column
ClipView Clips its content Node to the bounds, optionally allowing panning
Flow Lays out its contents either vertically or horizontally with wrapping
Stack Layers its contents on top of each other from back to front
Tile Arranges its contents in a grid of evenly sized tiles

Finishing with Media

JavaFX has built-in media classes that make it very simple to play audio or video, either from local files or streamed over the network. To complete the example, we will add a public domain clip of Indigo Bunting birds chirping in the background. Adding the audio is as simple as appending a MediaPlayer with autoPlay set to true that contains a Media object pointing to the URL.

MediaPlayer {
  autoPlay: true
  media: Media {
    source: "http://video.fws.gov/sounds/35indigobunting.mp3"
  }
}

In this example we are using an mp3 file, which is supported across platforms by JavaFX. Table 5 lists some of the common media formats supported by JavaFX, including all the cross-platform formats.

Table 5. Common Media Formats Supported by JavaFX

Type Platform Format File Extension
Audio Cross-platform MPEG-1 Audio Layer 3 mp3
Audio Cross-platform Waveform Audio Format wav
Audio Macintosh Advanced Audio Coding m4a, aac
Audio Macintosh Audio Interchange File Format aif, aiff
Video Platform Format File Extension
Video Cross-platform Flash Video flv, f4v
Video Cross-platform JavaFX Multimedia fxm
Video Windows Windows Media Video wmv, avi
Video Macintosh QuickTime mov
Video Macintosh MPEG-4 mp4

To try the completed example, complete with animation and audio, you can click on the following URL:


The full source code for this application is available on the JFXtras Samples website: http://jfxtras.org/portal/samples

Running on Mobile

To run the sample in the Mobile Emulator, all you have to do is pass the MOBILE profile to the javafxpackager program or switch the run mode in your IDE project properties. JavaFX Mobile applications are restricted to the Common Profile, which does not include all the features of desktop applications. The full list of restrictions is shown in Table 6.

Table 6. Functionality Not Available in the Common Profile

Class(es) Affected Variables and Methods
javafx.ext.swing.* All
javafx.reflect.* All
javafx.scene.Node effect, style
javafx.scene.Scene stylesheets
javafx.scene.effect.* All
javafx.scene.effect.light.* All
javafx.scene.shape.ShapeIntersect All
javafx.scene.shape.ShapeSubtract All
javafx.scene.text.Font autoKern, embolden, letterSpacing, ligatures, oblique, position
javafx.stage.AppletStageExtension All
javafx.util.FXEvaluator All
javafx.util.StringLocalizer All

Over 80% of the JavaFX API is represented in the Common Profile, so it is not hard to build portable applications. In this example we used a DropShadow effect on the text; once it is removed, the example runs in the Mobile Emulator as shown in Figure 6.


Figure 6: JFXPoetry application running in the Mobile Emulator

Running as a Desktop Widget

You can deploy your application as a desktop widget using the WidgetFX open-source framework. Any JavaFX application can be converted to a widget by including the WidgetFX-API.jar and making some small updates to the code.

The following code fragment highlights the code changes required:

var widget:Widget = Widget {
  resizable: false
  width: 500
  height: 375
  content: [
      height: widget.height
    font: bind Font.font("Serif", FontWeight.BOLD,
              widget.height / 12.5)
    y: bind widget.height / 6

    fromY: widget.height

The updates to the code include the following three changes:

  • Wrap your application in a Widget class. The Widget class extends javafx.scene.layout.Panel, which makes it easy to extend.
  • Set the initial widget width/height and modify references from scene to widget.
  • Return the widget at the end of the script.

To run the widget, simply change your project properties to run the application using Web Start Execution. This will automatically create a JNLP file compatible with WidgetFX and launch the Widget Runner, which allows you to test your widget as shown in Figure 7.


Figure 7: JFXPoetry running as a desktop widget

For more information about WidgetFX, including SDK download, documentation, and additional tutorials, check out the project website: http://widgetfx.org/

JavaFX Reference

Language Reference

JavaFX supports all of the Java data types, plus a new Duration type that simplifies writing animated UIs.

Data Types:

DataType Java Equivalent Range Examples
Boolean boolean true or false true, false
Integer int -2147483648 to 2147483647 2009, 03731, 0x07d9
Number float 1.40x10^-45 to 3.40x10^38 3.14, 3e8, 1.380E-23
String String N/A "java's", 'in"side"er'
Duration <None> -2^63 to 2^63-1 milliseconds 1h, 5m, 30s, 500ms
Character char 0 to 65535 0, 20, 32
Byte byte -128 to 127 -5, 0, 5
Short short -32768 to 32767 -300, 0, 521
Long long -2^63 to 2^63-1 2009, 03731, 0x07d9
Float float 1.40x10^-45 to 3.40x10^38 3.14, 3e8, 1.380E-23
Double double 4.94x10^-324 to 1.80x10^308 3.14, 3e8, 1.380E-123

JavaFX Characters cannot accept literals like ‘a’ or ‘0’, because these are treated as Strings. The primary way of getting Characters is by calling a Java API that returns a char primitive, although you can create a new Character by assigning a numeric constant.


The following table lists all the mathematical, conditional, and boolean operators along with their precedence (1 being the highest).

Operator Meaning Precedence Examples
++ Pre/post increment 1 ++i, i++
-- Pre/post decrement 1 --i, i--
not Boolean negation 2 not (cond)
* Multiply 3 2 * 5, 1h * 4
/ Divide 3 9 / 3, 1m / 3
mod Modulo 3 20 mod 3
+ Add 4 0 + 2, 1m + 20s
- Subtract (or negate) 4 (negate: 2) -2, 32 - 3, 1h - 5m
== Equal 5 value1 == value2, 4 == 4
!= Not equal 5 value1 != value2, 5 != 4
< Less than 5 value1 < value2, 4 < 5
<= Less than or equal 5 value1 <= value2, 5 <= 5
> Greater than 5 value1 > value2, 6 > 5
>= Greater than or equal 5 value1 >= value2, 6 >= 6
instanceof Is instance of class 6 node instanceof Text
as Typecast to class 6 node as Text
and Boolean and 7 cond1 and cond2
or Boolean or 8 cond1 or cond2
+= Add and assign 9 value += 5
-= Subtract and assign 9 value -= 3
*= Multiply and assign 9 value *= 2
/= Divide and assign 9 value /=4
= Assign 9 value = 7
  • Multiplication and division of two durations is allowed, but not meaningful
  • Underflows/Overflows will fail silently, producing inaccurate results
  • Divide by zero will throw a runtime exception


JavaFX sequences provide a powerful resizable and bindable list capability under a simple array-like syntax. All of the sequence operators (sizeof, reverse, indexof) have a relative precedence of 2.

Operation Syntax Examples

Declaration [a, b, c] var nums = [1, 2, 3, 4]; var letters = ["a", "b", "c"];

Range [y..z], [y..<z], [y..z step w]
[1..5] = [1, 2, 3, 4, 5]
[1..<5] = [1, 2, 3, 4]
[1..9 step 2] = [1, 3, 5, 7, 9]

Size sizeof seq sizeof nums; // = 4

Index indexof variable
for(x in seq) {
  indexof x;
}

Element seq[i] letters[2]; // = "c"

Slice seq[a..b], seq[a..<b]
nums[1..2]; // = [2, 3]
letters[0..<2]; // = ["a", "b"]

Predicate seq[x|boolean] nums[n|n mod 2 == 0]; // = [2, 4]

Reverse reverse seq reverse letters; // = ["c", "b", "a"]

Insert
insert x into seq
insert x before seq[i]
insert x after seq[i]

insert 5 into nums; // = [1, 2, 3, 4, 5]
insert "gamma" before letters[2]; // = ["a", "b", "gamma", "c"]
insert 2.3 after nums[1]; // = [1, 2, 2.3, 3, 4]

Delete
delete seq[i]
delete seq[x..y]
delete x from seq
delete seq

delete letters[1]; // = ["a", "c"]
delete nums[1..2]; // = [1, 4]
delete "c" from letters; // = ["a", "b"]
delete letters; // = []

  • The javafx.util.Sequences class provides additional functions for manipulating sequences, such as min, max, search, shuffle, and sort.
  • Nested sequences are automatically flattened, so [[1,2], [3,4]] is equivalent to [1,2,3,4].
  • Sequences require commas after all elements except close braces; however, it is recommended to always use commas.
  • You can declare a sequence as a nativearray. This is an optimization so that arrays returned from a Java method don’t need to be converted to a sequence.

Access Modifiers:

The JavaFX access modifiers are based upon Java with the addition of extra variable-only modifiers.

Modifier Name Description
<Default> Script-only access Only accessible within the same script file
package Package access Only accessible within the same package
protected Protected access Only accessible within the same package or by subclasses
public Public access Can be accessed anywhere
public-read Read access modifier Var/def modifier to allow a variable to be read anywhere
public-init Init access modifier Var/def modifier to allow a variable to be initialized or read anywhere
  • Unlike Java, the default access level in JavaFX is script-only rather than package.
  • The var/def access modifiers can be stacked with other modifiers, such as public-read protected.


JavaFX supports many of the same expressions as Java, but adds in powerful inline functions and for loop extensions.

Expression Syntax Example

if
if (cond) expr1 else expr2
if (cond) then expr1 else expr2

if (grass.green) {
} else {
var water = if (grass.color ==
BLACK) aLot else aLittle;

for
for (x in seq) expr
for (x in seq where cond) expr
for (x in seq, y in x) expr

var loans = for (b in borrowers
where b.pulse > 0)
for (loan in loans) {

while while (bool) expr

while (swimming) {

try
try {expr1} catch(exception)
{expr2} finally {expr3}

try {
} catch(e:FinancialCrisis) {
} finally {

function function(params):returnType{}

function(e:MouseEvent):Void {

Just like in Java programs:

  • continue can be used to skip a for or while loop iteration
  • break can be used to exit a for or while loop
  • return can be used to exit from a function even if inside a loop

Magic Variables:

JavaFX provides some built-in variables that can be accessed from any code running inside a script.

Name Description
__DIR__ Directory the current classfile is contained in
__FILE__ Full path to the current classfile
__PROFILE__ The current profile, which can be ‘desktop’ or ‘mobile’

API Reference

In the short span of a few pages you have already seen quite a bit of the JavaFX platform. Some other functionality that JavaFX offers includes:

Package Description
javafx.animation Animation and Interpolation
javafx.async Asynchronous Tasks and Futures
javafx.data.feed RSS/Atom Feed support
javafx.data.pull XML and JSON Pull Parsers
javafx.ext.swing Additional Swing-based Widgets
javafx.fxd Production Suite (FXD)
javafx.io Local Data Storage
javafx.reflect JavaFX Reflection Classes
javafx.chart Charting and Graphing
javafx.scene.media Media (Audio and Video) Playback
javafx.scene.shape Vector Shapes

An easy way to view and navigate the full JavaFX API is the JFXplorer application. The following URL will launch it as a Web Start application that you can use to start exploring the JavaFX API today:


Additional Resources

About The Author


Open-source developer and Agile manager Stephen Chin is the founder of numerous open-source projects, including WidgetFX and JFXtras, and a Senior Manager at Inovis in Emeryville, CA. He has been working with Java desktop and enterprise technologies for over a decade, and has a passion for improving development technologies and processes. Stephen’s interest in Java technologies has led him to start a Java and JavaFX focused blog and coauthor the upcoming Pro JavaFX Platform book together with Jim Weaver, Weiqi Gao, and Dean Iverson.

Stephen’s Blog:


Jim Weaver’s JavaFX Learning Blog:


Recommended Book


Learn from bestselling JavaFX author Jim Weaver and expert JavaFX developers Weiqi Gao, Stephen Chin, and Dean Iverson to discover the highly anticipated JavaFX technology and platform that enables developers and designers to create RIAs that can run across diverse devices. Covering the JavaFX Script language, JavaFX Mobile, and development tools, Pro JavaFX™ Platform: Script, Desktop and Mobile RIA with Java™ Technology provides code examples that cover virtually every language and API feature.


Apache Maven 2

By Matthew McCullough

51,696 Downloads · Refcard 55 of 204


The Essential Maven 2 Cheat Sheet

Maven is a comprehensive project information tool whose most common application is building Java code. It is receiving renewed recognition in the emerging development space for its convention over configuration approach to builds. This DZone Refcard showcases how Maven offers unparalleled software lifecycle management, and gives Java developers a wide range of execution commands, tips for debugging Mavenized builds, and a clear introduction to the "Maven vocabulary". This Refcard also covers the MVN command, dependencies, plugins, profiles and more. Download it today!

Apache Maven 2

By Matthew McCullough


Maven is a comprehensive project information tool, whose most common application is building Java code. Maven is often considered an alternative to Ant, but as you’ll see in this Refcard, it offers unparalleled software lifecycle management, providing a cohesive suite of verification, compilation, testing, packaging, reporting, and deployment plugins.

Maven is receiving renewed recognition in the emerging development space for its convention over configuration approach to builds. This Refcard aims to give JVM platform developers a range of basic to advanced execution commands, tips for debugging Mavenized builds, and a clear introduction to the “Maven vocabulary”.

Interoperability and Extensibility

New Maven users are pleasantly surprised to find that Maven offers easy-to-write custom build-supplementing plugins, reuses any desired aspect of Ant, and can compile native C, C++, and .NET code in addition to its strong support for Java and JVM languages and platforms, such as Scala, JRuby, Groovy and Grails.

Hot Tip

All things Maven can be found at http://maven.apache.org


Maven supplies a Unix shell script and an MS-DOS batch file, named mvn and mvn.bat respectively. This command is used to start all Maven builds. Optional parameters are supplied in a space-delimited fashion. An example of cleaning and packaging a project, then running it in a Jetty servlet container, yet skipping the unit tests, reads as follows:

mvn clean package jetty:run -Dmaven.test.skip


The world of Maven revolves around metadata files named pom.xml. A file of this name exists at the root of every Maven project and defines the plugins, paths and settings that supplement the Maven defaults for your project.

Basic pom.xml Syntax

The smallest valid pom.xml, which inherits the default artifact type of “jar”, reads as follows:
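The snippet itself did not survive conversion; a minimal sketch of such a pom.xml (the groupId, artifactId, and version values here are placeholders) would be:

```xml
<project>
  <!-- modelVersion is mandatory; 4.0.0 is the POM schema version used by Maven 2 -->
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0</version>
  <!-- packaging defaults to "jar" when omitted -->
</project>
```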


Super POM

The Super POM is a virtual pom.xml file that ships inside the core Maven JARs, and provides numerous default settings. All projects automatically inherit from the Super POM, much like the Object super class in Java. Its contents can be viewed in one of two ways:

View Super POM via SVN

Open the following SVN viewing URL in your web browser:


View Super POM via effective-pom

Run the following command in a directory that contains the most minimal Maven project pom.xml, listed above.

mvn help:effective-pom

Multi-module Projects

Maven showcases exceptional support for componentization via its concept of multi-module builds. Place sub-projects in sub-folders beneath your top-level project and reference each with a module tag. To build all sub-projects, just execute your normal mvn command and goals from a prompt in the top-most directory.

  <!-- ... -->
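The pom.xml fragment was elided here; a sketch of a parent POM declaring two hypothetical sub-modules (the names core and webapp are placeholders for sub-folder names) might read:

```xml
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>parent</artifactId>
  <version>1.0</version>
  <!-- a multi-module (aggregator) project must use pom packaging -->
  <packaging>pom</packaging>
  <modules>
    <!-- each module value is the sub-folder of a sub-project -->
    <module>core</module>
    <module>webapp</module>
  </modules>
</project>
```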

Artifact Vector

Each Maven project produces an element, such as a JAR, WAR or EAR, uniquely identified by a composite of fields known as groupId, artifactId, packaging, version and scope. This vector of fields uniquely distinguishes a Maven artifact from all others.

Many Maven reports and plugins print the details of a specific artifact in this colon-separated fashion: groupId:artifactId:packaging:version:scope


An example of this output for the core Spring JAR (with the version shown as a placeholder) would be: org.springframework:spring:jar:<version>:compile



Maven divides execution into four nested hierarchies. From most-encompassing to most-specific, they are: Lifecycle, Phase, Plugin, and Goal.

Lifecycles, Phases, Plugins and Goals

Maven defines the concept of language-independent project build flows that model the steps that all software goes through during a compilation and deployment process.


Lifecycles represent a well-recognized flow of steps (Phases) used in software assembly.

Each step in a lifecycle flow is called a phase. Zero or more plugin goals are bound to a phase.

A plugin is a logical grouping and distribution (often a single JAR) of related goals, such as JARing.

A goal, the most granular step in Maven, is a single executable task within a plugin. For example, discrete goals in the jar plugin include packaging the jar (jar:jar), signing the jar (jar:sign), and verifying the signature (jar:sign-verify).

Executing a Phase or Goal

At the command prompt, either a phase or a plugin goal can be requested. Multiple phases or goals can be specified and are separated by spaces.

If you ask Maven to run a specific plugin goal, then only that goal is run. This example runs two plugin goals: compilation of code, then JARing the result, skipping over any intermediate steps.

mvn compile:compile jar:jar

Conversely, if you ask Maven to execute a phase, all phases and bound plugin goals up to that point in the lifecycle are also executed. This example requests the deploy lifecycle phase, which will also execute the verification, compilation, testing and packaging phases.

mvn deploy

Online and Offline

During a build, Maven attempts to download any uncached referenced artifacts and proceeds to cache them in the ~/.m2/repository directory on Unix, or in the %USERPROFILE%/.m2/repository directory on Windows.

To prepare for compiling offline, you can instruct Maven to download all referenced artifacts from the Internet via the command:

mvn dependency:go-offline

If all required artifacts and plugins have been cached in your local repository, you can instruct Maven to run in offline mode with a simple flag:

mvn <phase or goal> -o

Built-in Maven Lifecycles

Maven ships with three lifecycles: clean, default, and site. Many of the phases within these three lifecycles are bound to a sensible plugin goal.

Hot Tip

The official lifecycle reference, which extensively lists all the default bindings, can be found at http://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html

The clean lifecycle is simplistic in nature. It deletes all generated and compiled artifacts in the output directory.

Clean Lifecycle
Lifecycle Phase Purpose
clean Remove all generated and compiled artifacts in preparation for a fresh build.
Default Lifecycle
Lifecycle Phase Purpose
validate Cross-check that all elements necessary for the build are correct and present.
initialize Set up and bootstrap the build process.
generate-sources Generate dynamic source code
process-sources Filter, sed and copy source code
generate-resources Generate dynamic resources
process-resources Filter, sed and copy resources files.
compile Compile the primary or mixed language source files.
process-classes Augment compiled classes, such as for code-coverage instrumentation.
generate-test-sources Generate dynamic unit test source code.
process-test-sources Filter, sed and copy unit test source code.
generate-test-resources Generate dynamic unit test resources.
process-test-resources Filter, sed and copy unit test resources.
test-compile Compile unit test source files
test Execute unit tests
prepare-package Manipulate generated artifacts immediately prior to packaging. (Maven 2.1 and above)
package Bundle the module or application into a distributable package (commonly, JAR, WAR, or EAR).
integration-test Execute tests that require connectivity to external resources or other components
verify Inspect and cross-check the distribution package (JAR, WAR, EAR) for correctness.
install Place the package in the user’s local Maven repository.
deploy Upload the package to a remote Maven repository

The site lifecycle generates a project information web site, and can deploy the artifacts to a specified web server or local path.

Site Lifecycle
Lifecycle Phase Purpose
pre-site Cross-check that all elements necessary for site generation are correct and present.
site Generate an HTML web site containing project information and reports.
site-deploy Upload the generated website to a web server

Default Goal

The default goal codifies the author’s intended usage of the build script. Only one goal or lifecycle can be set as the default. The most common default goal is install.
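The accompanying snippet was lost in conversion; as a sketch, the default goal is set in the build section of pom.xml like this:

```xml
<project>
  <!-- ... -->
  <build>
    <!-- the goal or phase run when mvn is invoked with no arguments -->
    <defaultGoal>install</defaultGoal>
  </build>
</project>
```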



Help for a Plugin

Lists all the possible goals for a given plugin and any associated documentation.

mvn help:describe -Dplugin=<pluginname>

Help for POMs

To view the composite pom that’s a result of all inherited poms:

mvn help:effective-pom

Help for Profiles

To view all profiles that are active from either manual or automatic activation:

mvn help:active-profiles


Declaring a Dependency

To express your project’s reliance on a particular artifact, you declare a dependency in the project’s pom.xml.

Hot Tip

You can use the search engine at repository.sonatype.org to find dependencies by name and get the XML necessary to paste into your pom.xml.

  <!-- ... -->
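The dependency snippet was elided here; as a sketch, a declaration for JUnit (the version shown is illustrative) takes this shape:

```xml
<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>3.8.1</version>
    <!-- test scope keeps JUnit out of the packaged artifact -->
    <scope>test</scope>
  </dependency>
</dependencies>
```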

Standard Scopes

Each dependency can specify a scope, which controls its visibility and inclusion in the final packaged artifact, such as a WAR or EAR. Scoping enables you to minimize the JARs that ship with your product.

Scope Description
compile Needed for compilation, included in packages.
test Needed for unit tests, not included in packages.
provided Needed for compilation, but provided at runtime by the runtime container.
system Needed for compilation, given as absolute path on disk, and not included in packages.
import An inline inclusion of a POM-type artifact facilitating dependency-declaring POM snippets.


Adding a Plugin

A plugin and its configuration are added via a small declaration, very similar to a dependency, in the <build> section of your pom.xml.

  <!-- ... -->
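As an illustrative sketch, here is how the compiler plugin (one of the standard Apache Maven plugins) might be declared and configured; the version and compiler levels shown are examples only:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>2.3.2</version>
      <!-- Plugin-specific settings go in a configuration block -->
      <configuration>
        <source>1.6</source>
        <target>1.6</target>
      </configuration>
    </plugin>
  </plugins>
</build>
```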

Common Plugins

Maven coined the term Mojo for its plugin classes: a play on POJO (“Plain Old Java Object”) that stands for “Maven plain Old Java Object.”

There are dozens of Maven plugins, but a handful constitute some of the most valuable, yet underused features:

surefire Runs unit tests.
checkstyle Checks the code’s styling.
clover Evaluates code coverage.
enforcer Verifies many types of environmental conditions as prerequisites.
assembly Creates ZIPs and other distribution packages of apps and their transitive dependency JARs.

Hot Tip

The full catalog of plugins can be found at: http://maven.apache.org/plugins/index.html


Users often mention that the most challenging task is identifying dependencies: why they are being included, where they come from, and whether there are collisions. Maven has a suite of goals to assist with this.

List a hierarchy of dependencies.

mvn dependency:tree

List dependencies in alphabetic form.

mvn dependency:resolve

List plugin dependencies in alphabetic form.

mvn dependency:resolve-plugins

Analyze dependencies and list any that are used but undeclared, or declared but unused.

mvn dependency:analyze


Repositories are the web sites that host collections of Maven plugins and dependencies.

Declaring a Repository


The Maven community strongly recommends using a repository manager such as Nexus to define all repositories. This results in cleaner pom.xml files and centrally cached and managed connections to external artifact sources. Nexus can be downloaded from http://nexus.sonatype.org/
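If you do declare a repository directly in a pom.xml rather than through a repository manager, the declaration is a small block; this sketch uses the Codehaus repository from the list of popular repositories (the id value is an arbitrary label you choose):

```xml
<repositories>
  <repository>
    <id>codehaus</id>
    <url>http://repository.codehaus.org/</url>
  </repository>
</repositories>
```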

Popular Repositories

Central http://repo1.maven.org/maven2/
Java.net https://maven-repository.dev.java.net/
Codehaus http://repository.codehaus.org/
JBoss http://repository.jboss.org/maven2

Hot Tip

A near-complete list of repositories can be found at http://www.mvnbrowser.com/repositories.html


A wide range of predefined or custom property variables can be used anywhere in your pom.xml files to keep string and path repetition to a minimum.

All properties in Maven begin with ${ and end with }. To list all available properties, run the following command.

mvn help:expressions

Predefined Properties (Partial List)

${env.PATH} Any OS environment variable, such as EDITOR or GROOVY_HOME. Specifically, the PATH environment variable.
${project.groupId} Any project node from the aggregated Maven pom.xml. Specifically, the Group ID of the project.
${project.artifactId} Name of the artifact.
${project.basedir} Path of the pom.xml.
${settings.localRepository} The path to the user’s local repository.
${java.home} Any Java System Property. Specifically, the Java System Property for the path to the Java home directory.
${java.vendor} The Java System Property declaring the JRE vendor’s name.
${my.somevar} A user-defined variable.

Project properties could previously be referenced with a ${pom.basedir} prefix or with no prefix at all (${basedir}). Maven now requires the project prefix: ${project.basedir}.

Define a Property

You can define a new custom property in your pom.xml like so:

      <properties>
        <my.somevar>My Value</my.somevar>
      </properties>


Exception Full Stack Traces

If a Maven plugin is reporting an error, run Maven with the -e flag to see the full detail of the exception’s stack trace.

mvn <yourgoal> -e

Output Debugging Info

Whenever reporting a Maven bug, or troubleshooting a problem, turn on all the debugging info by running Maven like so:

mvn <yourgoal> -X

Debug Maven Core/Plugins

Core Maven operations and plugins can be stepped through with any JPDA-compatible debugger, the most common option being Eclipse. When run in debug mode, Maven will wait for you to connect your debugger to socket port 8000 before continuing with its lifecycle.

mvnDebug <yourgoal>
Preparing to Execute Maven in Debug Mode
Listening for transport dt_socket at address: 8000

Debug a Unit Test

Your suite or an individual unit test can be debugged in much the same fashion by telling the Surefire test-execution plugin to wait for you to attach a debugger to port 5005.

mvn test -Dmaven.surefire.debug
Listening for transport dt_socket at address: 5005


Configuring SCM

Your project’s SCM connection can be quickly configured with just three XML tags, which together add significant capabilities to the scm, release, and reactor plugins.

The connection tag is the read-only view of your repository, developerConnection is the writable link, and url is the web-based view of the source.
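As an illustrative sketch for a Subversion-hosted project (all URLs here are hypothetical):

```xml
<scm>
  <!-- Read-only access for anyone building the project -->
  <connection>scm:svn:http://svn.example.com/myproject/trunk</connection>
  <!-- Writable access for committers -->
  <developerConnection>scm:svn:https://svn.example.com/myproject/trunk</developerConnection>
  <!-- Browsable web view of the source -->
  <url>http://svn.example.com/viewvc/myproject</url>
</scm>
```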


Hot Tip

Over 12 SCM systems are supported by Maven. The full list can be viewed at http://docs.codehaus.org/display/SCM/SCM+Matrix

Using the SCM Plugin

The core SCM plugin offers two highly useful goals.

The diff goal produces a standard Unix patch file, with the extension .diff, of the pending (uncommitted) changes on disk; it can be emailed or attached to a bug report.

mvn scm:diff

The update-subprojects goal invokes a recursive, SCM-provider-specific update (svn update, git pull) across all the submodules of a multi-module project.

mvn scm:update-subprojects


Profiles are a means to conditionally turn on portions of the Maven configuration, including plugins, paths, and other settings.

The most common uses of profiles are Windows/Unix platform-specific variations and build-time customization of JAR dependencies based on the use of a specific J2EE vendor such as WebLogic, WebSphere, or JBoss.

         [...settings, build, plugins etc...]
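A skeletal profile definition looks like the following sketch; the id shown is a hypothetical label for a vendor-specific profile:

```xml
<profiles>
  <profile>
    <id>jboss-deps</id>
    <!-- ...settings, build, plugins, etc. specific to this profile... -->
  </profile>
</profiles>
```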

Profile Definition Locations

Profiles can be defined in pom.xml, profiles.xml (parallel to the pom.xml), ~/.m2/settings.xml, or $M2_HOME/conf/settings.xml.

Hot Tip

The full Maven Profile reference, including details about when to use each of the profile definition files, can be found at http://maven.apache.org/guides/introduction/introduction-to-profiles.html


Profiles can be activated manually from the command line or through an activation rule (OS, file existence, Maven version, etc.). Profiles are primarily additive, so best practice is to leave most of them off by default and activate them based on specific conditions.

Manual Profile Activation

mvn <yourgoal> -P YourProfile

Automatic Profile Activation

     [...settings, build, etc...]
      <name>Windows XP</name>
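The name tag above comes from an OS-based activation rule. In full, such a rule might look like this sketch (the profile id is hypothetical):

```xml
<profiles>
  <profile>
    <id>windows-profile</id>
    <!-- Activated automatically when the build runs on Windows XP -->
    <activation>
      <os>
        <name>Windows XP</name>
        <family>Windows</family>
      </os>
    </activation>
    <!-- ...settings, build, etc. for Windows machines... -->
  </profile>
</profiles>
```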


Maven offers excellent automation for cutting a release of your project. In short, this is a plugin-guided ceremony for verifying that all tests pass, tagging your source code repository, and altering the POMs to reflect a product version increment.

The prepare goal runs the unit tests, continuing only if all pass; increments the value in the pom <version> tag to a release version; tags the source repository accordingly; and finally increments the pom version tag to the next SNAPSHOT version.

mvn release:prepare

After a release has been successfully prepared, run the perform goal. This goal checks out the prepared release and deploys it to the POM’s specified remote Maven repository for consumption by other teams and Maven builds.

mvn release:perform


An archetype is a powerful project template that stamps out a new project using your corporate Java package names and project name, establishes a baseline of dependencies, and includes basic sample code as a bonus.

You can leverage public archetypes for quickly starting a project that uses a familiar stack, such as Struts+Spring, or Tapestry+Hibernate. You can also create private archetypes within your company to offer new projects a level of consistent dependencies matching your approved corporate technology stack.

Using an Archetype

The default behavior of the generate goal is to bring up a menu of choices. You are then prompted for values such as the package name and artifactId. Type this command, then answer each question at the command-line prompt.

mvn archetype:generate

Creating Archetypes

An archetype can be created from an existing project, using it as the pattern by which to build the template. Run the command from the root of your existing project.

mvn archetype:create-from-project

Archetype Catalogs

The Maven Archetype plugin comes bundled with a default catalog of applications it can create, but other projects on the Internet also publish catalogs. To use an alternate catalog:

mvn archetype:generate -DarchetypeCatalog=<catalog>

A list of the most commonly used catalogs is as follows:




Maven has a robust offering of reporting plugins, commonly run with the site generation phase, that evaluate and aggregate information about the project, its contributors, its source, tests, code coverage, and more.

Adding a Report Plugin


Hot Tip

A list of commonly used reporting plugins can be reviewed here http://maven.apache.org/plugins/

About The Author

Photo of MatthewMcCullough

Matthew McCullough

Matthew McCullough is an Open Source Architect with the Denver, Colorado consulting firm Ambient Ideas, LLC which he co-founded in 1997. He’s spent the last 13 years passionately aiming for ever-greater efficiencies in software development, all while exploring how to share these practices with his clients and their team members. Matthew is a nationally touring speaker on all things open source and has provided long term mentoring and architecture services to over 40 companies ranging from startups to Fortune 500 firms. Feedback and questions are always welcomed at matthewm@ambientideas.com

Recommended Book


Several online resources for Maven have been available for some time, but nothing served as an introduction and comprehensive reference guide to this tool until now. Maven: The Definitive Guide is the ideal book to help you manage development projects for software, web applications, and enterprise applications. And it comes straight from the source.


Agile Adoption

Reducing Cost

By Gemba Systems

24,541 Downloads · Refcard 54 of 204 (see them all)


The Essential Agile Adoption Cheat Sheet

Adopting Agile methods in your projects is easier than you think. This DZone Refcard series focuses on building software faster, better, and cheaper. This Refcard in particular teaches you four practical strategies for reducing the cost of software development. Learn how specific Agile methods like evolutionary design, refactoring and automated testing will help you reduce costs and eliminate risks. Other topics covered include how to adopt Agile practices successfully, as well as what’s next in Agile and a wealth of references. You’ll also enjoy Agile Adoption: Decreasing Time to Market, the first Refcard in this Agile Adoption series.

Agile Adoption: Reducing Cost

By Gemba Systems

About This Agile Adoption Refcard

Faster, better, cheaper. That’s what we must do to survive. The Time to Market Refcard (a companion in this series) addresses faster, the Quality Refcard addresses better, and this Refcard addresses cheaper. This is about building the system for less.

Some of the costs of software development are associated with the man-hours needed to build the system, others with the cost of maintenance over time, and still others with hardware and software platform costs. Practices that reduce any or all of these costs without sacrificing quality reduce the overall cost of the system.

Then there is the Pareto principle, a.k.a. the 80/20 rule, which suggests that roughly 20% of a software system is used 80% of the time. This is backed up by research whose findings are even more dramatic (see Figure 2). Practices that help the team build only what is needed, in prioritized order, reduce the cost and still deliver the most important business value to the customer (the part she uses).

Figure 1: Practices that help reduce the cost of building software.


You can use this Refcard to get a 50,000-foot view of what is involved in reducing the cost of developing your systems.

Four Strategies to Reduce Cost

Software development is complex and often very complicated. It is HARD. This is not a new revelation; in fact, Fred Brooks, in the well-known paper “No Silver Bullet,” states:

The essence of a software entity is a construct of interlocking concepts ...

I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation.

There are four major strategies that can help you reduce the cost of building and maintaining your software:

Maintain the Theory of the Code


One way to look at software development is as ‘theory building’. That is, programs are theories (models of the world mapped onto software) held in the heads of the individuals on the development team. Great teams share an understanding of how the software system represents the world. Therefore they know where to modify the code when a requirement changes, they know exactly where to hunt for a bug that has been found, and they communicate well with each other about the world and the software.

Conversely, a team that does not have a shared ‘theory’ makes communication mistakes all the time. The customer may say something that the business analyst misunderstands because she has a different world view. She may, in turn, have a different understanding than the developers, so the software ends up addressing a different problem or (after several trials, errors, and frustrations) the right problem, but very awkwardly. Software where the team’s theories do not match, or worse, where the theory has been lost because the original software team is long gone, is very expensive to maintain.

Building a shared theory of the world-to-software-mapping is a human process that is best done face-to-face by trial and error and with significant time.

Build Less

It has been shown that we build many more features than are actually used. In fact, as Figure 2 shows, only about 20% of the functionality we build is used often or always; more than 60% of all functionality built into software is rarely or never used!


One way to reduce the cost of software is to find a way not to build the unused functionality. There are several Agile practices that help you get to that point.

Figure 2: Functionality usage. Most functionality built is never used.

Pay Less for Bug Fixes

Typically, anywhere between 60%-90% of software cost goes into the maintenance phase. That is, most of our money goes into keeping the software alive and useful for our clients after the initial build. Practices that help us reduce the cost of software maintenance will significantly affect the overall cost of the software product or system.

Figure 3: Maintaining the theory of the code reduces the cost of making design changes and fixing bugs. Building less helps too, because a smaller system is easier to understand and there is less to change.

Pay Less for Changes

The only thing constant in today’s software market is change. If we can embrace change, plan for it, and reduce its cost when it eventually happens, we can make significant savings. One of the strongest points of Agile development is that its practices enable you to roll with the punches and change your software as the business world changes.


The four strategies above (maintaining the theory of the code, building less, paying less for bug fixes, and paying less for changes) are not independent.

The Practices

Evolutionary Design


Book Evolutionary design is the simple design practice done continuously. Start off with a simple design and change that design only when a new requirement cannot be met by the existing design.
Dollar Evolutionary design reduces the cost by focusing on always building less. This, in turn, directly affects the cost of change drastically.
Cubes You are on a development team practicing automated developer tests, refactoring, and simple design. That’s it: this is one of those practices applicable to all types of development projects. The context is an especially good match if the technologies used are new to a large part of the team.

Simple Design


Book If a decision must be made between coding a design for today’s requirements and a more general design to accommodate tomorrow’s, the former is the simple design. Simple design meets the requirements of the current iteration and no more.
Dollar Simple design reduces cost because you build less code to meet the requirements and you maintain less code afterwards. Simple designs are easier to build, understand, and maintain.
Cubes Simple design should only be used when your team also is writing automated developer tests and refactoring. A simple design is fine as long as you can change it to meet future requirements.



Refactoring

Book The practice of refactoring changes the structure (i.e., the design) of the code while maintaining its behavior.
Dollar Costs are reduced because continuous refactoring keeps the design from degrading over time, ensuring that the code is easy to understand, maintain, and change.
Cubes You are on a development team that is practicing automated developer tests. You are currently working on a requirement that is not well-supported by the current design. Or you may have just completed a task (with its tests of course) and want to change the design for a cleaner solution before checking in your code to the source repository.

Automated Developer Tests


Book Automated developer tests are a set of tests that are written and maintained by developers to reduce the cost of finding and fixing defects—thereby improving code quality—and to enable the change of the design as requirements are addressed incrementally.
Dollar Automated developer tests reduce the cost of software development by creating a safety-net of tests that catch bugs early and enabling the incremental change of design. Beware, however, that automated developer tests take time to build and require discipline.
Cubes You are on a development team that has decided to adopt iterations and simple design and will need to evolve your design as new requirements are taken into consideration. Or you are on a distributed team. The lack of both face-to-face communication and constant feedback is causing an increase in bugs and a slowdown in development.

Evocative Documents


Book Evocative documents are documents that evoke memories, conversations, and situations that are shared by those who wrote the document. They are more meaningful and representative of a team’s understanding of the system than traditional documents.
Dollar Evocative documents help by accurately representing the team’s internal model of the software and allowing that model to be handed down from master to apprentice. This better understanding of the system also reduces its maintenance cost over time, because appropriate changes slow the deterioration of the software.
Cubes Current documentation isn’t working – as a document is passed from one person to another much of the context and value is lost, and as a result, the maintenance team’s understanding of the codebase constantly deteriorates. This is resulting in the calcification of your software system.

Automated Acceptance Tests


Book Automated acceptance tests are tests written at the beginning of an iteration that answer the question: “What will this requirement look like when it is done?” This means you start each iteration with failing tests, and a requirement is done only when its test passes.
Dollar This practice builds a regression suite of tests in an incremental manner and catches errors, miscommunications, and ambiguities very early on. This, in turn, reduces the amount of work that is thrown away and therefore enables building less. The tests also catch bugs and act as a safety-net during change.
Cubes You are on a development project with an onsite customer who is willing and able to participate more fully as part of the development team. Your team is also willing to make difficult changes to any existing code. You are willing to pay the price of a steep learning curve.

The remaining practices also help reduce the cost of software development. Because of the limited size of this Refcard, we only summarize them below.

A backlog is a prioritized list of requirements that enables a team to build less by making sure they always work on the most important items first; used as an evocative document showing a larger picture of the system, it also helps the team understand the theory of the code.
An iteration is a time-box in which the team builds what is on the backlog; each iteration is a potential release and therefore enables building less.
The done state is a definition, agreed upon by the entire team, of what constitutes the completion of a requirement. The closer the done state is to deployable software, the better, because it forces the team to resolve all hidden issues early and thus reduces cost.
A cross-functional team has the necessary expertise among its members to take a requirement from initial concept to a fully deployed and tested piece of software within one iteration: a requirement can be taken off the backlog, elaborated, developed, tested, and deployed.
A self-organizing team is in charge of its own fate. Management gives the team goals to achieve, and the team members are responsible for driving towards those goals and achieving them. A self-organizing team recognizes and responds to changes in its environment and in its knowledge as it learns. A self-organizing team is frequently a cross-functional team as well.
The retrospective is a meeting held at the end of a major cycle (iteration or release) to gather and analyze data about that cycle and decide on future actions to improve the team’s environment and process. A retrospective is about evaluating the people, their interactions, and the tools they use.
Continuous integration reduces the total time it takes to build a software system by catching errors early and often; errors cost significantly less to fix when caught early. It leverages both automated acceptance tests and automated developer tests to give the team frequent feedback and a much smaller price to pay for fixing a defect.
A user story is an evocative document for requirements: a very high-level description of the requirement to be built (it usually fits on a 3 x 5 index card) and a “promise for a conversation” later between the person carrying out the Customer Part of Team practice and the implementers.

How to Adopt Agile Practices Successfully

To successfully adopt Agile practices let’s start by answering the question “which ones first?” Once we have a general idea of how to choose the first practices there are other considerations.

Become “Well-Oiled” First

One way to look at software development is to see it as problem solving for business. When considering a problem to solve there are two fundamental actions that must be taken:

  • Solving the right problem. This is IT/Business alignment.
  • Solving the problem right. This is technical expertise.

Intuitively it would seem that we must focus on solving the right problem first because, no matter how well we execute our solution, if it is the wrong problem then our solution is worthless. This, unfortunately, is the wrong way to go. Research shown in Figure 4 indicates that focusing on alignment first is actually more costly and less effective than doing nothing. It also shows that being “well-oiled” (focusing on technical ability first) is much more effective, and a good stepping-stone to reaching the state where both issues are addressed.

Figure 4: The Alignment Trap (from “Avoiding the Alignment Trap in Information Technology,” Shpilberg, D., et al., MIT Sloan Management Review, Fall 2007).

This is supported anecdotally by increasing reports of failed Agile projects that do not deliver on promised results. They adopt many of the soft practices such as Iteration, but steer away from the technically difficult practices such as Automated Developer Tests, Refactoring, and Done State. They never reach the “well-oiled” state.

So the lesson here is: on your journey to adopt Agile practices that improve time to market (or any other business value, for that matter), your team will need to become “well-oiled” to see significant, sustained improvement. That means you should plan on adopting the difficult technical practices for sustainability.

Know What You Don’t Know

The Dreyfus Model of Skill Acquisition, is a useful way to look at how we learn skills – such as learning Agile practices necessary to reduce cost. It is not the only model of learning, but it is consistent, has been effective, and works well for our purposes. This model states that there are levels that one goes through as they learn a skill and that your level for different skills can and will be different. Depending on the level you are at, you have different needs and abilities. An understanding of this model is not crucial to learning a skill; after all, we’ve been learning long before this model existed. However, being aware of this model can help us and our team(s) learn effectively.

So let’s take a closer look at the different skill levels in the Dreyfus Model:


Figure 5: The Dreyfus Model of skill acquisition. One starts as a novice and, through experience and learning, advances towards expertise.

How can the Dreyfus Model help in an organization that is adopting agile methods? First, we must realize that this model is per skill, so we are not competent in everything. Secondly, if agile is new to us, which it probably is, then we are novices or advanced beginners; we need to search for rules and not break them until we have enough experience under our belts. Moreover, since everything really does depend on context, and we are not qualified to deal with context as novices and advanced beginners, we had better get access to some people who are experts or at least proficient to help guide us in choosing the right agile practices for our particular context. Finally, we’d better find it in ourselves to be humble and know what we don’t know to keep from derailing the possible benefits of this new method. And we need to be patient with ourselves and with our colleagues. Learning new skills will take time, and that is OK.

Choosing a Practice to Adopt

Choosing a practice comes down to finding the highest-value practice that fits your context. Figure 1 will guide you in determining which practices are most effective in reducing your costs and will also give you an understanding of the dependencies. The other parts of this section discuss further ideas that can help you refine your choices. Armed with this information:

Figure 6: Steps for choosing and implementing practices.

What Next?

This Refcard is a quick introduction to Agile practices that can help you reduce the cost of building and maintaining your software, and to how you might choose the practices that fit your organizational context. It is only a starting point. If you choose to embark on an Agile adoption initiative, your next step is to educate yourself and get as much help as you can afford. Books and user groups are a beginning. If you can, find an expert to join your team(s). Remember, if you are new to Agile, then you are a novice or advanced beginner and are not capable of making an informed decision about tailoring practices to your context.



Astels, David. Test-Driven Development: A Practical Guide. Upper Saddle River, NJ: Prentice Hall, 2003.
Avery, Christopher. Teamwork Is an Individual Skill. San Francisco: Berrett-Koehler Publishers, Inc., 2001.
Bain, Scott L. Emergent Design. Boston, MA: Pearson Education, 2008.
Beck, Kent. Test-Driven Development by Example. Boston, MA: Pearson Education, 2003.
Beck, K., and Andres, C. Extreme Programming Explained: Embrace Change (second edition). Boston: Addison-Wesley, 2005.
Cockburn, A. Agile Software Development: The Cooperative Game (2nd edition). Addison-Wesley Professional, 2006.
Cohn, M. Agile Estimating and Planning. Prentice Hall, 2005.
Crispin, L., and Gregory, J. Agile Testing: A Practical Guide for Testers and Agile Teams.
Derby, E., and Larsen, D. Agile Retrospectives: Making Good Teams Great. Raleigh: Pragmatic Bookshelf, 2006.
Duvall, Paul, Matyas, Steve, and Glover, Andrew. Continuous Integration: Improving Software Quality and Reducing Risk. Boston: Addison-Wesley, 2006.
Elssamadisy, A. Agile Adoption Patterns: A Roadmap to Organizational Success. Boston: Pearson Education, 2008.
Feathers, Michael. Working Effectively with Legacy Code. Upper Saddle River, NJ: Prentice Hall, 2005.
Jeffries, Ron. “Running Tested Features.” http://www.xprogramming.com/xpmag/jatRtsMetric.htm
Jeffries, Ron. Extreme Programming Adventures in C#. Redmond, WA: Microsoft Press, 2004.
Kerth, N. Project Retrospectives: A Handbook for Team Reviews. NY: Dorset House Publishing Company, 2001.
Kerievsky, Joshua. “Don’t Just Break Software, Make Software.” http://www.industriallogic.com/papers/storytest.pdf
Larman, C. Agile and Iterative Development: A Manager’s Guide. Boston: Addison-Wesley, 2004.
Larman, C., and Vodde, B. Scaling Lean and Agile Development. Boston: Addison-Wesley, 2009.
Massol, Vincent. JUnit in Action. Greenwich, CT: Manning Publications, 2004.
Meszaros, G. xUnit Test Patterns: Refactoring Test Code. Boston: Addison-Wesley, 2007.
Mugridge, R., and Cunningham, W. Fit for Developing Software: Framework for Integrated Tests. Upper Saddle River, NJ: Pearson Education, 2005.
Poppendieck, M., and Poppendieck, T. Implementing Lean Software Development. Addison-Wesley Professional, 2006.
Rainsberger, J.B. JUnit Recipes: Practical Methods for Programmer Testing. Greenwich, CT: Manning Publications, 2004.
Schwaber, K., and Beedle, M. Agile Software Development with Scrum. Upper Saddle River, NJ: Prentice Hall, 2001.
Senge, P. The Fifth Discipline: The Art and Practice of the Learning Organization. NY: Currency, 2006.
Surowiecki, J. The Wisdom of Crowds. NY: Anchor, 2005.

About Gemba Systems

Gemba Systems is comprised of a group of seasoned practitioners who are experts at Lean & Agile Development as well as crafting effective learning experiences. Whether the method is Scrum, Extreme Programming, Lean Development or others - Gemba Systems helps individuals and teams to learn and adopt better product development practices. Gemba Systems has taught better development techniques - including lean thinking, Scrum and Agile Methods - to thousands of developers in dozens of companies around the globe. To learn more visit http://us.gembasystems.com/

Recommended Book


Agile Adoption Patterns will help you whether you’re planning your first agile project, trying to improve your next project, or evangelizing agility throughout your organization. This actionable advice is designed to work with any agile method, from XP and Scrum to Crystal Clear and Lean. The practical insights will make you more effective in any agile project role: as leader, developer, architect, or customer.


Getting Started with BIRT

By Virgil Dodson

13,571 Downloads · Refcard 49 of 204 (see them all)


The Essential BIRT Cheat Sheet

Eclipse Business Intelligence and Reporting Tools (BIRT) is an open source, Eclipse-based reporting system that integrates with your Java/J2EE application to produce compelling reports. BIRT is the only top-level Eclipse project focused on business intelligence. This DZone Refcard provides an overview of the BIRT components, focusing on a few key capabilities of the BIRT Designer, BIRT Runtime APIs, and BIRT Web Viewer. This Refcard should be interesting to report designers as well as developers or architects involved in integrating BIRT reports into applications.

Getting Started with BIRT

By Michael Williams

What Is BIRT?

Eclipse Business Intelligence and Reporting Tools (BIRT) is an open-source, Eclipse-based reporting system that integrates with your Java EE application to produce compelling reports. BIRT is the only top-level Eclipse project focused on business intelligence. BIRT provides core reporting features such as report layout, data access, and scripting. This Refcard provides an overview of the BIRT components, focusing on a few key capabilities of the BIRT Designer, BIRT Runtime APIs, and BIRT Web Viewer. This information should be interesting to report designers as well as to developers or architects involved in integrating BIRT reports into applications.

Design and Runtime components

BIRT has two main components: a report designer based on Eclipse, and a runtime component that you can add to your application. The charting engine within BIRT can also be used by itself, allowing you to add charts to your application.

BIRT Components

Getting BIRT

Open source BIRT can be downloaded from http://download.eclipse.org/birt/downloads/ or http://www.birt-exchange.com. Several different packages containing BIRT are available, depending on your needs.

BIRT All-In-One
The fastest way to get started designing BIRT reports on Windows. Includes everything you need to start designing BIRT reports, including the full Eclipse SDK.
BIRT Framework
Adds the BIRT plug-in to your existing Eclipse environment. (Make sure you check the dependencies and update those, too.)
RCP Designer
A simple-to-use rich client version of the BIRT Report Designer dedicated to creating reports without the rest of the Eclipse development environment.
BIRT Runtime
Deployment components of the BIRT project, including a command-line example, API examples, and an example web viewer.
BIRT Web Tools Integration
Contains the plug-ins required to use the BIRT Web Project Wizard and the BIRT Viewer JSP tag library.

Hot Tip

You can also get BIRT into your existing Eclipse environment through the Eclipse Update Manager. Be sure to also select the Data Tools Project when using this approach.

BIRT report designers


The BIRT report designers are easy-to-use, visual report development tools that meet a comprehensive range of reporting requirements. The report designers include task-specific editors, builders, and wizards that make it easy to create reports that can be integrated into web applications. All BIRT report designers support:

  • Component-based model for reuse
  • Ease of use features
  • Support for a wide range of reports, layouts and formatting
  • Programmatic control
  • Data access across multiple data sources


Design File
An XML file that contains the data connection information, report layout, and instructions. Created when making a report in the BIRT Designer.
Template File
The starting point for a BIRT report. Ensures all reports you create start with some common elements, such as a company header or predefined styles.
Library File
Stores commonly used report elements, such as a company logo, so they are managed in one place for all reports.
Report Document
The completed report including layout instructions and data. Can be transformed into final report output, such as HTML, PDF, and XLS.

BIRT Data Sources

BIRT supports a variety of data sources and can be extended to support any data to which you have access. In addition to the list below, BIRT also ships with a connection to the ClassicModels sample database and can be easily extended to connect to your custom data source. BIRT also includes a Joint Data Set, which allows you to join data across data sources.

Flat File Data Source Supports tab, comma, semicolon, and pipe delimited data
JDBC Data Source Supports connections to relational databases
Scripted Data Source Allows you to communicate with Java objects or any data you can get from your application.
Web Services Data Source Supports connections to a web service. A wizard helps you point at a service through a WSDL and select the data.
XML Data Source Supports data from XML
Hive/Hadoop Data Source Allows access to Hadoop data through Hive using Hive Query Language (HQL)
Additional Data Sources BIRT has been extended by both the open source community and within commercial products allowing additional data connections to POJOs, Amazon RDS, LDAP, LinkedIn, Facebook, Excel, MongoDB, GitHub, and Spring Beans
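The Joint Data Set mentioned above is essentially a key-based join across two data sources. As a rough illustration of the idea, here is a plain-Java sketch (not the BIRT API; the class name, keys, and values are ours):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class JointDataSetSketch {
    // Inner-joins two keyed row sets: a row appears in the output only when
    // the key exists in both sources, mirroring a Joint Data Set's join column.
    public static Map<String, String> join(Map<String, String> left, Map<String, String> right) {
        Map<String, String> out = new LinkedHashMap<String, String>();
        for (Map.Entry<String, String> e : left.entrySet()) {
            String match = right.get(e.getKey());
            if (match != null) {
                out.put(e.getKey(), e.getValue() + "|" + match);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> customers = new LinkedHashMap<String, String>();
        customers.put("103", "Atelier graphique"); // sample keys/values are illustrative
        Map<String, String> cities = new LinkedHashMap<String, String>();
        cities.put("103", "Nantes");
        System.out.println(join(customers, cities)); // -> {103=Atelier graphique|Nantes}
    }
}
```

In BIRT the join type and key columns are chosen in the Joint Data Set wizard; this sketch shows only the inner-join case.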

Palette of report items

Label Use to include static (or localized) text within a report. Typically used for report titles, column headers, or any other report text.
Text Use to include richly formatted text in your report, including the ability to integrate HTML formatting with your dynamic data.
Dynamic Text Use to integrate your static text with dynamic or conditional data.
Data Use to include data from your connection in the report.
Image Use to include images from various embedded sources or dynamic locations.
Grid Use to define the layout of a report. Can be nested within other grids to support complex layouts.
List Use to display repeating data elements from your data source; creates a new report row for each data set row. Can contain multiple levels of grouping.
Table Use to display repeating data elements within your report; supports multiple columns and multiple levels of grouping.
Chart Use to add rich, interactive charting to your BIRT report.
Cross Tab Use to display grouped and dynamic data at both the row and column level.
Aggregation Use to build totals for tables and groups. Includes over 30 built-in functions like COUNT, SUM, MAX, MIN, AVE, RUNNINGSUM, COUNTDISTINCT, and RANK.
Additional Report Items BIRT has been extended by both the open source community and within commercial products, providing additional report items such as a Google Translate item, HTML5 charts, Flash gadgets/objects, and interactive HTML buttons.
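To make the aggregation behavior concrete, here is a plain-Java sketch (not BIRT's implementation; the class name is ours) of the RUNNINGSUM function listed above, which produces a cumulative total per row rather than SUM's single grand total:

```java
import java.util.Arrays;

public class RunningSumSketch {
    // Returns the cumulative total at each row position.
    public static int[] runningSum(int[] values) {
        int[] out = new int[values.length];
        int total = 0;
        for (int i = 0; i < values.length; i++) {
            total += values[i];
            out[i] = total;
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(runningSum(new int[]{10, 20, 30})));
        // -> [10, 30, 60]
    }
}
```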

Chart types

Bar Chart Sub-types: Percent Stacked; 2D, 2D w/ Depth, and 3D
Line Chart Sub-types: Percent Stacked; 2D and 3D
Area Chart Sub-types: Percent Stacked; 2D, 2D w/ Depth, and 3D
Pie Chart Sub-types: 2D and 2D w/ Depth
Meter Chart
Scatter Chart
Stock Chart
Bubble Chart
Difference Chart
Gantt Chart
Tube Chart Sub-types: Percent Stacked; 2D, 2D w/ Depth, and 3D
Cone Chart Sub-types: Percent Stacked; 2D, 2D w/ Depth, and 3D
Pyramid Chart Sub-types: Percent Stacked; 2D, 2D w/ Depth, and 3D
Radar Chart
Additional Chart Types BIRT has been extended by both the open source community and within commercial products, providing additional chart types such as heat maps, segment charts, HTML5 charts, Flash charts, and Flash maps.

Hot Tip

Creating your first report:
  • Create a new Report Project from the Business Intelligence and Reporting Tools category. Change to the Report Design perspective.
  • File -> New -> Report. Select the template called "My First Report", which launches a cheat sheet containing a step-by-step tutorial assisting you with connecting to data sources, creating data sets, and laying out your report.


Internationalization

BIRT supports internationalization of report data, including support for bidirectional text. BIRT also supports localization of static report elements, allowing you to replace report labels, table headers, and chart titles with localized text. BIRT uses resource files with name/value pairs and a *.properties file extension. For example, a file called MyLocalizedText_de.properties can include a line that says "welcomeMessage=Willkommen". To use these files within a BIRT report:

Assign Resource File to entire report Report -> Properties -> Resources -> Resource File
Assign individual keys to a label Label -> Properties -> Localization -> Text key
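Under the hood these resource files are ordinary java.util.Properties files. A small self-contained sketch of the name/value lookup, using the example key from the text (the class name is ours):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class LocalizationDemo {
    // Resolves a key against the contents of a *.properties resource file,
    // e.g. the MyLocalizedText_de.properties file from the text above.
    public static String lookup(String propsText, String key) {
        Properties p = new Properties();
        try {
            p.load(new StringReader(propsText));
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen for an in-memory reader
        }
        return p.getProperty(key);
    }

    public static void main(String[] args) {
        System.out.println(lookup("welcomeMessage=Willkommen", "welcomeMessage")); // -> Willkommen
    }
}
```

BIRT selects the right file by locale suffix (_de, _fr, and so on) before doing this kind of key lookup.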


Reports designed with the BIRT report designer can be richly formatted with styles that match your existing web application.

Built-in Styles Built-in styles can be shared in a report library for managing style across multiple reports.
CSS Style Sheet BIRT can import CSS files at design time or reference existing CSS files at run time.

Below are some examples of CSS styles:

.table-header {
  background : #93BE95;
  border-bottom : double;
  border-top : solid;
  border-top-width : thin;
  border-color : #483D8B;
  font-family : sans-serif;
  font-size : x-small;
  font-weight : bold;
  color : #FFFFE0;
}
.table-detail {
  background : #DFECDF;
  font-family : sans-serif;
  font-size : x-small;
  color : #426E44;
}
.table-footer {
  background : #93BE95;
  border-top : double;
  border-bottom : solid;
  border-bottom-width : thin;
  border-color : #483D8B;
  font-family : sans-serif;
  font-size : x-small;
  font-weight : bold;
  color : #FFFFE0;
}
.crosstab-detail {
  background : #DFECDF;
  font-family : sans-serif;
  font-size : x-small;
  color : #426E44;
}
.crosstab-header {
  background : #5B975B;
  font-family : sans-serif;
  font-size : small;
  font-weight : bold;
  color : #FFFFE0;
}
.crosstab-cell {
  border-top : solid;
  border-top-width : thin;
  border-bottom : solid;
  border-bottom-width : thin;
  border-left : solid;
  border-left-width : thin;
  border-right : solid;
  border-right-width : thin;
  border-color : #294429;
}

To see style examples, visit http://www.birt-exchange.org/org/devshare and enter keyword “style”.

Customization with Expressions, Scripting and Events

BIRT includes out-of-the-box functionality that is available through drag-and-drop or by setting a few properties. BIRT also supports more advanced customizations through expressions, scripting, and events. The expression builder in BIRT allows you to do conditional report processing just about anywhere you need to, instead of hard-coding values. For example, the expression below displays the shipped date for orders that have already shipped; otherwise, it displays the order date.

if (dataSetRow["STATUS"] == "Shipped") {
  dataSetRow["SHIPPEDDATE"];   // column names follow the ClassicModels sample database
} else {
  dataSetRow["ORDERDATE"];
}

Scripting of a BIRT report can be done in either JavaScript or Java, depending on your skill set and needs. Scripting allows you to intervene in the traditional event processing of the BIRT report. You can add scripting to report object, data source, and data element event types. Each of these event types has several events that you can override.

For example, you can use scripting to navigate your Java objects and add them to a BIRT Data Set.

// Data set open event: instantiate the Java class and read the data
favoritesClass = new Packages.SimpleClass();
favorites = favoritesClass.readData();

// Data set fetch event: copy one Java object into the current data set row
// (column names are assumed to match the data set definition)
var favrow = favorites.get(currentrow);
row["Customer"] = favrow[0];
row["Favorite"] = favrow[1];
row["Color"] = favrow[2];


Use scripting to change bar colors on a chart based on plotted data.

if (dph.getOrthogonalValue() < 1000) {
  fill.set(255, 0, 0);     // red
} else if (dph.getOrthogonalValue() < 5000) {
  fill.set(255, 255, 0);   // yellow
} else {
  fill.set(0, 255, 0);     // green
}

Use scripting to add or drop a report table based on a user parameter.

// beforeFactory event; "OrdersTable" is a hypothetical element name
if (params["showOrders"] == false) {
  reportContext.getDesignHandle().findElement("OrdersTable").drop();
}

Or use scripting to include dynamic images that are based on the report data.

// Image URI expression; the file names are illustrative
if (row["CREDITLIMIT"] <= 0) {
  "images/warning.png";
} else {
  "images/ok.png";
}

You can also use scripting within a text box using the <value-of> tag for generation time evaluation or with the <viewtime-value-of> tag for render time evaluation.

if (row["myField"] > 0) {
  "in stock";       // the returned strings are illustrative
} else {
  "out of stock";
}


Or use html <script> tags to create client-side script, like creating a function to hide a certain table that will be called by an html button.

// Toggle logic reconstructed; the button label strings are illustrative
function hidetable(tblbtn, tblname) {
  var mytable = document.getElementById(tblname);
  var hide = true;
  if (mytable.style.display == 'none') {
    hide = false;
  }
  if (hide) {
    mytable.style.display = 'none';
    tblbtn.value = 'Show Table';
  } else {
    mytable.style.display = '';
    tblbtn.value = 'Hide Table';
  }
}

For more scripting examples, visit http://www.birt-exchange.org/org/devshare and enter keyword “scripting”.

Report deployment options

Once you create your report design, there are several different ways to generate the report output.  Obviously, you can run these reports directly from the BIRT Designer, but you can also run BIRT reports from the command line, generate BIRT reports from your Java application using the BIRT APIs, integrate and customize the example web viewer, or deploy your reports within commercial business intelligence servers.


BIRT supplies several APIs and an example Java EE application for generating and viewing reports. The major APIs are the Design Engine API (DE API), the Report Engine API (RE API), and the Chart Engine API (CE API). In addition to the APIs, BIRT supports scripting in either Java or JavaScript within report designs.

Design Engine API (DE API) Use the Design Engine API to create a custom report designer tool, or to explore or modify BIRT report designs. The BIRT Designer uses this API. You can call this API within a BIRT script to modify the currently running report design.
Report Engine API (RE API) Use the Report Engine API to run BIRT reports directly from Java code or to create a custom web application front end for BIRT.
Chart Engine API (CE API) Use the Chart Engine API to create and render charts apart from BIRT.

To see API examples, visit http://www.birt-exchange.org/org/devshare and enter keyword “API”.

BIRT report engine tasks

There are several tasks supplied by the Report Engine API that can be used to generate report output. A few key tasks are listed below.

IRunAndRenderTask Use this task to run a report and create the output directly to one of the supported output formats. This task does not create a report document.
IRunTask Use this task to run a report and generate a report document, which is saved to disk.
IGetParameterDefinitionTask Use this task to obtain information about parameters and their default values.
IDataExtractionTask Use this task to extract data from a report document. The BIRT viewer uses this class to extract report data into CSV format.
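As a rough illustration of what IDataExtractionTask's CSV export amounts to, here is a plain-Java sketch (not the RE API; the class name is ours) that turns result-set rows into comma-separated lines:

```java
public class CsvSketch {
    // Joins each row's columns with commas, one line per row. No quoting or
    // escaping, so values containing commas are not handled in this sketch.
    public static String toCsv(String[][] rows) {
        StringBuilder sb = new StringBuilder();
        for (String[] row : rows) {
            sb.append(String.join(",", row)).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(toCsv(new String[][]{{"ORDER", "TOTAL"}, {"10101", "42.00"}}));
    }
}
```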

Simple BIRT Engine Example

static void executeReport() throws EngineException
{
  IReportEngine engine = null;
  EngineConfig config = null;
  try {
    // start up the Platform
    config = new EngineConfig();
    // the log level was truncated in the original; FINEST is illustrative
    config.setLogConfig("C:\\BIRT_231\\logs", java.util.logging.Level.FINEST);
    Platform.startup(config);
    // create a new Report Engine
    IReportEngineFactory factory = (IReportEngineFactory) Platform
      .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
    engine = factory.createReportEngine(config);
    // open the report design (the file name is illustrative)
    IReportRunnable design = null;
    design = engine.openReportDesign("C:\\BIRT_231\\designs\\parameters.rptdesign");
    // create a RunAndRender task
    IRunAndRenderTask task = engine.createRunAndRenderTask(design);
    // pass the necessary parameters
    task.setParameterValue("ordParam", (new Integer(10101)));
    // set render options, including the output type
    PDFRenderOption options = new PDFRenderOption();
    options.setOutputFileName("C:\\BIRT_231\\output\\order.pdf");
    options.setOutputFormat("pdf");
    task.setRenderOption(options);
    // run the task
    task.run();
    task.close();
    engine.destroy();
  } catch (Exception ex) {
    ex.printStackTrace();
  } finally {
    Platform.shutdown();
  }
}

Hot Tip

Platform startup and shutdown should occur only at the beginning and the end of the application, respectively. Also, be sure to add the JARs from the reportengine/lib directory of the runtime download to the classpath/buildpath.

Web Viewer

The BIRT Web Viewer is an example application that illustrates generating and rendering BIRT report output in a web application. The viewer demonstrates report pagination, an integrated table of contents, report export to several formats, and printing to local and server-side printers.

The BIRT Web Viewer can be used in a variety of ways:

Stand-alone Use as a pre-built web application for running and viewing reports.
Modify Viewer Source Use as a starter web application that you can customize to your needs.
RCP Application Use as a plug-in for your existing RCP application.
Integrated with existing web application The viewer can be integrated using URLs or the BIRT JSP tag library.

The BIRT Web Viewer consists of two main Servlets, the ViewerServlet and the BirtEngineServlet. These Servlets handle three mappings: /frameset, /run, and /preview.

/frameset Renders the report in the full AJAX viewer, complete with toolbar, navigation bar, and table of contents features. This mapping also generates an intermediate report document from the report design file to support the AJAX-based features. For example: http://localhost:8080/viewer/frameset?_report=myreport.rptdesign&parm1=value
/run Runs and renders the report but does not create a report document. This mapping does not supply HTML pagination, TOC, or toolbar features, but does use the AJAX framework to collect parameters, support report cancelling, and retrieve the report output in HTML format. For example: http://localhost:8080/viewer/run?_report=myreport.rptdesign&parm1=value
/preview Runs and renders the report but does not generate a report document, although an existing report document can be used; in this case, just the render operation occurs. The output from the run and render operation is sent directly to the browser. For example: http://localhost:8080/viewer/preview?_report=myreport.rptdesign&parm1=value

Viewer URL Parameters

Below are a few of the key URL parameters available for the viewer. These parameters can be used along with the Servlet mappings (frameset, run, and preview) listed in the Web Viewer section.

Attribute Description
__id Unique identifier for the viewer.
__title Sets the report title shown in the viewer.
__showtitle Determines if the report title is shown in the frameset viewer. Defaults to true. Valid values are true and false.
__toolbar Determines if the report toolbar is shown in the frameset viewer. Defaults to true. Valid values are true and false.
__navigationbar Determines if the navigation bar is shown in the frameset viewer. Defaults to true. Valid values are true and false.
__parameterpage Determines if the parameter page is displayed. By default, the frameset, run, and preview mappings automatically determine if the parameter page is required. This setting overrides this behavior. Valid values are true and false.
__report Sets the name of the report design to process. This setting can be absolute path or relative to the working folder.
__document Sets the name for the rptdocument. The document is created when the report engine separates run and render tasks, and is used to support features like table of contents and pagination. This setting can be an absolute path or relative to the working folder.
__format Specifies the desired output format, such as pdf, html, doc, ppt, or xls.
__locale Specifies the locale for the specific operation. Note that this setting overrides the default locale.
__page Specifies page to render.
__pagerange Specifies the page range to render, such as 1-4, 7.
__bookmark Specifies a bookmark in the report to load. The viewer automatically loads the appropriate page.
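These URL parameters compose into ordinary query strings. A small sketch of building a viewer URL from them (plain Java; the class name, host, port, and report name are illustrative):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class ViewerUrlBuilder {
    // Composes a viewer URL of the form <base>/<mapping>?k1=v1&k2=v2.
    public static String build(String base, String mapping, Map<String, String> params) {
        StringBuilder sb = new StringBuilder(base).append('/').append(mapping).append('?');
        boolean first = true;
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (!first) sb.append('&');
            try {
                sb.append(e.getKey()).append('=')
                  .append(URLEncoder.encode(e.getValue(), "UTF-8"));
            } catch (UnsupportedEncodingException ex) {
                throw new RuntimeException(ex); // UTF-8 is always available
            }
            first = false;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> p = new LinkedHashMap<String, String>();
        p.put("__report", "myreport.rptdesign"); // report name is illustrative
        p.put("__format", "pdf");
        System.out.println(build("http://localhost:8080/viewer", "frameset", p));
        // -> http://localhost:8080/viewer/frameset?__report=myreport.rptdesign&__format=pdf
    }
}
```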

Viewer Web.xml settings

The BIRT Web Viewer has several configuration options. These settings can be configured by modifying the web.xml file located in the WebViewerExample/WEB-INF folder. Below are a few of the key settings available for the viewer.

Attribute Description
BIRT_VIEWER_LOCALE Sets the default locale for the Web Viewer.
BIRT_VIEWER_WORKING_FOLDER The default location for report designs. If the report design specified in a URL parameter is relative, this path is prepended to the report name.
BIRT_VIEWER_DOCUMENT_FOLDER If the __document parameter is not used, a report document is generated in this location. If this setting is left blank, the default value, webapp/documents, is used. If the __document URL parameter is used and the value is relative, the report document is created in the working folder.
BIRT_VIEWER_IMAGE_DIR Specifies the default location to store temporary images generated by the report engine. If this setting is left blank, the default location of webapp/report/images is used.
BIRT_VIEWER_LOG_DIR Specifies the default location to store report engine log files. If this setting is left blank, the default location of webapp/logs is used.
BIRT_VIEWER_LOG_LEVEL Sets the report engine log level. Valid values are the java.util.logging levels, such as OFF, SEVERE, WARNING, INFO, FINE, and FINEST.
BIRT_VIEWER_SCRIPTLIB_DIR Specifies the default location to place JAR files used by the script engine or JARs containing Java event handlers. These JARs are appended to the classpath. If this setting is left blank, the default value of webapp/scriptlib is used.
BIRT_RESOURCE_PATH Specifies the resource path used by the report engine. The resource path is used to search for libraries, images, and properties files used by a report.
BIRT_VIEWER_MAX_ROWS Specifies the maximum number of rows to retrieve from a data set.
BIRT_VIEWER_PRINT_SERVERSIDE Specifies whether server-side printing is supported. If set to OFF, the toolbar icon used for server-side printing is removed automatically. Valid values are ON and OFF.

Viewer JSP Tag library

The BIRT Web Viewer includes a set of tags to make it easy to integrate BIRT reports into browser pages. These tags are available from the BIRT Web Tools Integration download. Below are a few of the key JSP tags and a description of their usage.

Tag Description
viewer Displays the complete Viewer inside an IFRAME. This tag allows you to use frameset and run Servlet mappings. The AJAX Framework is used.
report Displays the report inside an IFRAME or DIV tag without the Viewer. This tag allows you to use preview mapping and does not create an rptdocument. The AJAX Framework is not used.
param Used to set parameter values when using the viewer or report tags. This tag must be nested within the viewer or report tag.
value Used to specify multiple values for a given param tag.
parameterPage Used to launch the BIRT parameter dialog or to create a customized parameter entry page. This tag can be used with the frameset, run, or preview mappings to launch the viewer after the parameters are entered.
paramDef Used within a parameterPage tag to retrieve pre-generated HTML for specific parameter control types such as radio, checkbox, dynamic or cascaded parameters

Simple viewer JSP tag example

<%@ taglib uri="/birt.tld" prefix="birt" %>
<%-- the reportDesign value is illustrative --%>
<birt:report id="birtViewer" pattern="preview"
  reportDesign="myreport.rptdesign"
  height="600" width="800"
  title="My Viewer Tag"
  showTitle="true" showToolBar="true" />


In addition to delivering paginated report content to a web browser, BIRT also supports several other output formats. The formats listed below are supported by both the Report Engine API and the BIRT Web Viewer.

Paginated web output An example web viewer is included with BIRT, allowing for on-demand paginated web output.
DOC Microsoft Word Document.
HTML Suitable for creating HTML pages of report data deployable to any server.
PDF Adobe PDF output suitable for emailing or printing.
Postscript Output can be directed to a printer that supports postscript.
Open Document Text, Spreadsheet, and Presentation

BIRT extension points

The APIs in BIRT define extension points that let the developer add custom functionality to the BIRT framework. These extensions can take the form of custom data sources, report items, chart types, output formats, and functions. Once implemented, these custom extensions show up alongside the built-in types. For example, you can create a custom report item, like a rotated text label, that will show up in the BIRT Palette along with the existing items. Below are some of the more common extension points.

Data Sources BIRT supports the Open Data Access (ODA) architecture, which means it can be extended to support custom data sources.
Functions BIRT allows you to create custom functions that extend those available in BIRT Expressions.
Report Items Report Items can be extended, allowing you to create your own custom report item.
Chart Types Additional chart types can be added to BIRT as plug-ins.
Output Emitters BIRT can be extended to include your own custom output type. For example, a simple CSV emitter exists and can be added to BIRT.

Additional BIRT resources

Eclipse BIRT Project Site http://www.eclipse.org/birt
BIRT Exchange Community Site http://www.birt-exchange.com
Submitting/Searching BIRT Bugs http://bugs.eclipse.org/bugs/enter_bug.cgi?product=BIRT
Online BIRT Documentation http://www.birt-exchange.com/modules/documentation/

About The Author

Michael Williams

Michael Williams graduated from the University of Kansas with a degree in computer engineering.
Currently, he works as a BIRT Evangelist at Actuate, where he has been working with BIRT for the past 4 years. One of his roles is to provide technical content for the BIRT Exchange website in the form of DevShare articles, monitoring the forums, and maintaining a blog. Other roles include putting together the BIRT Report newsletter, attending software conferences as a technical presence at the BIRT Exchange booth, and the occasional speaking session.

Recommended Book


Topics Discussed Include:
  • Installing and deploying BIRT
  • Deploying a BIRT report to an application server
  • Understanding BIRT architecture
  • Scripting in a BIRT report design
  • Integrating BIRT functionality in applications
  • Working with the BIRT extension framework


Flex & Spring Integration

By Jon Rose and James Ward

17,416 Downloads · Refcard 48 of 204 (see them all)


The Essential Flex & Spring Integration Cheat Sheet

Adobe Flex Software is a popular framework for building Rich Internet Applications (RIAs). The Flex framework is used to create SWF files that run inside Flash Player. This DZone Refcard shows you how to integrate Flex and Spring to create a powerful platform for building robust RIAs. It starts off by showing you how to set up a server-side Java project with BlazeDS and the Spring Framework. After configuring your project with a basic Spring bean for use in BlazeDS, you’ll write your Flex application to use the Spring/BlazeDS service.

Flex & Spring Integration

By Jon Rose and James Ward

ABOUT Adobe Flex

Adobe Flex Software is a popular framework for building Rich Internet Applications (RIAs). The Flex framework is used to create SWF files that run inside Flash® Player. The framework was built for use by developers and follows traditional application development paradigms rather than the timeline-based development found in the Flash Professional authoring tools. Applications are built using the Flex Builder™ IDE, an Eclipse-based development environment. ActionScript® 3 is used to access data and build user interface components for web and desktop applications that run inside Flash Player or Adobe AIR® Software. The Flex framework also uses a declarative XML language called MXML to simplify Flex development and layout.

ABOUT Spring

The Spring Framework is one of the most popular ways to build enterprise Java applications. Unlike traditional Java EE development, Spring provides developers with a full-featured "lightweight container" that makes applications easy to test and develop. Although Spring is best known for its dependency injection features, it also provides features for implementing typical server-side enterprise applications, such as declarative security and transaction management.

Why Flex and Spring?

Adobe Flex has strong ties to Java, which include an Eclipse-based IDE and BlazeDS, its open source server-based Java remoting and web messaging technology. In addition, most enterprise projects that use Flex build on a Java back end. With Flex and Java so often married together, it is only natural to want to integrate Flex with Spring-based Java back ends. Beyond greenfield development, many organizations want to revamp or replace the user interface of existing enterprise Spring applications using Flex. In late 2008, the Spring community recognized these cases and began working on the Spring BlazeDS Integration project to add support for Flex development with Java and Spring.

By default BlazeDS creates instances of server-side Java objects and uses them to fulfill remote object requests. This approach doesn’t work with Spring, as the framework is built around injecting the service beans through the Spring container. The Spring integration with BlazeDS allows you to configure Spring beans as BlazeDS destinations for use as remote objects in Flex.

Integrating Flex and Spring

This Refcard assumes that you are already familiar with Spring and Flex. If you need an introduction or refresher to either, check out the Very First Steps in Flex and/or Spring Configuration DZone Refcardz.

To use BlazeDS, the server-side application could be any Java application that deploys as a WAR file. This Refcard uses the Eclipse IDE to create and edit the Java project. This Refcard walks you through the following steps:

  • Set up a server-side Java project with BlazeDS and the Spring Framework
  • Configure the Java project with a basic Spring bean for use in BlazeDS
  • Write a Flex application to use the Spring/BlazeDS service

Hot Tip

BlazeDS provides simple two-way communication with Java back ends. Adobe Flash Player supports a serialization protocol called AMF that alleviates the bottlenecks of text-based protocols and provides a simpler way to communicate with servers. AMF is a binary protocol for exchanging data that can be used over HTTP in place of text-based protocols that transmit XML. Applications using AMF can eliminate an unnecessary data abstraction layer and communicate more efficiently with servers. To see a demonstration of the performance advantages of AMF, see the Census RIA Benchmark at http://www.jamesward.org/census. The specification for AMF is publicly available, and numerous implementations of AMF exist in a variety of technologies, including Java, .Net, PHP, Python, and Ruby.

Hot Tip

The open source BlazeDS project includes a Java implementation of AMF that is used for remotely communicating with server-side Java objects as well as for a publish/subscribe messaging system. The BlazeDS remoting technology allows developers to easily call methods on Plain Old Java Objects (POJOs), Spring services, or EJBs. Developers can use the messaging system to send messages from the client to the server, or from the server to the client. BlazeDS can also be linked to other messaging systems such as JMS or ActiveMQ. Because the remoting and messaging technologies use AMF over HTTP, they gain the performance benefits of AMF as well as the simplicity of fewer data abstraction layers. BlazeDS works with a wide range of Java-based application servers, including Tomcat, WebSphere, WebLogic, JBoss, and ColdFusion.

To follow along with this tutorial you will need:

  • The Eclipse IDE
  • The BlazeDS download (blazeds.war)
  • Apache Tomcat v6.0
  • A JRE (version 5 or higher)

First, set up the server-side Java web project in Eclipse by creating a web application from the blazeds.war file (found inside the blazeds zip file).

  • Import the blazeds.war file to create the project:
    • Choose File > Import
    • Select the WAR file option. Specify the location of the blazeds.war file. For the name of the web project, type dzone-server
    • Click Finish

Now you can create a server that will run the application:

  • Select File > New > Other
  • Select Server > Server
  • Click Next
  • Select Apache > Tomcat v6.0 Server
  • Click Next
  • Specify the location where Tomcat is installed and select the JRE (version 5 or higher) to use
  • Click Next
  • Select dzone-server in the Available Projects list
  • Click Add to add it to the Configured Projects list
  • Click Finish

Next, in the dzone-server project create the basic Java classes to be used by BlazeDS and Spring:

public class MyEntity {
  private String firstName;
  private String lastName;
  private String emailAddress;

  public String getFirstName() {
    return firstName;
  }
  public void setFirstName(String firstName) {
    this.firstName = firstName;
  }
  public String getLastName() {
    return lastName;
  }
  public void setLastName(String lastName) {
    this.lastName = lastName;
  }
  public String getEmailAddress() {
    return emailAddress;
  }
  public void setEmailAddress(String emailAddress) {
    this.emailAddress = emailAddress;
  }
}
Listing 1: Java entity to be passed between Java and Flex

import java.util.List;

public interface MyService {
  public List<MyEntity> getMyEntities();
}
Listing 2: Java Service Interface

import java.util.ArrayList;
import java.util.List;

public class MyServiceImpl implements MyService {
  public List<MyEntity> getMyEntities() {
    List<MyEntity> list = new ArrayList<MyEntity>();
    MyEntity entity = new MyEntity();
    entity.setFirstName("Jane");             // sample data; any values will do
    entity.setLastName("Doe");
    entity.setEmailAddress("jane@example.com");
    MyEntity entity2 = new MyEntity();
    MyEntity entity3 = new MyEntity();
    // ... populate entity2 and entity3 the same way ...
    list.add(entity);
    list.add(entity2);
    list.add(entity3);
    return list;
  }
}
Listing 3: Java Example Service Implementation

Listings 1, 2, and 3 are very basic Java classes that you’ll use as examples for this tutorial. In a real-world application, the service implementation would likely connect to one or more enterprise services for data, such as a relational database. In this case, it simply returns a hard-coded set of entities as an ArrayList.
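The service can be exercised without any Flex or BlazeDS machinery. The following self-contained sketch mirrors Listings 1, 2, and 3, collapsed into one file with invented sample values, and prints what the client will ultimately receive:

```java
import java.util.ArrayList;
import java.util.List;

// One-file mirror of Listings 1-3. The field values are invented sample
// data; the class and method names match the tutorial.
public class ServiceDemo {

    public static class MyEntity {
        public String firstName, lastName, emailAddress;
        public MyEntity(String firstName, String lastName, String emailAddress) {
            this.firstName = firstName;
            this.lastName = lastName;
            this.emailAddress = emailAddress;
        }
    }

    public interface MyService {
        List<MyEntity> getMyEntities();
    }

    public static class MyServiceImpl implements MyService {
        public List<MyEntity> getMyEntities() {
            List<MyEntity> list = new ArrayList<MyEntity>();
            list.add(new MyEntity("Jane", "Doe", "jane@example.com"));
            list.add(new MyEntity("John", "Smith", "john@example.com"));
            list.add(new MyEntity("Ada", "Lovelace", "ada@example.com"));
            return list;
        }
    }

    public static void main(String[] args) {
        // BlazeDS would serialize this list to AMF for the Flex DataGrid;
        // here we just print what the client would receive.
        for (MyEntity e : new MyServiceImpl().getMyEntities()) {
            System.out.println(e.firstName + " " + e.lastName + " <" + e.emailAddress + ">");
        }
    }
}
```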

The basic Java web project with the BlazeDS dependencies is now complete.

Next, configure the Java project with a basic Spring bean for the MyService interface:

  • Copy the Spring libraries, the Spring BlazeDS Integration library, and the ANTLR library to the project's dzone-server/WebContent/WEB-INF/lib directory
  • Create a basic Spring Configuration File:
    • Right Click WebContent/WEB-INF and then choose New > File
    • For the file name, type application-config.xml
    • Click Finish
    • Copy and paste the text from Listing 4 into the file

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">
  <!-- Spring beans -->
  <bean id="myService" class="MyServiceImpl" />
</beans>

Listing 4: Basic Spring Configuration

Those familiar with Spring should recognize this as a basic Spring configuration for creating a simple bean from the MyServiceImpl class. Later in this tutorial you will be using this bean through BlazeDS.

At this point, you have a basic Java web project with a default BlazeDS configuration. Now, you’ll change the default BlazeDS configuration to use the newly created Spring bean.

To begin configuring Spring BlazeDS Integration, update the web.xml file by removing the default BlazeDS configuration and replacing it with the code from Listing 5.

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
  <!-- The DispatcherServlet bootstraps the Spring context -->
  <servlet>
    <servlet-name>Spring MVC Dispatcher Servlet</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
      <param-name>contextConfigLocation</param-name>
      <param-value>/WEB-INF/application-config.xml</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <!-- Map /spring/* requests to the DispatcherServlet -->
  <servlet-mapping>
    <servlet-name>Spring MVC Dispatcher Servlet</servlet-name>
    <url-pattern>/spring/*</url-pattern>
  </servlet-mapping>
</web-app>

Listing 5: web.xml

The web.xml contents in Listing 5 create a servlet from Spring that will process all BlazeDS requests at http://localhost:8080/dzone-server/spring. This will be the base URL for accessing the BlazeDS endpoint. Also, notice that this is a standard DispatcherServlet for Spring.

Now that you have Spring wired into the Java web application, you will update the basic Spring configuration from Listing 4 so that it will work with BlazeDS. Add the highlighted section from Listing 6 to your application-config.xml file.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:flex="http://www.springframework.org/schema/flex"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
        http://www.springframework.org/schema/flex
        http://www.springframework.org/schema/flex/spring-flex-1.0.xsd">

  <!-- Spring beans -->
  <bean id="myService" class="MyServiceImpl" />

  <!-- Simplest possible message broker -->
  <flex:message-broker />

  <!-- Exposes myService as a BlazeDS destination -->
  <flex:remote-service ref="myService" />
</beans>

Listing 6: Advanced Spring Configuration for BlazeDS

Listing 6 exposes the MyServiceImpl class as a BlazeDS destination. First, the Flex® namespace is added to the configuration. Note that the XSD will not be published from Spring until the final 1.0 release; until then you will have to add it manually to your XML catalog. With the Flex namespace added, the configuration uses the message-broker tag to create the MessageBrokerFactoryBean. Since there is no additional configuration information provided, the MessageBroker will be created with "sensible defaults," assuming that the services configuration file is in WEB-INF/flex/services-config.xml. The remote-service tag creates a destination from existing Spring beans.

Hot Tip

In Spring BlazeDS Integration release 1.0.0M2, the standard BlazeDS configuration file (services-config.xml) is still used for configuration of the communication channels.

Next, update the default BlazeDS services-config.xml file (found in the WebContent/WEB-INF/flex folder) to reflect the Spring URL defined in the web.xml file. Replace the contents of the file with the code in Listing 7.

<?xml version="1.0" encoding="UTF-8"?>
<channel-definition id="my-amf"
    class="mx.messaging.channels.AMFChannel">
  <endpoint url="http://{server.name}:{server.port}/{context.root}/spring/messagebroker/amf"
      class="flex.messaging.endpoints.AMFEndpoint"/>
</channel-definition>
<channel-definition id="my-polling-amf"
    class="mx.messaging.channels.AMFChannel">
  <endpoint url="http://{server.name}:{server.port}/{context.root}/spring/messagebroker/amfpolling"
      class="flex.messaging.endpoints.AMFEndpoint"/>
  <properties>
    <polling-enabled>true</polling-enabled>
    <polling-interval-seconds>4</polling-interval-seconds>
  </properties>
</channel-definition>

Listing 7: Update channel definition in services-config.xml

Note that the endpoint URLs for the my-amf and my-polling-amf channels in Listing 7 include "spring" after the context.root parameter. This is the only configuration change you need to make in the BlazeDS default configuration files. All the remote destinations are configured in the Spring application-config.xml file.

You are now done configuring the server-side Spring / BlazeDS Java application. You may want to start up the Tomcat server to verify that your configuration is correct.

Now you can build the Flex application to use the Spring service remotely. Follow these steps to create the Flex project:

  • Select File > New > Other
  • In the Select A Wizard dialog box, select Flex Project
  • In the New Flex Project box, type in a project name: dzone-flex
  • Use the default location (which will already be checked)
  • Select Web Application (Runs In Flash Player)
  • Select None as the Application Server Type
  • Click Next
  • Specify the Output folder to be the location of the dzone-server project’s WebContent directory, such as: C:\workspace\dzone-server\WebContent\
  • Click Finish

Your project will open in the MXML code editor and you’ll see a file titled main.mxml. Open the file and add the Flex® application code from Listing 8. This code accesses the MyServiceImpl class in Java and returns the results to Flex.

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
    creationComplete="srv.getMyEntities()">
  <mx:AMFChannel id="myamf"
      uri="http://localhost:8080/dzone-server/spring/messagebroker/amf"/>
  <mx:ChannelSet id="channelSet" channels="{[myamf]}"/>
  <mx:RemoteObject id="srv"
      destination="myService" channelSet="{channelSet}"/>
  <mx:DataGrid dataProvider="{srv.getMyEntities.lastResult}"/>
</mx:Application>

Listing 8: Final main.mxml source file for accessing the Spring service

The code in Listing 8 sets up the AMFChannel for accessing the Spring service. Note that the destination "myService" is the same as the bean you defined in the application-config.xml Spring configuration file. Also, you might have noticed that none of the Flex code contains anything specific to Spring. The Flex code doesn't have to change, as the client code has no knowledge of the fact that Spring is being used on the server.

To get the dzone-server to update the deployed web application you may need to right-click the dzone-server project and select Refresh.

With all steps of the tutorial completed, you can start the Tomcat server in Eclipse and access the application at the following URL: http://localhost:8080/dzone-server/main.html


Figure 1: The running application

To allow the Flex application to be launched in Run or Debug mode from Eclipse:

  • Right-click the dzone-flex project
  • Select Properties, then Flex Build Path
  • For the Output Folder URL, type http://localhost:8080/dzone-server/
  • Click OK to update the project properties

Now you can right-click the main.mxml file and select Run As > Flex Application or Debug As > Flex Application.

The running application displays the data that was hard coded in the MyServiceImpl Java class, as seen in Figure 1. Now you have a complete sample application using Spring, BlazeDS, and Flex.

User Authentication

One of the benefits of using Spring is that it provides support for many common enterprise requirements, including security. In this section, you’ll expand on the basic application by using Spring Security to protect the service channel with role-based authentication.

To add security to the application, download Spring Security 2.0.4 and its supporting libraries, and then add the following files to the WEB-INF/lib directory in the dzone-server project:

  • cglib-2.2.jar
  • aspectjrt.jar (located in the aspectj.jar file)
  • asm-3.1.jar
  • asm-commons-3.1.jar
  • spring-security-acl-2.0.4.jar
  • spring-security-core-2.0.4.jar
  • spring-security-core-tiger-2.0.4.jar

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns="http://www.springframework.org/schema/security"
    xmlns:beans="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
        http://www.springframework.org/schema/security
        http://www.springframework.org/schema/security/spring-security-2.0.4.xsd">
  <http auto-config="true" session-fixation-protection="none"/>
  <authentication-provider>
    <user-service>
      <user name="jeremy" password="atlanta"
          authorities="ROLE_USER, ROLE_ADMIN" />
      <user name="keith" password="melbourne"
          authorities="ROLE_USER" />
    </user-service>
  </authentication-provider>
</beans:beans>

Listing 9: applicationContext-security.xml Spring Security configuration file

The first step is to create a very basic Spring Security configuration file. This example uses hard-coded credentials; in a real application, a database or an LDAP server will likely be the source of the credentials. Both methods of authentication can easily be configured with Spring Security. To learn more about Spring Security and how to add more advanced configurations, see the project home page at: http://static.springframework.org/spring-security/site/
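To make the role-based idea concrete, here is a hypothetical, dependency-free Java sketch of what an in-memory user service does: it stores the same hard-coded users as Listing 9 and answers the two questions the framework asks, "do the credentials match?" and "does this user hold the required authority?". All names here are illustrative; Spring Security's real API is far richer and should be used instead.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy in-memory authenticator mirroring the hard-coded users in Listing 9.
public class SimpleAuthenticator {
    private final Map<String, String> passwords = new HashMap<String, String>();
    private final Map<String, Set<String>> roles = new HashMap<String, Set<String>>();

    public void addUser(String name, String password, String... authorities) {
        passwords.put(name, password);
        roles.put(name, new HashSet<String>(Arrays.asList(authorities)));
    }

    // True only when the credentials match a known user.
    public boolean authenticate(String name, String password) {
        return password != null && password.equals(passwords.get(name));
    }

    // Role check of the kind used to guard a method such as getMyEntities().
    public boolean hasAuthority(String name, String authority) {
        Set<String> granted = roles.get(name);
        return granted != null && granted.contains(authority);
    }

    public static void main(String[] args) {
        SimpleAuthenticator auth = new SimpleAuthenticator();
        auth.addUser("jeremy", "atlanta", "ROLE_USER", "ROLE_ADMIN");
        auth.addUser("keith", "melbourne", "ROLE_USER");
        System.out.println(auth.authenticate("keith", "melbourne")); // true
        System.out.println(auth.hasAuthority("keith", "ROLE_ADMIN")); // false
    }
}
```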

To create a basic Spring Security Configuration File:

  • Right-click WebContent/WEB-INF and then choose New > File
  • For the file name, type applicationContext-security.xml
  • Click Finish
  • Copy the code from Listing 9 to the file

This configuration allows the user to authenticate through the BlazeDS channel. Add the security configuration to the Spring configuration in the web.xml by updating the contextConfigLocation param-value as shown in Listing 10.

<servlet>
  <servlet-name>Spring MVC Dispatcher Servlet</servlet-name>
  <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
  <init-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>
      /WEB-INF/application-config.xml
      /WEB-INF/applicationContext-security.xml
    </param-value>
  </init-param>
  <load-on-startup>1</load-on-startup>
</servlet>

Listing 10: web.xml File with Security Configuration Added

At this point, you need to update the Spring configuration file to secure the getMyEntities method on myService. To do this, update the application-config.xml file with the code in Listing 11.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:flex="http://www.springframework.org/schema/flex"
    xmlns:security="http://www.springframework.org/schema/security"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
        http://www.springframework.org/schema/flex
        http://www.springframework.org/schema/flex/spring-flex-1.0.xsd
        http://www.springframework.org/schema/security
        http://www.springframework.org/schema/security/spring-security-2.0.4.xsd">

  <flex:message-broker>
    <flex:secured />
  </flex:message-broker>

  <bean id="myService" class="MyServiceImpl">
    <security:intercept-methods>
      <security:protect method="getMyEntities" access="ROLE_USER" />
    </security:intercept-methods>
  </bean>

  <flex:remote-service ref="myService" />
</beans>

Listing 11: Updated application-config.xml Spring configuration file

If you run the Flex® application at this point, the getMyEntities service call will fail because the user is not authenticated.

Now that the server is configured to protect the service, you will update the Flex application to require the user to authenticate before loading data from the getMyEntities service method. The updated code shown in Listing 12 presents users with a login form (See Figure 2) until they are successfully authenticated. Once the user is authenticated, the view state is updated showing the DataGrid bound to the service results, and the service method is called.

Update the main.mxml page with the code in Listing 12. You can then run the application and log in with one of the hard-coded username and password combinations from the applicationContext-security.xml configuration file.

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
  <mx:Script>
    <![CDATA[
      import mx.rpc.events.ResultEvent;
      import mx.rpc.events.FaultEvent;
      import mx.rpc.AsyncToken;
      import mx.rpc.AsyncResponder;

      private function login():void {
        var token:AsyncToken = channelSet.login(username.text, password.text);
        token.addResponder(new AsyncResponder(loginResult, loginFault));
      }

      private function loginResult(event:ResultEvent, token:AsyncToken):void {
        // get data
        srv.getMyEntities();
        // change state
        currentState = "userAuthenticated";
      }

      private function loginFault(event:FaultEvent, token:AsyncToken):void {
        invalidLogin = true;
      }
    ]]>
  </mx:Script>

  <mx:AMFChannel id="myamf"
      uri="http://localhost:8080/dzone-server/spring/messagebroker/amf"/>
  <mx:ChannelSet id="channelSet" channels="{[myamf]}"/>
  <mx:RemoteObject id="srv"
      destination="myService" channelSet="{channelSet}"/>

  <mx:Boolean id="invalidLogin">false</mx:Boolean>

  <!-- Login Form -->
  <mx:Panel id="loginPanel" title="Login Form">
    <mx:Label text="Invalid username or password"
        includeInLayout="{invalidLogin}" visible="{invalidLogin}" />
    <mx:Form defaultButton="{loginButton}">
      <mx:FormItem width="100%" label="Username">
        <mx:TextInput id="username"/>
      </mx:FormItem>
      <mx:FormItem width="100%" label="Password">
        <mx:TextInput id="password" displayAsPassword="true" />
      </mx:FormItem>
      <mx:FormItem>
        <mx:Button id="loginButton" label="Login" click="login()"/>
      </mx:FormItem>
    </mx:Form>
  </mx:Panel>

  <mx:states>
    <mx:State name="userAuthenticated">
      <mx:RemoveChild target="{loginPanel}" />
      <mx:AddChild>
        <mx:DataGrid dataProvider="{srv.getMyEntities.lastResult}" />
      </mx:AddChild>
    </mx:State>
  </mx:states>
</mx:Application>

Listing 12: Update Flex Application

The Flex code in Listing 12 is very basic. It presents the user with the loginPanel until loginResult() is invoked by a successful login. The username and password parameters come from a login form and are passed to the channelSet’s login() method. On a successful login, the loginResult() handler function is called, and the post-login logic is invoked. In this case, the currentState is updated to userAuthenticated, which removes the login form and adds the DataGrid bound to the service call’s results. In addition, the getMyEntities service method is called to load the data.
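The callback structure of that flow can be modeled outside Flex as well. This hypothetical Java sketch uses a CompletableFuture as a stand-in for channelSet.login(): the caller registers a continuation that either switches to the authenticated state or flags an invalid login, mirroring loginResult() and loginFault(). The credential check here is a local stub, not a real server call.

```java
import java.util.concurrent.CompletableFuture;

// Plain-Java model of the asynchronous login flow in Listing 12.
public class AsyncLoginSketch {
    // Stand-in for channelSet.login(); a real call would hit the server.
    static CompletableFuture<Boolean> login(String user, String password) {
        return CompletableFuture.supplyAsync(
            () -> "jeremy".equals(user) && "atlanta".equals(password));
    }

    public static void main(String[] args) {
        final StringBuilder state = new StringBuilder("loginForm");
        login("jeremy", "atlanta").thenAccept(ok -> {
            // Mirrors loginResult()/loginFault(): switch the view state on
            // success, flag an invalid login otherwise.
            state.setLength(0);
            state.append(ok ? "userAuthenticated" : "invalidLogin");
        }).join();
        System.out.println(state); // userAuthenticated
    }
}
```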

Figure 2: The login form

Now, you have a basic Flex®, Spring, and BlazeDS application protected with authentication.


In this Refcard, you first created a Spring bean that was exposed to the Flex client through BlazeDS using Spring BlazeDS Integration. Next, you secured your service by adding Spring Security, and basic Flex authentication. As you can see, the new Spring BlazeDS Integration project makes integrating Flex and Spring easy and straightforward. The combination of the two technologies creates a powerful platform for building robust RIAs. You can learn more about integrating Flex and Spring on the Spring BlazeDS Integration project site: http://www.adobe.com/devnet/flex/flex_java.html

About The Author


Jon Rose

Jon Rose is the Flex Practice Director for Gorilla Logic, an enterprise software consulting company located in Boulder, Colorado. He is an editor and contributor to InfoQ.com, an enterprise software community. Visit his website at: www.ectropic.com

Gorilla Logic, Inc. provides enterprise Flex and Java consulting services tailored to businesses in all industries. www.gorillalogic.com


James Ward

James Ward is a Technical Evangelist for Flex at Adobe. He travels the globe speaking at conferences and teaching developers how to build better software with Flex. Visit his website at: www.jamesward.com

First Steps in Flex, co-authored by James, will give you just enough information, and just the right information, to get you started learning Flex, enough so that you feel confident in taking your own steps once you finish the book. For more information visit: http://www.firststepsinflex.com

Recommended Book

First Steps in Flex

First Steps in Flex will take you through your first steps on your way to becoming a powerful user interface programmer.

We’ve gone to great lengths to show you the world of Flex without burying you in information you don’t need right now. At the same time, we give pointers to places where you can go to explore more.

First Steps in Flex is the ideal starting point for any programmer who wants to quickly become proficient in Flex 3.



By Holger Schwichtenberg

23,553 Downloads · Refcard 46 of 204 (see them all)


The Essential ASP.NET Cheat Sheet

ASP.NET stands for “Active Server Pages .NET”, which is a framework for the development of dynamic websites and web services. This DZone Refcard summarizes the most commonly used core functions and controls in ASP.NET. Author Holger Schwichtenberg shows you how to set up your ASP.NET development environment, explains the Webform model, and showcases common WebControls for Lists and Validation. He also explores ASP.NET state management features, configuration file management and shows the typical content of an .aspx page. This Refcard is useful for some of the most common tasks with ASP.NET, regardless of version number.
ASP.NET stands for “Active Server Pages .NET”, though the full name is rarely used. ASP.NET is a framework for the development of dynamic websites and web services. It is based on the Microsoft .NET Framework and has been part of .NET since version 1.0 was released in January 2002. The current version, 3.5 Service Pack 1, was released in August 2008. The next version, 4.0, is expected to be released at the end of 2009.

This Refcard summarizes the most commonly used core functions of ASP.NET. You will find this Refcard useful for some of the most common tasks with ASP.NET, regardless of the version you are using.


The best development environment for ASP.NET is Microsoft’s Visual Studio. You can either use the free Visual Web Developer Express Edition (http://www.microsoft.com/express/vwd/) or any of the commercial editions of Visual Studio (e.g. Visual Studio Professional). The latest version that supports ASP.NET 2.0 and ASP.NET 3.5 is “2008” (internal version: 9.0). The .NET Framework and ASP.NET are part of the setup of Visual Web Developer Express Edition and Visual Studio. However, make sure you install Service Pack 1 for Visual Studio 2008, as this will not only fix some bugs but also add a lot of new features.

ASP.NET needs a server speaking the HTTP protocol (a web server) to run. Visual Web Developer Express 2005/2008 and Visual Studio 2005/2008 contain a web server for local use on your development machine. The “ASP.NET Development Server” (ADS) is used when you specify a “File System” location when creating your project; specifying “HTTP” instead means you address a local or remote instance of Internet Information Server (IIS) or any other ASP.NET-enabled web server. ADS is a lightweight server that cannot be reached from other systems. However, there are differences between ADS and IIS, especially in the security model, that sometimes make it hard for beginners to deploy a website to IIS that was developed with ADS. On the production system you will use IIS and install only the .NET Framework, because Visual Studio is not required there.

Hot Tip

If you choose to use Internet Information Server (IIS), install the IIS on your machine before installing the .NET Framework or Visual Studio. If you did not follow this installation order, you may use aspnet_regiis.exe to properly register ASP.NET within the IIS.

ASP.NET Web Applications

An ASP.NET application consists of several .aspx files. An .aspx file can contain HTML markup and special ASP.NET markup (called Web Controls) as well as code (Single Page Model). However, the Code Behind Model, which comes with a separate code file called the “Code Behind File” (.aspx.cs or .aspx.vb), provides a cleaner architecture and better collaboration between web designers and web developers. ASP.NET applications may contain several other elements, such as configuration files (at most one per folder), a global application file (only one per web application), web services, data files, media files, and additional code files.

There are two types of web projects: “Website Projects” (File/New/Web Site) and “Web Application Projects” (File/New/Project/Web Application). “Website” is the newer model, while Web Application Projects mainly exist for compatibility with Visual Studio .NET 2002 and 2003. This Refcard will only cover Web Site Projects; most of its content is also valid for Web Application Projects.

Hot Tip

A well designed ASP.NET application distinguishes itself by having as little code in the Code Behind files and other code files as possible. The large majority of your code should be in referenced Assemblies (DLLs) as they are reusable in other Web applications. If you don’t want to put your code into a separate assembly, you at least should use separate classes in the “App_Code” folder within your web project.

Figure 1: The Content of an ASP.NET Web Application

The ASP.NET Web form Model

ASP.NET uses an object- and event-oriented model for web pages. The ASP.NET Page Framework analyzes all incoming requests as well as the .aspx page that the request is aimed at. The Page Framework creates an object model (alias control tree) based on this information and also fires a series of events. Event handlers in your code can access data, call external code in referenced .NET assemblies and manipulate the object model (e.g. fill a listbox or change the color of a textbox). After all event handlers have executed, the Page Framework renders the current state of the object model into HTML tags with optional CSS formatting, JavaScript code and state information (e.g. hidden fields or cookies). After interacting with the page, the user can issue a new request by clicking a button or a link that will restart the whole process.
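As a rough analogy (not ASP.NET code; the event names are borrowed from the page lifecycle, everything else is invented), the fixed fire-events-then-render cycle can be sketched in a few lines of Java:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the request/response cycle: the framework fires lifecycle
// steps in a fixed order, then renders whatever state the handlers left
// behind. Here the "object model" is just a list of strings.
public class PageLifecycleSketch {
    interface Handler { void handle(List<String> model); }

    static String processRequest(Handler loadHandler) {
        List<String> model = new ArrayList<String>();
        model.add("Init");              // Page_Init: build the control tree
        loadHandler.handle(model);      // Page_Load: user code mutates the model
        model.add("PreRender");         // last chance to change controls
        return String.join(",", model); // rendering: model -> output (here, a string)
    }

    public static void main(String[] args) {
        String rendered = processRequest(m -> m.add("Load:fill listbox"));
        System.out.println(rendered); // Init,Load:fill listbox,PreRender
    }
}
```

Each new request repeats the whole cycle from scratch, which is why ASP.NET needs the state-management facilities (ViewState, Session) described later.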


Figure 2: The ASP.NET request/response life cycle

Web Controls

An ASP.NET page can contain common HTML markup. However, only ASP.NET web controls provide full object- and event-based functionality. Web controls have two representations: in the .aspx files they are tags with the prefix “asp:”, e.g. <asp:TextBox>; in the code they are .NET classes, e.g. System.Web.UI.WebControls.TextBox.

Table 1 lists the core members of all web controls that are implemented in the base class System.Web.UI.WebControls.WebControl.

Id: Unique identifier for a control within a page.
ClientID: The unique identifier that ASP.NET generates if more than one control on the page has the same (String) ID.
Page: Pointer to the page where the control lives.
Parent: Pointer to the parent control; may be the same as “Page”.
HasControls(): True if the control has sub-controls.
Controls: Collection of sub-controls.
FindControl("NAME"): Finds a sub-control within the Controls collection by its ID.
BackColor, BorderColor, BorderStyle, BorderWidth, Font, ForeColor, Height, Width, ToolTip, TabIndex: Self-explanatory properties for formatting the control.
CssClass: The name of the CSS class used for formatting the control.
Style: A collection of single CSS styles, if you don't want to use a CSS class or want to override behavior in a CSS class.
EnableViewState: Disables page-scoped state management for this control.
Visible: Disables rendering of the control.
Enabled: Set to false if you want the control to be disabled in the browser.
Focus(): Sets the focus to this control.
DataBind(): Gets the data (if the control is bound to a data source).
Init(): Fires during initialization of the page. Last chance to change basic settings, e.g. the culture of the current thread that determines the behavior used for rendering the page.
Load(): Fires during the loading of the page. Last chance to do any preparations.
PreRender(): Fires after all user-defined event handlers have completed and right before rendering of the page starts. Your last chance to make any changes to the controls on the page!
UnLoad(): Fires during the unloading of a page.

Table 1: Core members of the base class System.Web.UI.WebControls.WebControl.

Tables 2, 3 and 4 list the most commonly used controls for ASP.NET web pages. However, there are more controls included in the .NET platform and many more from third parties not mentioned here.

Each entry gives the control, its purpose, and important specific members in addition to those inherited from WebControl:

<asp:Label>: Static text (Text)
<asp:TextBox>: Edit text, single line, multiline, or password (TextMode, Text, TextChanged())
<asp:FileUpload>: Choose a file for upload (FileName, FileContent, FileBytes, SaveAs())
<asp:Button>: Display a classic button (Click(), CommandName, Command())
<asp:ImageButton>: Display a clickable image (ImageUrl, ImageAlign, Click(), CommandName, Command())
<asp:LinkButton>: Display a hyperlink that works like a button (Text, Click(), CommandName, Command())
<asp:CheckBox>: Choose an option (Text, Checked, CheckedChanged())
<asp:RadioButton>: Choose an option (Text, Checked, CheckedChanged())
<asp:HyperLink>: Display a hyperlink (NavigateUrl, Target, Text)
<asp:Image>: Display an image (ImageUrl, ImageAlign)
<asp:ImageMap>: Display a clickable image with hot-spot regions (ImageUrl, ImageAlign, HotSpots)

Table 2: Core controls for ASP.NET web pages.

List controls display several items that the user can choose from. The selectable items are declared statically in the .aspx file, created manually using the Items collection, or created automatically by using data binding. For data binding you can fill DataSource with any enumerable collection of .NET objects. DataTextField and DataValueField specify which properties of the objects in the collection are used for the list control.

Hot Tip

If you bind a collection of primitive types such as strings or numbers, just leave DataTextField and DataValueField empty.

Hot Tip

Setting AppendDataBoundItems to true will add the data-bound items to the static items declared in the .aspx file. This will allow the user to select values that don't exist in the data source, such as the values “All” or “None”.
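Under the hood, DataTextField-style binding is just "read this named property from every element of the data source". The following hypothetical Java sketch shows the reflective-lookup idea; the Country class and its field names are invented for illustration and are not part of ASP.NET:

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DataBindingSketch {
    public static class Country {
        public String name;
        public String isoCode;
        public Country(String name, String isoCode) {
            this.name = name;
            this.isoCode = isoCode;
        }
    }

    // Pull the named public field from every element, the way a list control
    // resolves DataTextField against each object in DataSource.
    static List<String> bind(List<?> dataSource, String dataTextField) {
        List<String> items = new ArrayList<String>();
        try {
            for (Object row : dataSource) {
                Field f = row.getClass().getField(dataTextField); // resolve the property
                items.add(String.valueOf(f.get(row)));
            }
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("No such property: " + dataTextField, e);
        }
        return items;
    }

    public static void main(String[] args) {
        List<Country> source = Arrays.asList(
            new Country("Germany", "DE"), new Country("Australia", "AU"));
        System.out.println(bind(source, "name"));    // [Germany, Australia]
        System.out.println(bind(source, "isoCode")); // [DE, AU]
    }
}
```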
Each entry gives the control, its purpose, and important specific members in addition to those inherited from WebControl. All list controls share Items.Add(), Items.Remove(), DataSource, DataTextField, DataValueField, SelectedIndex, SelectedItem, SelectedValue, and SelectedIndexChanged():

<asp:DropDownList>: Allows the user to select a single item from a drop-down list
<asp:ListBox>: Single or multiple selection box (Rows, SelectionMode)
<asp:CheckBoxList>: Multi-selection check box group (RepeatLayout, RepeatDirection)
<asp:RadioButtonList>: Single-selection radio button group (RepeatLayout, RepeatDirection)
<asp:BulletedList>: List of items in a bulleted format (BulletImageUrl, BulletStyle)

Table 3: List controls for ASP.NET web pages.

Validation Controls check user input. They always refer to one input control ControlToValidate and display a text ErrorMessage if the validation fails. They perform the checks in the browser using JavaScript and also on the server. The client side validation can be disabled by setting EnableClientScript to false. However, the server side validation cannot be disabled for security reasons.
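The reason the server-side check is mandatory is that nothing forces a client to run the JavaScript. This hypothetical Java sketch shows the kind of re-validation a server performs for a range check; it mimics a RangeValidator's logic but uses none of ASP.NET's actual APIs:

```java
// Why server-side validation can't be disabled: a client can bypass the
// browser script entirely, so the server must repeat the same check on
// whatever string actually arrives in the request.
public class RangeValidatorSketch {
    static boolean isValid(String input, int min, int max) {
        try {
            int value = Integer.parseInt(input.trim());
            return value >= min && value <= max;
        } catch (NumberFormatException e) {
            return false; // non-numeric input fails validation
        }
    }

    public static void main(String[] args) {
        // The browser-side check may never have run; the server re-checks.
        System.out.println(isValid("42", 1, 100));  // true
        System.out.println(isValid("999", 1, 100)); // false
        System.out.println(isValid("abc", 1, 100)); // false
    }
}
```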

Each entry gives the control, its purpose, and important specific members in addition to those inherited from WebControl:

<asp:RequiredFieldValidator>: Checks if a user changed the initial value of an input control (ControlToValidate, ErrorMessage, Display, EnableClientScript, IsValid, InitialValue)
<asp:CompareValidator>: Compares the value entered by the user in an input control with the value entered in another input control, or with a constant (ControlToValidate, ErrorMessage, Display, EnableClientScript, IsValid, ValueToCompare, Type)
<asp:RangeValidator>: Checks whether the value of an input control is within a specified range of values (ControlToValidate, ErrorMessage, Display, EnableClientScript, IsValid, MinimumValue, MaximumValue, Type)
<asp:RegularExpressionValidator>: Checks if the user input matches a given regular expression (ControlToValidate, ErrorMessage, Display, EnableClientScript, IsValid, ValidationExpression)
<asp:CustomValidator>: Performs custom checks on the server and optionally also on the client using JavaScript (ControlToValidate, ErrorMessage, Display, EnableClientScript, IsValid, ValidateEmptyText, ClientValidationFunction)

Table 4: Validation controls for ASP.NET web pages.
Hot Tip

For the CustomValidator you can optionally write a JavaScript function that performs client-side validation. The function has to look like this:

<script type="text/javascript">
function ClientValidate(source, args) {
  if (x > 0) // any condition
  { args.IsValid = true; }
  else
  { args.IsValid = false; }
}
</script>

The Page Class

All web pages in ASP.NET are .NET classes that inherit from the base class “System.Web.UI.Page”. The class Page has associations to several other objects such as Server, Request, Response, Application, Session and ViewState (see figure 3). Therefore, developers have access to a wide array of properties, methods and events within their code. Table 5 lists the most important members of a Page and its dependent classes. Please note that the Page class has the class Control in its inheritance hierarchy and therefore shares a lot of members with the WebControl class (e.g. Init(), Load(), Controls, FindControl). However, these members are not repeated here.


Figure 3: Object Model of “System.Web.UI.Page”

Member Description
Page.Title Title string of the page
Page.IsPostBack True, if page is being loaded in response to a client postback. False if it is being loaded for the first time.
Page.IsAsync True, if the page is loaded in an asynchronous request (i.e. AJAX request)
Page.IsValid True, if all validation server controls in the current validation group validated successfully
Page.Master Returns the MasterPage object associated with this page
Page.PreviousPage Gets the page that transferred control to the current page (only available if using Server.Transfer, not available with Response.Redirect)
Page.SetFocus(Control ControlID) Sets the browser focus to the specified control (using JavaScript)
Trace.Write Writes trace information to the trace log.
User.Identity.IsAuthenticated True, if the user has been authenticated.
User.Identity.AuthenticationType Type of Authentication used (Basic, NTLM, Kerberos, etc)
User.Identity.Name Name of the current user
Server.MachineName Name of the computer the web server is running on
Server.GetLastError() Gets the Exception object for the last exception
Server.HtmlEncode(Text) Applies HTML encoding to a string
Server.UrlEncode(Path) Applies URL encoding to a string
Server.MapPath(Path) Maps the given relative path to an absolute path on the web server
Server.Transfer(Path) Stops the execution of the current page and starts executing the given page as part of the current HTTP request
Request.AcceptTypes String array of client-supported MIME accept types.
Request.Browser Provides information about the browser
Request.ClientCertificate Provides the certificate of the client, if SSL client authentication is used
Request.Cookies The list of cookies that the browser sent to the web server
Request.Form The name and value of the input fields the browser sent to the web server
Request.Headers Data from the HTTP header the browser sent to the web server
Request.IsAuthenticated True, if the user is authenticated
Request.IsSecureConnection True, if SSL is used
Request.Path Virtual path of the HTTP request (without server name)
Request.QueryString Name/Value pairs the browser sent as part of the URL
Request.ServerVariables Complete list of name/value pairs with information about the server and the current request
Request.Url Complete URL of the request
Request.UrlReferrer Referring URL of the request (the previous page the browser visited)
Request.UserAgent Browser identification
Request.UserHostAddress IP address of the client
Request.UserLanguages Preferred languages of the user (determined by browser settings)
Response.BinaryWrite(bytes) Writes binary data to an HTTP response output stream.
Response.Write(string) Writes information to an HTTP response output stream.
Response.WriteFile(string) Writes the specified file directly to an HTTP response output stream.
Response.BufferOutput True if the output to client is buffered
Response.Cookies Collection of cookies that shall be sent to the browser
Response.Redirect(Path) Redirects a client to a new URL using the HTTP status code 302
Response.StatusCode HTTP status code (integer) of the output returned to the client
Response.StatusDescription HTTP status string of the output returned to the client
Session.SessionID Unique identifier for the current session (a session is user specific)
Session.Item Gets or sets individual session values.
Session.IsCookieless True, if the ID for the current session is embedded in the URL; false, if it is stored in an HTTP cookie
ViewState.Item Gets or sets the value of an item stored in the ViewState, which is a hidden field used for state management within a page
Application.Item Gets or sets the value of an item stored in the application state, which is an application-scope state management facility

Table 5: Most important members of the Page class and its associated classes

A Typical Page

Figure 4 shows the typical content of an .aspx page and Figure 5 the content of a typical code-behind class. The sample used is a registration form with three fields: Name, Job Title, and Email Address.

<%@ Page Language="C#" AutoEventWireup="true"
  CodeFile="PageName.aspx.cs" Inherits="PageName" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
  <title>Registration Page</title>
  <link href="MyStyles.css" rel="stylesheet" type="text/css" />
  <style type="text/css">
    #C_Headline { font-size: large; font-weight: bold; }
  </style>
</head>
<body>
  <form id="C_Form" runat="server">
    <asp:Label runat="server" ID="C_Headline" Text="Please register:" />
    <p>Name:
      <asp:TextBox ID="C_Name" runat="server"></asp:TextBox>
      <asp:RequiredFieldValidator ID="C_NameVal" ControlToValidate="C_Name"
        runat="server" ErrorMessage="Name required"></asp:RequiredFieldValidator>
    </p>
    <p>Job Title:
      <asp:DropDownList ID="C_JobTitle" runat="server">
        <asp:ListItem Text="Software Developer" Value="SD"></asp:ListItem>
        <asp:ListItem Text="Software Architect" Value="SA"></asp:ListItem>
      </asp:DropDownList>
    </p>
    <p>Email Address:
      <asp:TextBox ID="C_EMail" runat="server"></asp:TextBox>
      <asp:RequiredFieldValidator ID="C_EMailVal1" ControlToValidate="C_EMail"
        runat="server" ErrorMessage="EMail required"></asp:RequiredFieldValidator>
      <asp:RegularExpressionValidator ID="C_EMailVal2" ControlToValidate="C_EMail"
        runat="server" ErrorMessage="Email not valid"
        ValidationExpression="\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*">
      </asp:RegularExpressionValidator>
    </p>
    <asp:Button ID="C_Register" runat="server" Text="Register"
      OnClick="C_Register_Click" />
  </form>
</body>
</html>

Figure 4: Typical content of an ASPX file

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class PageName : System.Web.UI.Page
{
  protected void Page_Load(object sender, EventArgs e)
  {
    // If an authenticated user starts using this page,
    // use his login name in the name textbox
    if (!Page.IsPostBack && Page.User.Identity.IsAuthenticated)
    {
      this.C_Name.Text = Page.User.Identity.Name;
      this.C_Name.Enabled = false;
    }
  }

  protected void C_Register_Click(object sender, EventArgs e)
  {
    if (Page.IsValid) // if all validation controls succeeded
    {
      // call business logic (third argument assumed to be the job title)
      if (BL.Register(this.C_Name.Text, this.C_EMail.Text,
          this.C_JobTitle.SelectedValue))
      {
        // redirect to confirmation page (page name assumed)
        Response.Redirect("Confirmation.aspx");
      }
      else
      {
        // change the headline
        this.C_Headline.Text = "You are already registered!";
      }
    }
  }
}

Figure 5: Typical content of a Code Behind file

State Management

State management is a big issue in web applications because the HTTP protocol itself is stateless. There are three standard options for state management: hidden fields, URL parameters, and cookies. However, ASP.NET provides some integrated abstractions over these base mechanisms, known as View State, Session State, and Application State. The direct use of cookies is also supported in ASP.NET.

Hot Tip

Disabling the View State (EnableViewState=false on a control) will significantly reduce the size of the page sent to the browser. However, you will have to take care of the state management of such controls on your own, and some complex controls will lose functionality without View State.
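For example, View State can be switched off per control via the EnableViewState attribute. The control below is a hypothetical sketch, assuming a grid that is re-bound to its data source on every request and therefore does not need View State:

```aspx
<%-- Hypothetical example: a data-bound control that is re-populated
     on every request can safely run without View State. --%>
<asp:GridView ID="C_Results" runat="server" EnableViewState="false" />
```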

The following code snippet shows how to set values for a counter stored in each of these mechanisms:

ViewState["Counter"] = CurrentCounter_Page + 1;
Session["Counter"] = CurrentCounter_Session + 1;
Application["Counter"] = CurrentCounter_Application + 1;
Response.Cookies["Counter"].Value = (CurrentCounter_User + 1).ToString(); // cookie values are strings
Response.Cookies["Counter"].Expires = DateTime.MaxValue; // no expiration

Figure 6: Setting Values

Hot Tip

When reading values from these objects, you first have to check whether they already exist. Otherwise you will receive the exception "NullReferenceException: Object reference not set to an instance of an object."

The following code snippet shows how to read the current counter value from each of these mechanisms:

Mechanism | Scope | Lifetime | Base Mechanism | Data Type | Storing Value | Reading Value
View State | Single user on a single page | Leaving the current page | Hidden field "__VIEWSTATE" | Object (any serializable .NET data type) | Page.ViewState | Page.ViewState
Session State | Latest interaction of a single user with the web page | Limited number of minutes after the last request from the user | Cookie ("ASP.NET_SessionId") or URL parameter "(S(...))" plus server-side store (local RAM, RAM on a dedicated server, or database) | Object (must be serializable if the store is not the local RAM) | Page.Session | Page.Session
Cookies | A single user | Closing of the browser or a dedicated point in time | Cookie | String | Page.Response.Cookies | Page.Request.Cookies
Application State | All users | Shutting down the web application | Local RAM | Object | Page.Application | Page.Application

long CurrentCounter_Application, CurrentCounter_Session,
     CurrentCounter_Page, CurrentCounter_User;

if (Application["Counter"] == null) { CurrentCounter_Application = 0; }
else { CurrentCounter_Application = Convert.ToInt64(Application["Counter"]); }

if (Session["Counter"] == null) { CurrentCounter_Session = 0; }
else { CurrentCounter_Session = Convert.ToInt64(Session["Counter"]); }

if (ViewState["Counter"] == null) { CurrentCounter_Page = 0; }
else { CurrentCounter_Page = Convert.ToInt64(ViewState["Counter"]); }

if (Request.Cookies["Counter"] == null) { CurrentCounter_User = 0; }
else { CurrentCounter_User = Convert.ToInt64(Request.Cookies["Counter"].Value); }

Figure 7: Reading Values


Configuration

All configuration for ASP.NET applications is stored in XML-based configuration files with the fixed name "web.config". In addition to the configuration file in the application root folder, subfolders may also contain a web.config that overrides parent settings. There are also the global configuration files machine.config and web.config in the folder \Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG that provide default settings for all web applications. (Note: v2.0.50727 is still correct for ASP.NET 3.5!)
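As a sketch of this override mechanism, a web.config placed in a subfolder can relax the parent settings for just that folder. The folder purpose and the rule below are hypothetical:

```xml
<?xml version="1.0"?>
<!-- Hypothetical web.config in a subfolder (e.g. a "Public" area):
     overrides the root authorization settings for this folder only,
     allowing anonymous users even if the root web.config denies them -->
<configuration>
  <system.web>
    <authorization>
      <allow users="*" />
    </authorization>
  </system.web>
</configuration>
```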

Visual Studio and Visual Web Developer create a default root configuration file in your web project that contains a lot of internal settings needed for ASP.NET 3.5 to work properly. Figure 8 shows a fragment from a web.config file with settings that are often used.

<configuration>
  <connectionStrings>
    <!-- Connection strings -->
    <add name="RegistrationDatabase" connectionString="Data
      Source=EO2;Initial Catalog=RegistrationDatabase;Integrated
      Security=True" providerName="System.Data.SqlClient" />
  </connectionStrings>
  <appSettings>
    <!-- User defined settings -->
    <add key="WebmasterEMail" value="hs@IT-Visions.de" />
  </appSettings>
  <system.web>
    <!-- Specify a login page -->
    <!-- Use the URL for storing the authentication ID if cookies are not allowed -->
    <!-- Set the authentication timeout to 30 minutes -->
    <authentication mode="Forms">
      <forms loginUrl="Login.aspx" cookieless="AutoDetect" timeout="30" />
    </authentication>
    <authorization>
      <!-- Deny all unauthorized access to this application -->
      <deny users="?" />
    </authorization>
    <!-- Use the URL for storing the session ID if cookies are not allowed -->
    <!-- Set the session timeout to 30 minutes -->
    <sessionState cookieless="AutoDetect" timeout="30"></sessionState>
    <!-- Display custom error pages for remote users -->
    <customErrors mode="RemoteOnly" defaultRedirect="GenericErrorPage.htm">
      <error statusCode="403" redirect="NoAccess.htm" />
      <error statusCode="404" redirect="FileNotFound.htm" />
    </customErrors>
    <!-- Turn on debugging -->
    <compilation debug="true" />
  </system.web>
</configuration>

Figure 8: Typical settings in the web.config file.

Hot Tip

Please make sure you turn debugging off again before deploying your application, as debug mode decreases execution performance.


Deployment

ASP.NET applications can be deployed as source code via the so-called "XCopy deployment": you copy the whole content of the web project folder to the production system and configure the target folder on the production system as an IIS web application (e.g., using the IIS Manager). The production web server automatically compiles the application during the first request and recompiles it automatically whenever any of the source files change.

However, you can precompile the application into .NET assemblies to improve protection of your intellectual property and increase execution speed for the first user. Precompilation can be performed through Visual Studio/Visual Web Developer (menu "Build/Publish Web Site") or the command line tool aspnet_compiler.exe.
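A typical invocation of the command line tool might look like this; the paths are placeholders, -v names the application's virtual path, -p its physical source folder, and the last argument is the target folder for the precompiled site:

```shell
aspnet_compiler -v /MyWebApp -p C:\Projects\MyWebApp C:\PrecompiledApp
```

The target folder can then be configured as an IIS web application just like a source deployment, but it contains only assemblies and stub pages instead of your source code.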

Hot Tip

Download the “Visual Studio 2008 Web Deployment Projects” from microsoft.com. This is an Add-In that provides better control over the precompilation process.

About The Author


Holger Schwichtenberg

Holger Schwichtenberg is one of Europe's best-known experts on .NET and Windows PowerShell. He holds both a Master's degree and a Ph.D. in business informatics. Microsoft has recognized him as a Most Valuable Professional (MVP) since 2003. He is a .NET Code Wise Member, an MSDN Online Expert, and an INETA speaker. He regularly gives high-level talks at conferences such as TechEd, Microsoft Summit, BASTA, and IT Forum. He is the CEO of the German-based company www.IT-Visions.de, which provides consulting and training for many companies throughout Europe.


Holger Schwichtenberg has published more than twenty books for Addison Wesley and Microsoft Press in Germany, as well as about 400 journal articles. His recent book “Essential PowerShell” has also been published by Addison Wesley in English.

Blog: www.dotnet-doktor.de (German)

Website: www.IT-Visions.de/en

Recommended Book


An in-depth guide to the core features of Web development with ASP.NET, this book goes beyond the fundamentals. It expertly illustrates the intricacies and uses of ASP.NET 3.5 in a single volume. Complete with extensive code samples and code snippets in Microsoft Visual C# 2008, this is the ideal reference for developers who want to learn what's new in ASP.NET 3.5, or for those building professional-level Web development skills.


Agile Adoption

Decreasing Time to Market

By Gemba Systems

22,562 Downloads · Refcard 45 of 204 (see them all)


The Essential Agile Adoption Cheat Sheet

Agile methods are everywhere, but which one is the right fit for you and your business? What are the business values you want out of adopting Agile? This DZone Refcard focuses on choosing the right Agile practices for your team or organization when getting to market faster is of prime importance. You'll learn about the practices that are the building blocks of these methods, such as iterations, continuous integration, refactoring, automated developer tests, and many more.
Agile Adoption: Decreasing Time to Market

By Gemba Systems

About Agile Adoption

There are a myriad of Agile practices out there. Which ones are right for you and your team? What are the business values you want out of adopting Agile and what is your organization’s context? This Refcard is focused on helping you evaluate and choose the practices for your team or organization when getting to market faster is of prime importance. Instead of focusing on entire methods such as Scrum and XP, we will talk about the practices that are the building blocks of these methods such as iterations and automated developer tests. We will answer two basic questions:

  • What Agile practices should you consider to improve Time to Market?
  • How should you go about choosing from those practices given your organization and context?

What Agile Practices improve time to market?


Figure 1-These are the Agile practices that improve time to market. The most effective practices are near the top of the diagram; Iteration, for example, is more effective than Onsite Customer for improving time to market. The arrows indicate dependencies: Continuous Integration depends on Automated Developer Tests to be effective.



*Practices in pink are ones that don’t directly address time to market but are needed to support practices that do (hence a dependency). They are not described in this Refcard but can be found in the external references.

Iteration

What: An iteration is a time-boxed event anywhere between 1 and 4 weeks long. The development staff works throughout this period – without interruption – to build an agreed-upon set of requirements that are accepted by the customer and meet an agreed-upon "done state".
Why: To get the most out of an iteration and reduce your time to market, an iteration needs to work from an iteration backlog and reach a solid done state at its completion. Such an iteration reduces time to market because every time-boxed iteration is a potential release. There is little "work in progress" between iterations, and defects are found early and often for cheaper and faster removal.
When: Any software team building software where they are not 100% sure of the outcome is a candidate for performing iterations. Without iterations, the majority of learning (from mistakes) happens only at the end, and course corrections are difficult if not impossible.

Continuous Integration


Figure 2-The cost of fixing a defect increases over time because of context switching, communication, and bugs being built on existing bugs.

Release Often

What: Release your software to your end customers as often as you can without inconveniencing them.
Why: Releasing often streamlines your development process and makes you deal with the pains of getting software good enough to go live. A team that releases often faces the pains and addresses the problems that make deployment difficult, so that releasing becomes just another development task.
When: You are on a project where releasing often will enable you to produce revenue earlier. Having new features available frequently will not inconvenience your customer base. The quality of your releases is superb and your customers eagerly await your next release (instead of religiously keeping away from your 1.0 releases).

Done State

What: The done state is a definition, agreed upon by the entire team, of what constitutes the completion of a requirement. The closer the done state is to deployable software, the better, because it forces the team to resolve all hidden issues.
Why: A done state that is close to deployment enables the team to be confident in its work. The psychological effect of this confidence is a development team that gives good estimates, delivers regularly, and is confident in releasing its software. An executive decision can be made to release what has been built at any time.
When: A team that should consider using a done state is one that has the necessary expertise and resources to build a requirement from end to end and perform all of the necessary build and deployment tasks.

Iteration Backlog

What: A backlog is a prioritized list of requirements. There are two common flavors of backlogs, one for the current iteration and one for the product. The product backlog contains all of the requirements prioritized by value to the customer. The iteration backlog is a list of requirements that a team has committed to building for an iteration.
Why: Properly prioritized backlogs that are used to set the goals for every iteration ensure that the team is always working on the most important requirements. When paired with iterations that produce working, tested software, backlogs give a development team the option to release at the end of any iteration, having always worked on the most important issues.
When: An expert on business value needs to be part of the team to prioritize the backlog. If your team has such a person, or someone who can coordinate with the business stakeholders to do so, then use a product backlog. If you are using iterations, then use an iteration backlog to set clear goals for the iterations and a release backlog to maintain long-term goals.

Automated Developer Tests

What: Automated developer tests are a set of tests that are written and maintained by developers to reduce the cost of finding and fixing defects – thereby improving code quality – and to enable changing the design as requirements are addressed incrementally.
Why: Automated developer tests reduce the time to market by actually reducing the development time. This is accomplished by reducing a developer's time in debugging loops, catching errors in the safety net of tests.
When: You are on a development team that has decided to adopt iterations and simple design and will need to evolve your design as new requirements are taken into consideration. Or you are on a distributed team where the lack of both face-to-face communication and constant feedback is causing an increase in bugs and a slowdown in development.

Automated Acceptance Tests

What: Automated acceptance tests are tests written at the beginning of the iteration that answer the question: "What will this requirement look like when it is done?" This means that you start with failing tests at the beginning of each iteration, and a requirement is only done when its test passes.
Why: This practice builds a regression suite of tests incrementally and catches errors, miscommunications, and ambiguities very early on. This, in turn, reduces the amount of work that is thrown away and enables faster development, as you receive early feedback when a requirement is no longer satisfied.
When: You are on a development project with an onsite customer who is willing and able to participate more fully as part of the development team. Your team is also willing to make difficult changes to any existing code, and you are willing to pay the price of a steep learning curve.

Onsite Customer

What: The onsite customer in an Agile development team is a representative of the users of the system who understands the business domain of the software. The customer owns the backlog, is responsible for writing and clarifying requirements, and checks that the software meets the requirements specified.
Why: The customer role helps improve time to market by giving the developers clear requirements, providing clarifications, and verifying that the software really does meet the needs of the user base. The customer provides early feedback to the development team, so they never spend more than an iteration down a blind alley. Finally, a customer who correctly prioritizes the backlog allows the team to deliver the most important items first when time is of the essence.
When: The practice works best when the development team can be co-located with one or more domain experts. The person fulfilling the customer role is crucial to the success of the team and therefore needs sufficient time and resources to do the job.

Simple Design



What: If a decision must be made between coding a design for today's requirements and a general design to accommodate tomorrow's requirements, the former is a simple design. Simple design meets the requirements for the current iteration and no more.
Why: Simple design improves time to market because you build less code to meet the requirements and you maintain less code afterwards. Simple designs are easier to build, understand, and maintain.
When: Simple design should only be used when your team is also writing automated developer tests and refactoring. A simple design is fine as long as you can change it to meet future requirements.




Refactoring

What: The practice of refactoring changes the structure (i.e., the design) of the code while maintaining its behavior. Collective code ownership is needed because a refactoring frequently affects other parts of the system. Automated developer tests are needed to verify that the behavior of the system has not changed after the design change introduced by the refactoring.
Why: Refactoring improves time to market by supporting practices like Simple Design which, in turn, allow you to write only the software for the features that are needed now.
When: You are on a development team that is practicing automated developer tests, and you are currently working on a requirement that is not well supported by the current design.

Cross-Functional Team

What: A team utilizing the Cross-Functional Team practice has the necessary expertise among its members to take a requirement from initial concept to a fully deployed and tested piece of software within one iteration. A requirement can be taken off the backlog, elaborated, developed, tested, and deployed.
Why: Cross-functional teams primarily affect time to market by enabling true iterative and incremental development. Resource bottlenecks are resolved and teams can build features end-to-end.
When: There is a hardening cycle at the end of each release, indicating unresolved integration issues. Building a slice of functionality end-to-end in your system finds errors early and requires the diverse expertise of many different people.

How to adopt agile practices successfully

To successfully adopt Agile practices let’s start by answering the question “which ones first?” Once we have a general idea of how to choose the first practices there are other considerations.

Become “Well-Oiled” First

One way to look at software development is to see it as problem solving for business. When considering a problem to solve there are two fundamental actions that must be taken:

  • Solving the right problem. This is IT/Business alignment.
  • Solving the problem right. This is technical expertise.

Intuitively, it would seem that we must focus on solving the right problem first because, no matter how well we execute our solution, if it is the wrong problem then our solution is worthless. This, unfortunately, is the wrong way to go. The research shown in Figure 3 indicates that focusing on alignment first is actually more costly and less effective than doing nothing. It also shows that being "well-oiled" (focusing on technical ability first) is much more effective and a good stepping-stone to reaching the state where both issues are addressed.

This is supported anecdotally by increasing reports of failed Agile projects that do not deliver on promised results. They adopt many of the soft practices such as Iteration, but steer away from the technically difficult practices such as Automated Developer Tests, Refactoring, and Done State. They never reach the “well-oiled” state.

So the lesson here: to adopt Agile practices that improve time to market (or any other business value, for that matter), your team will need to become "well-oiled" to see significant, sustained improvement. That means you should plan on adopting the difficult technical practices for sustainability.


Figure 3-The Alignment Trap (from Avoiding the Alignment Trap in Information Technology, Shpilberg, D. et al, MIT Sloan Management Review, Fall 2007.)

Minimize What You Build

Statistics show that most of what software development teams build is not used. In Figure 4 we see that only 7% of functionality is always used. And 45% is never used. This is a sad state of affairs, and an excellent opportunity. One of the easiest ways to speed up is to do less. If you have less to build, then not only do you spend less time writing and testing software, but you also reduce the complexity of the entire application. And by reducing the complexity of the application it takes less time to maintain because you have a simpler design, fewer dependencies, and fewer physical lines of code that your developers must understand and maintain.


Figure 4- Most functionality built is not used

Context Matters


Figure 5- Context matters. Choose Agile practices that fit your context.

The practices are all described within context. So, for example, the context for the Release Often practice indicates that your customers should be willing to install and run frequent releases and that the quality of your current builds is exceptional. If this is not the case – if your current releases go through a "stabilization phase" and your customers have learned never to take a 1.0 release – then do not adopt Release Often; you will end up hurting your relationship with your customers.

Learning is the Bottleneck

Here is a hypothetical situation that we have presented to many experienced software development teams:

Suppose I was your client and I asked you and your team to build a software system for me. Your team proceeds to build the software system. It takes you a full year – 12 months – to deliver working, tested software.

I then thank the team and take the software and throw it out. I then ask you and your team to rebuild the system. You have the same team. The same requirements. The same tools and software. Basically – nothing has changed – it is exactly the same environment.

How long will it take you and your team to rebuild the system again?

When we present this hypothetical situation to development practitioners – many of them with 20+ years of experience in building software – they typically respond with anywhere between 20% and 70% of the original time. That is, rebuilding a system that originally took one year takes only about 2.5 to 8.5 months. It is a huge difference!

So, what is the problem? What was different? The team has learned. They learned about each other as a team and have gelled over the year. They learned about the true requirements – not just those written down. They also learned to use the toolset, they experienced the idiosyncrasies that come up during all software development, and basically they worked through all the unknowns until they built and delivered a successful software solution. Learning is THE bottleneck of software engineering.

The learning that occurs makes up a significant percentage of the time spent on the work. That is the main reason Agile practices work so well – they are all about recognizing and responding to change. Agile practices, from continuous integration to iterations, all consist of cycles that help the team learn fast. By building such feedback cycles into every possible practice, Agile teams accelerate learning, addressing the bottleneck of software engineering. Call it "scientific method", "continuous improvement", or "inspect and adapt" – to truly benefit from these practices, you and your team(s) must learn well and learn often.

Know What You Don’t Know

Since learning is the bottleneck, it makes sense to talk a bit about how we actually learn. The Dreyfus Model of Skill Acquisition is a useful model of learning. It is not the only model of learning, but it is consistent, has been effective, and works well for our purposes. The model states that there are levels one goes through while learning a skill, and that your level can and will be different for different skills. Depending on the level you are at, you have different needs and abilities. An understanding of this model is not crucial to learning a skill; after all, we've been learning long before this model existed. However, being aware of it can help us and our team(s) learn effectively.

So let’s take a closer look at the different skill levels in the Dreyfus Model:


Figure 6-The Dreyfus Model for skill acquisition. One starts as a novice and through experience and learning advances towards expertise.

How can the Dreyfus Model help in an organization that is adopting Agile methods? First, we must realize that this model is per skill, so we are not competent in everything. Second, if Agile is new to us, which it probably is, then we are novices or advanced beginners; we need to search for rules and not break them until we have enough experience under our belts. Moreover, since everything really does depend on context, and we are not qualified to deal with context as novices and advanced beginners, we had better get access to some people who are experts, or at least proficient, to help guide us in choosing the right Agile practices for our particular context. Finally, we had better find it in ourselves to be humble and know what we don't know, to keep from derailing the possible benefits of this new method. And we need to be patient with ourselves and with our colleagues. Learning new skills will take time, and that is OK.

Choosing a Practice to Adopt

Choosing a practice comes down to finding the highest value practice that will fit into your context. Figure 1 will guide you in determining which practices are most effective in decreasing your time to market and will also give you an understanding of the dependencies. The other parts in this section discuss other ideas that can help you refine your choices. Armed with this information:


Figure 7- Steps for Choosing and Implementing Practices

Small steps and failing fast are the most effective methods for releasing quickly. Weed out defects early: the earlier you find them, the less they cost to fix (as shown in Figure 2), and you won't be building on a crumbling foundation. This is why Continuous Integration and Iteration lead the practices that most positively affect time to market. They are both, however, dependent on several practices to be effective, so consider starting with Automated Developer Tests and the Iteration trio – Iteration, Iteration Backlog, and Done State.

Next Steps

This Refcard is a quick introduction to Agile practices that can help you improve your time to market, and to how to choose the practices for your organizational context. It is only a starting point. If you choose to embark on an Agile adoption initiative, your next step is to educate yourself and get as much help as you can afford. Books and user groups are a beginning. If you can, find an expert to join your team(s). Remember, if you are new to Agile, then you are a novice or advanced beginner and are not yet capable of making an informed decision about tailoring practices to your context.

References

Astels, David. Test-Driven Development: A Practical Guide. Upper Saddle River, NJ: Prentice Hall, 2003.

Beck, Kent. Test-Driven Development: By Example. Boston, MA: Pearson Education, 2003.

Beck, K. and Andres, C. Extreme Programming Explained: Embrace Change (2nd Edition). Boston: Addison-Wesley, 2005.

Cockburn, A. Agile Software Development: The Cooperative Game (2nd Edition). Addison-Wesley Professional, 2006.

Cohn, M. Agile Estimating and Planning. Prentice Hall, 2005.

Duvall, Paul, Matyas, Steve, and Glover, Andrew. Continuous Integration: Improving Software Quality and Reducing Risk. Boston: Addison-Wesley, 2006.

Elssamadisy, A. Agile Adoption Patterns: A Roadmap to Organizational Success. Boston: Pearson Education, 2008.

Feathers, Michael. Working Effectively with Legacy Code. Upper Saddle River, NJ: Prentice Hall, 2005.

Jeffries, Ron. "Running Tested Features."

Jeffries, Ron. Extreme Programming Adventures in C#. Redmond, WA: Microsoft Press, 2004.

Kerievsky, Joshua. "Don't Just Break Software, Make Software."

ABOUT Gemba Systems

Gemba Systems

Gemba Systems is a group of seasoned practitioners who are experts in Lean and Agile development as well as in crafting effective learning experiences. Whether the method is Scrum, Extreme Programming, Lean Development, or another, Gemba Systems helps individuals and teams learn and adopt better product development practices. Gemba Systems has taught better development techniques, including lean thinking, Scrum, and Agile methods, to thousands of developers in dozens of companies around the globe. To learn more, visit http://us.gembasystems.com/

Recommended Book

Agile Adoption Patterns

Agile Adoption Patterns will help you whether you’re planning your first agile project, trying to improve your next project, or evangelizing agility throughout your organization. This actionable advice is designed to work with any agile method, from XP and Scrum to Crystal Clear and Lean. The practical insights will make you more effective in any agile project role: as leader, developer, architect, or customer.


JBoss RichFaces

By Nick Belaevski, Ilya Shaikovsky, Jay Balunas and Max Katz

22,773 Downloads · Refcard 44 of 204 (see them all)


The Essential JBoss RichFaces Cheat Sheet

JBoss RichFaces is a JSF component library that consists of two main parts: Ajax enabled JSF components and the CDK (Component Development Kit). It allows easy integration of Ajax capabilities into enterprise application development. This DZone Refcard is a great start for any developer looking to learn more about JBoss RichFaces. You will find plenty of examples and code samples that will help with your Rich Internet Applications. This Refcard also covers the following topics: What is RichFaces?, Basic Concepts, Controlling Traffic, Tags, Hot Tips and more.

JBoss RichFaces

By Nick Belaevski, Ilya Shaikovsky, Jay Balunas, and Max Katz

What is RichFaces?

RichFaces is a JSF component library that consists of two main parts: AJAX-enabled JSF components and the CDK (Component Development Kit). RichFaces UI components are divided into two tag libraries, a4j: and rich:. Both offer out-of-the-box AJAX-enabled JSF components. The CDK is a facility for creating, generating, and testing your own rich JSF components (not covered in this card).

Installing RichFaces

See the RichFaces project page for the latest version: http://www.jboss.org/jbossrichfaces/.

Add these jar files to your WEB-INF/lib directory: richfaces-api.jar, richfaces-impl.jar, richfaces-ui.jar, commons-beanutils.jar, commons-collections.jar, commons-digester.jar, commons-logging.jar

RichFaces Filter

Update the web.xml file with the RichFaces filter:

  <filter>
    <display-name>RichFaces Filter</display-name>
    <filter-name>richfaces</filter-name>
    <filter-class>org.ajax4jsf.Filter</filter-class>
  </filter>
  <filter-mapping>
    <filter-name>richfaces</filter-name>
    <servlet-name>Faces Servlet</servlet-name>
  </filter-mapping>

Hot Tip

The RichFaces Filter is not needed for applications that use Seam (http://seamframework.org)

Page setup

Configure RichFaces namespaces and taglibs in your XHTML and JSP pages.

<%@ taglib uri="http://richfaces.org/a4j" prefix="a4j"%>
<%@ taglib uri="http://richfaces.org/rich" prefix="rich"%>

Hot Tip

Use JBoss Tools for rapid project setup - http://www.jboss.org/tools

Basic Concepts

Sending an AJAX request


a4j:support

Sends an AJAX request based on a DHTML event supported by the parent component. In this example, the AJAX request is triggered after the user types a character in the text box:


<h:inputText value="#{echoBean.text}">
  <a4j:support event="onkeyup" action="#{echoBean.count}"
               reRender="echo, cnt"/>
</h:inputText>
<h:outputText id="echo" value="Echo: #{echoBean.text}"/>
<h:outputText id="cnt" value="Count: #{echoBean.textCount}"/>

a4j:support can be attached to any component that renders an HTML tag supporting DHTML events, as the h:inputText and h:selectOneRadio examples here show.


a4j:commandButton, a4j:commandLink

Similar to h:commandButton and h:commandLink, but with two major differences: they trigger an AJAX request and allow partial JSF component tree rendering.

The request goes through the standard JSF life cycle. During Render Response, only components whose client ids are listed in the reRender attribute (echo, cnt) are rendered back to the browser.

<h:selectOneRadio value="#{colorBean.color}">
  <f:selectItems value="#{colorBean.colorList}" />
  <a4j:support event="onclick" reRender="id" />
</h:selectOneRadio>
a4j:commandButton, a4j:commandLink

<h:inputText value="#{echoBean.text}"/>
<h:outputText id="echo" value="Echo: #{echoBean.text}"/>
<h:outputText id="cnt" value="Count: #{echoBean.textCount}"/>
<a4j:commandButton value="Submit" action="#{echoBean.count}"
   reRender="echo, cnt"/>


When the response is received, the browser DOM is updated with the new data, i.e. 'RichFaces is neat' and '17'.

a4j:commandLink works exactly the same but renders a link instead of a button.


a4j:poll

Enables independent periodic polling of the server via an AJAX request. The polling interval is defined by the interval attribute, and polling is enabled/disabled via the enabled attribute (true|false).


<a4j:poll id="poll" interval="500" enabled="#{pollBean.enabled}"
   reRender="now" />
<a4j:commandButton value="Start" reRender="poll"
   action="#{pollBean.start}" />
<a4j:commandButton value="Stop" reRender="poll"
   action="#{pollBean.stop}" />
<h:outputText id="now" value="#{pollBean.now}" />


import java.util.Date;

public class PollBean {
	private Boolean enabled = false; // setter and getter omitted
	public void start() { enabled = true; }
	public void stop() { enabled = false; }
	public Date getNow() { return new Date(); }
}


a4j:jsFunction

Allows sending an AJAX request directly from any JavaScript function (built-in or custom).


<td onmouseover="setdrink('Espresso')">...</td>
<h:outputText id="drink" value="I like #{bean.drink}" />
<a4j:jsFunction name="setdrink" reRender="drink">
  <a4j:actionparam name="param1" assignTo="#{bean.drink}"/>
</a4j:jsFunction>

When the mouse hovers over or leaves a drink, the setdrink() JavaScript function is called. The function is defined by an a4j:jsFunction tag, which sets up the AJAX call; it can invoke listeners and perform partial page rendering. The drink parameter is passed to the server via the a4j:actionparam tag.


a4j:push

a4j:push works similarly to a4j:poll; however, in order to check for the presence of a message in a queue, it makes only a minimal, ping-like HEAD request to the server without invoking the JSF life cycle. If a message exists, a standard JSF request is sent to the server.

Partial view (page) rendering

There are two ways to perform partial view rendering when AJAX requests return.

ReRender attribute

Most RichFaces components support the reRender attribute to define the set of client ids to reRender.

Attribute Can bind to
reRender Set, Collection, Array, comma-delimited String

ReRender can be set statically as in the examples above or with EL:

<a4j:commandLink reRender="#{bean.renderControls}"/>


It’s also possible to point to parent components to rerender all child components:

<a4j:commandLink value="Submit" reRender="panel" />
<h:panelGrid id="panel">
   <h:outputText value="..." />
</h:panelGrid>

In the example above, the child components of the panelGrid will be rerendered when the commandLink is submitted.


a4j:outputPanel

All child components of an a4j:outputPanel with ajaxRendered="true" will be rerendered automatically for any AJAX request.

<a4j:commandLink value="Submit" />
<a4j:outputPanel ajaxRendered="true">
   ...
</a4j:outputPanel>

In the example above the child components of the outputPanel will be rerendered when the commandLink is submitted.

Hot Tip

If ajaxRendered="false" (the default), the a4j:outputPanel behaves just like h:panelGroup.

To limit rendering to only the components listed in the reRender attribute, set limitToList="true". In this example, only the h:panelGrid will be rendered:

<a4j:commandLink reRender="panel" limitToList="true"/>
<h:panelGrid id="panel">
   ...
</h:panelGrid>
<a4j:outputPanel ajaxRendered="true">
   ...
</a4j:outputPanel>

Deciding what to process on the server

When an AJAX request is sent to the server, the full HTML form is always submitted. However, once on the server, we can decide which components to decode or process during the Apply Request Values, Process Validations, and Update Model phases. Selecting which components to process is important for validation. For example, when validating a component (field) via AJAX, we don't want to process other components in the form (so as not to display error messages for components where input hasn't been entered yet). Controlling what is processed helps us with that.

The simplest way to control what is processed on the server is to define an AJAX region using the a4j:region tag (by default the whole page is an AJAX region).

  <h:inputText value="...">
    <a4j:support event="onblur" />
  </h:inputText>
  <a4j:region>
    <h:inputText value="...">
      <a4j:support event="onblur" />
    </h:inputText>
  </a4j:region>

When the user leaves the second input component (onblur event), an AJAX request is sent in which only this input field is processed on the server. All other components outside the region are not processed (no conversion/validation, model update, etc.). It's also possible to nest regions:
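A minimal sketch of nested regions (input values elided; bean names are placeholders):

```xml
<a4j:region id="outer">
  <h:inputText value="...">
    <a4j:support event="onblur" />
  </h:inputText>
  <a4j:region id="inner">
    <h:inputText value="...">
      <a4j:support event="onblur" />
    </h:inputText>
  </a4j:region>
</a4j:region>
```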



When the request is invoked from the inner region, only components in the inner region are processed. When invoked from the outer region, all components (including those in the inner region) are processed.

When sending a request from a region, processing is limited to components inside this region. To limit rendering to a region, the renderRegionOnly attribute can be used:

<a4j:region renderRegionOnly="true">
   <h:inputText />
   <a4j:commandButton reRender="panel"/>
   <h:panelGrid id="panel">
      ...
   </h:panelGrid>
</a4j:region>
<a4j:outputPanel ajaxRendered="true">
   ...
</a4j:outputPanel>

When the AJAX request is sent from the region, rendering is limited to components inside that region because renderRegionOnly="true". Otherwise, components inside the a4j:outputPanel would be rendered as well.

To process a single input or action component, instead of wrapping inside a4j:region, it’s possible to use the ajaxSingle attribute:

<h:inputText>
  <a4j:support event="onblur" ajaxSingle="true"/>
</h:inputText>

When using ajaxSingle="true" and a need arises to process additional components on the page, the process attribute is used to include the ids of components to be processed.

<h:inputText>
  <a4j:support event="onblur" ajaxSingle="true" process="mobile"/>
</h:inputText>
<h:inputText id="mobile"/>

The process attribute can also point to an EL expression or a container component id, in which case all components inside the container will be processed.

When just validating form fields, it is usually not necessary to go through the Update Model and Invoke Application phases. Setting bypassUpdates="true" skips these phases, improving response time and allowing you to perform validation without changing the model's state.

  <a4j:support event="onblur" ajaxSingle="true" bypassUpdates="true"/>

JavaScript interactions

RichFaces components send AJAX requests and perform partial page rendering without any hand-written JavaScript. If you need to hook in custom JavaScript functions, the following attributes can be used to trigger them.

Attribute Description
onsubmit JavaScript code to be invoked before the AJAX request is sent
onclick JavaScript code to be invoked before the AJAX request is sent
onbeforedomupdate JavaScript code to be invoked after the response is received but before the browser DOM update
oncomplete JavaScript code to be invoked after the browser DOM update
data Allows getting additional data from the server during an AJAX call; the value is serialized in JSON format

Controlling Traffic

Flooding a server with small requests can cripple a web application, and any dependent services like databases.

RichFaces 3.3.0.GA and Higher

Queues can be defined using the <a4j:queue .../> component and are referred to as named or unnamed queues. Unnamed queues are also called default queues, because components within a specified scope use an unnamed queue by default.

Notable Attributes

Attribute Description
name Optional attribute that determines whether this is a named or unnamed queue
sizeExceededBehavior Behavior when the size limit is reached: dropNext, dropNew, fireNext, fireNew
ignoreDupResponses If true, responses from the server are ignored when queued events of the same type are waiting
requestDelay Time in ms that events should wait in the queue in case more events of the same type are fired
Event triggers onRequestQueue, onRequestDequeue, onSizeExceeded, onSubmit

Other notable attributes include: disabled, id, binding, status, size, timeout.

Named Queues

Named queues will only be used by components that reference them by name as below:

<a4j:queue name="fooQueue" ... />
<h:inputText ... >
  <a4j:support eventsQueue="fooQueue" .../>
</h:inputText>

Unnamed Queues

Unnamed queues are used to avoid having to reference a named queue explicitly on every component.

Queue Scope Description
Global All views of the application will have a view-scoped queue that does not need to be defined and that all components will use
View Components within the parent <f:view> will use this queue
Form Components within the parent <h:form> or <a4j:form> will use this queue

Global Queue

To enable the global queue for an application, you must add a context parameter to the web.xml file.
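A sketch of that context parameter; the parameter name org.richfaces.queue.global.enabled is believed to be the one used by RichFaces 3.3, so verify it against your version's documentation:

```xml
<context-param>
  <param-name>org.richfaces.queue.global.enabled</param-name>
  <param-value>true</param-value>
</context-param>
```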


It is possible to disable or adjust the global queue’s settings in a particular view by referencing it by its name.

<a4j:queue name="org.richfaces.global_queue" disabled="true" ... />

View Scoped Default Queues

Define the <a4j:queue> as a child of <f:view>:

<f:view>
  <a4j:queue ... />
</f:view>

Hot Tip

Performance Tips:

  • Control the number of requests sent to the server.
  • Limit the size of regions that are updated per request using <a4j:region/>
  • Cache or optimize database access for AJAX requests
  • Don’t forget to refresh the page when needed


Form Scoped Default Queue

This can be useful for separating behavior and grouping requests in templates. Define the <a4j:queue> inside the form:

<h:form>
  <a4j:queue ... />
</h:form>

a4j:* Tags

The a4j:* tags provide core AJAX components that allow developers to augment existing components and provide plumbing for custom AJAX behavior.


a4j:repeat

This component is just like ui:repeat from Facelets, but also allows AJAX updates for particular rows. In the example below, the component outputs a list of numbers together with controls to change them (the value is updated for the clicked row only):

<a4j:repeat value="#{items}" var="item">
   <h:outputText value="#{item.value} " id="value"/>
   <a4j:commandLink action="#{item.inc}" value=" +1 " ... />
</a4j:repeat>

#{items} could be any of the supported JSF data models. var identifies a request-scoped variable where the data for each iteration step is exposed. No markup is rendered by the component itself so a4j:repeat cannot serve as a target for reRender.

The component can be updated fully (by the usual means) or partially. To get full control over partial updates, use the ajaxKeys attribute. This attribute points to a set of model keys identifying the elements in the iteration: the first element has key Integer(0), the second Integer(1), and so on. Updates of nested components will be limited to these elements.


a4j:include

Defines page areas that can be updated by AJAX according to application navigation rules. It has a viewId attribute defining the identifier of the view to include:

<a4j:include viewId="/first.xhtml" />

One handy usage of a4j:include is for building multi-page wizards. Ajax4jsf command components put inside the included page (e.g. first.xhtml for our case) will navigate users to another wizard page via AJAX:

<a4j:commandButton action="next" value="To next page" />

(The "next" action should be defined in the faces-config.xml navigation rules for this to work.) Setting ajaxRendered to true will cause a4j:include content to be updated on every AJAX request, not only by navigation. Currently, a4j:include cannot be created dynamically using Java code.


a4j:keepAlive

Allows you to keep bean state (e.g. for request-scoped beans) between requests:

<a4j:keepAlive beanName="searchBean" />

Standard JSF state saving is used, so in order to be portable it is recommended that the bean class implement either java.io.Serializable or javax.faces.component.StateHolder.

a4j:keepAlive cannot be created programmatically using Java. Mark managed bean classes with the org.ajax4jsf.model.KeepAlive annotation in order to keep their state. JBoss Seam's page scope provides a more powerful analog to this behavior.

RichFaces provides several ways to load bundles, scripts, and styles into your application.

Tag Description
a4j:loadBundle Loads a resource bundle localized for the locale of the current view
a4j:loadScript Loads an external JavaScript file into the current view
a4j:loadStyle Loads an external .css file into the current view

a4j:status

Used to display the current status of AJAX requests, such as "loading..." text and images. The component uses "start" and "stop" facets to define behavior. It is also possible to invoke JavaScript or set styles based on status mode changes.

a4j:actionparam

Adds additional request parameters and behavior to command components (like a4j:commandLink or h:commandLink). This component can also add actionListeners that will be fired after the model has been updated.

rich:* Tags

The rich: tags are ready-made, self-contained components. They don't require any additional wiring or page control components to function.

Input Tags

Tag Description
rich:calendar Advanced date and time input with many options, such as inline/popup display, locales, and custom date and time patterns
rich:editor A complete WYSIWYG editor component that supports HTML and Seam Text
rich:inplaceInput Inline, inconspicuous input fields
rich:inputNumberSlider Slider over a min/max value range

Components include: comboBox, fileUpload, inplaceSelect, inputNumberSpinner

Output Tags

Tag Description
rich:modalPanel Blocks interaction with the rest of the page while active
rich:panelMenu Collapsible grouped panels with subgroup support
rich:progressBar AJAX polling of server state
rich:tabPanel Tabbed panel with client, server, or AJAX switching
rich:toolBar Toolbar supporting complex content and settings

Components include: paint2D, panel, panelBar, simpleTogglePanel, togglePanel, toolTip

Data Grids, Lists, and Tables

RichFaces has support for AJAX-based data scrolling, complex cell content, grid/list/table formats, filtering, sorting, and more.


Tag Description
rich:dataTable Supports complex content, AJAX updates, and sortable and filterable columns
rich:extendedDataTable Adds scrollable data, row selection options, adjustable column locations, and row/column grouping
rich:dataGrid Complex grid rendering of grouped data from a model

Complex Content Sample



Menus

Hierarchical menus available in RichFaces include:

Tag Description
rich:contextMenu Based on page location; can be attached to most components like images, labels, etc.
rich:dropDownMenu Classic application-style menu that supports icons and submenus

Components include: rich:menuItem, rich:menuGroup, rich:menuSeparator


Trees

RichFaces has tree displays that support many options, such as switching (AJAX, client, or server) and drag and drop, and that are dynamically generated from data models.

Tag Description
rich:tree Core parent component for a tree
rich:treeNode Creates sets of tree elements
rich:treeNodeAdaptor Defines data model sources for trees
rich:recursiveTreeNodeAdaptor Adds recursive node definition from models


List Manipulation

Provides visually appealing list manipulation options for the UI.

Tag Description
rich:listShuttle Advanced data list manipulation (figure x)
rich:orderingList Visually manipulate a list's order

Validation Tags

AJAX-enabled validation, including Hibernate validation.

Tag Description
rich:ajaxValidator Event-triggered validation without updating the model; this skips all JSF phases except validation
rich:beanValidator Validates individual input fields using Hibernate validators in your bean/model classes
rich:graphValidator Validates a whole subtree of components using Hibernate validators; can also validate the whole bean after model updates


Drag and Drop

Allows many component types to support drag and drop features.

Tag Description
rich:dragSupport Add as a child to components you want to drag
rich:dropSupport Define components that support dropped items
rich:dragIndicator Allows custom visualizations while dragging an item
rich:dndParam Passes parameters during a drag-and-drop action


Miscellaneous Components

Tag Description
rich:componentControl Attach triggers that call JS API functions on components after defined events
rich:effect Script.aculo.us visual effect support
rich:gmap Embed Google Maps with custom controls
rich:hotKey Define events triggered by a hot key (example: Alt-Z)
rich:insert Display and format files from the file system
rich:virtualEarth Embed Virtual Earth images and controls

Components include: rich:message, rich:messages, rich:jQuery


Skinning

Using out-of-the-box skins

RichFaces ships with a number of built-in skins.

Out-of-the-box Skins
default, classic, emeraldTown, blueSky, ruby, wine, deepMarine, sakura, plain, laguna*, glassx*, darkx*
* Require a separate jar file to function

Add the org.richfaces.SKIN context parameter to web.xml and set the skin name.
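For example (parameter name as given above; the value is any of the built-in skin names):

```xml
<context-param>
  <param-name>org.richfaces.SKIN</param-name>
  <param-value>blueSky</param-value>
</context-param>
```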


(Screenshots: sample blueSky skin; sample ruby skin)

Using skin property values on the page

You can use the skinBean implicit object to use any value from the skin file on your page.

<h:commandButton value="Next" ... />

The button color is set according to the current skin (ruby).

Loading different skins at runtime

You can define an application's skin with an EL expression like this:
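A sketch of that configuration, reusing the org.richfaces.SKIN parameter with an EL value (the bean and property names follow the description below):

```xml
<context-param>
  <param-name>org.richfaces.SKIN</param-name>
  <param-value>#{skinBean.currentSkin}</param-value>
</context-param>
```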


Define a session-scoped skinBean and manage its currentSkin property at runtime with your skin names as values. Every time a page is rendered, RichFaces resolves the value of #{skinBean.currentSkin} to get the current skin. Changing skins should not be done via AJAX but with a full page refresh, which ensures that all CSS links are correctly updated for the new skin.
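A minimal sketch of such a bean (the class name and the default skin "blueSky" are illustrative; register it as a session-scoped managed bean in your JSF configuration):

```java
// Illustrative session-scoped skin bean; RichFaces reads
// #{skinBean.currentSkin} on every page render.
public class SkinBean {
    private String currentSkin = "blueSky"; // assumed default for the sketch

    public String getCurrentSkin() { return currentSkin; }
    public void setCurrentSkin(String currentSkin) { this.currentSkin = currentSkin; }
}
```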

Hot Tip

Advanced Skinning Features

  • Create custom skins, or extend the default skins
  • Override or extend styles per page as needed
  • Automatically skin the standard JSF components
  • Plug'n'Skin feature used to generate whole new skins using Maven archetypes

Customizing predefined CSS classes

Under the hood, all RichFaces components are equipped with a set of predefined rich-* CSS classes that can be extended to customize a component's style (see the documentation for details). By modifying these CSS classes you can update all components that use them, such as:

.rich-input-text {
   color: red;
}

Project links for more information or questions:

Project page (http://www.jboss.org/jbossrichfaces)
Documentation (http://jboss.org/jbossrichfaces/docs)

About the Authors



Nick Belaevski

Nick Belaevski is the team leader of the RichFaces project, working for Exadel Inc. He has more than four years of experience in the development of middleware products, including JBoss Tools and RichFaces.

Projects: RichFaces


Ilya Shaikovski

Ilya Shaikovsky is the Exadel product manager, working on the RichFaces project since Exadel began Ajax4jsf. He is responsible for requirements gathering, specification development, JSF-related product analysis, and supporting RichFaces and JSF-related technologies for business applications. Prior to this he worked on the Exadel Studio Pro product.

Projects: RichFaces


Jay Balunas

Jay Balunas works as the RichFaces Project Lead and core developer at JBoss, a division of Red Hat. He has been architecting and developing enterprise applications for over ten years specializing in web tier frameworks, UI design, and integration. Jay blogs about Seam, RichFaces, and other technologies at http://in.relation.to/Bloggers/Jay

Projects: RichFaces, Seam Framework, and JBoss Tattletale


Max Katz

Max Katz is a senior system engineer at Exadel. He is the author of "Practical RichFaces" (Apress). He has been involved with RichFaces since its inception. He has written numerous articles, provided training, and presented at many conferences and webinars about RichFaces. Max blogs about RichFaces and RIA technologies at http://mkblog.exadel.com.

Projects: RichFaces

Recommended Book


Practical RichFaces

JBoss RichFaces is a rich JSF component library that helps developers quickly build next-generation web applications. Practical RichFaces describes how to best take advantage of RichFaces, the integration of the Ajax4jsf and RichFaces libraries, to create flexible and powerful programs. Assuming some JSF background, it shows how you can radically reduce programming time and effort to create rich AJAX-based applications.


Scalability & High Availability

By Eugene Ciurana

34,167 Downloads · Refcard 43 of 204 (see them all)


The Essential Scalability & High Availability Cheat Sheet

Scalability and Availability are mentioned so often that often it is difficult to know what they actually mean in each case. They are often interchanged and create confusion that results in poorly managed expectations and unrealistic metrics. This DZone Refcard provides you with the tools to define these terms so that your team can implement mission-critical systems with well-understood performance goals. This Refcard also covers: An Overview of Scalability and High Availability, Implementing Scalable Systems, Caching Strategies, Clustering, Redundancy and Fault Tolerance, Hot Tips, and More.

Scalability & High Availability

By Eugene Ciurana


Scalability, High Availability, and Performance

The terms scalability, high availability, performance, and mission-critical can mean different things to different organizations, or to different departments within an organization. They are often interchanged, creating confusion that results in poorly managed expectations, implementation delays, or unrealistic metrics. This Refcard provides you with the tools to define these terms so that your team can implement mission-critical systems with well-understood performance goals.


Scalability

Scalability is the property of a system or application to handle bigger amounts of work, or to be easily expanded, in response to increased demand for network, processing, database access, or file system resources.

Horizontal scalability
A system scales horizontally, or out, when it's expanded by adding new nodes with identical functionality to existing ones, redistributing the load among all of them. SOA systems and web servers scale out by adding more servers to a load-balanced network so that incoming requests may be distributed among all of them. Cluster is a common term for describing a scaled out processing system.


Figure 1: Clustering

Vertical scalability
A system scales vertically, or up, when it's expanded by adding processing, main memory, storage, or network interfaces to a node to satisfy more requests per system. Hosting services companies scale up by increasing the number of processors or the amount of main memory to host more virtual servers in the same hardware.


Figure 2: Virtualization

High Availability

Availability describes how well a system provides useful resources over a set period of time. High availability guarantees an absolute degree of functional continuity within a time window expressed as the relationship between uptime and downtime.

A = 100 – (100*D/U), D ::= unplanned downtime, U ::= uptime; D, U expressed in minutes
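The formula is easy to sanity-check in code; this sketch (class and method names are illustrative) plugs in the downtime figures from Table 1 below:

```java
// Availability A = 100 - 100 * D / U, with unplanned downtime D and
// uptime U both expressed in minutes.
public class Availability {
    public static double availability(double downtimeMinutes, double uptimeMinutes) {
        return 100.0 - (100.0 * downtimeMinutes / uptimeMinutes);
    }

    public static void main(String[] args) {
        double yearMinutes = 525_600.0; // minutes in a 365-day year
        // 525.60 minutes of yearly downtime corresponds to "three nines"
        System.out.println(availability(525.60, yearMinutes));
    }
}
```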

Uptime and availability don't mean the same thing. A system may be up for a complete measuring period, but may be unavailable due to network outages or downtime in related support systems. Downtime and unavailability are synonymous.


Measuring Availability
Vendors define availability as a given number of "nines," as in Table 1, which also describes the estimated downtime in minutes or seconds in relation to the number of minutes in a 365-day year (525,600), making U a constant for their marketing purposes.

Availability % Downtime in Minutes Downtime per Year Vendor Jargon
90 52,560.00 36.5 days one nine
99 5,256.00 4 days two nines
99.9 525.60 8.8 hours three nines
99.99 52.56 53 minutes four nines
99.999 5.26 5.3 minutes five nines
99.9999 0.53 32 seconds six nines

Table 1:Availability as a Percentage of Total Yearly Uptime

High availability depends on the expected uptime defined in the system requirements; don't be misled by vendor figures. What counts as a highly available system, and its measurable uptime, are a direct function of a Service Level Agreement. Availability goes up when planned downtime, such as a monthly 8-hour maintenance window, is factored out. The cost of each additional nine of availability can grow exponentially. Availability is a function of scaling the systems up or out and implementing system, network, and storage redundancy.

Service Level Agreement (SLA)

SLAs are the negotiated terms that outline the obligations of the two parties involved in delivering and using a system, like:

  • System type (virtual or dedicated servers, shared hosting)
  • Levels of availability
    • Minimum
    • Target
  • Uptime
    • Network
    • Power
    • Maintenance windows
  • Serviceability
  • Performance and Metrics
  • Billing

SLAs can bind obligations between two internal organizations (e.g. the IT and e-commerce departments), or between the organization and an outsourced services provider. The SLA establishes the metrics for evaluating the system performance, and provides the definitions for availability and the scalability targets. It makes no sense to talk about any of these topics unless an SLA is being drawn up or one already exists.


Implementing Scalable Systems

SLAs determine whether systems must scale up or out. They also drive the growth timeline. A stock trading system must scale in real time within minimum and maximum availability levels. An e-commerce system, in contrast, may scale in during the "slow" months of the year, and scale out during the retail holiday season to satisfy much larger demand.


Load Balancing

Load balancing is a technique for minimizing response time and maximizing throughput by spreading requests among two or more resources. Load balancers may be implemented in dedicated hardware devices, or in software. Figure 3 shows how load-balanced systems appear to the resource consumers as a single resource exposed through a well-known address. The load balancer is responsible for routing requests to available systems based on a scheduling rule.

Load Balancer

Figure 3: Load Balancer

Scheduling rules are algorithms for determining which server must service a request. Web applications and services are commonly balanced using round robin scheduling. Caching pools are balanced by applying frequency rules and expiration algorithms. Applications where stateless requests arrive with uniform probability for any number of servers may use a pseudo-random scheduler. Applications like music stores, where some content is statistically more popular, may use asymmetric load balancers to shift the larger number of popular requests to higher performance systems, serving the rest of the requests from less powerful systems or clusters.
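As a sketch of two of the rules above, the following Python (illustrative function names, not a real load balancer API) implements round robin and an asymmetric, weight-based variant:

```python
import itertools

def round_robin(servers):
    # Each server receives requests in turn, in equal proportion.
    return itertools.cycle(servers)

def weighted_round_robin(weighted_servers):
    # Asymmetric distribution: a server with weight 2 is scheduled
    # twice as often as a server with weight 1.
    expanded = [name for name, weight in weighted_servers
                for _ in range(weight)]
    return itertools.cycle(expanded)

rr = round_robin(["app1", "app2"])
print([next(rr) for _ in range(4)])      # ['app1', 'app2', 'app1', 'app2']

wrr = weighted_round_robin([("big", 2), ("small", 1)])
print([next(wrr) for _ in range(3)])     # ['big', 'big', 'small']
```

Hardware and software balancers implement the same idea, but select the target per incoming connection rather than per call.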

Persistent Load Balancers
Stateful applications require persistent or sticky load balancing, where a consumer is guaranteed to maintain a session with a specific server from the pool. Figure 4 shows a sticky balancer that maintains sessions from multiple clients. Figure 5 shows how the cluster maintains sessions by sharing data using a database.
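One simple way to implement stickiness, sketched below in Python (a hypothetical helper, not a production balancer), is to hash the session identifier so that a given client always lands on the same pool member while the pool is unchanged:

```python
import hashlib

def sticky_route(session_id, servers):
    # Deterministically hash the session id so the same client is
    # always routed to the same server while pool membership holds.
    digest = hashlib.md5(session_id.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["app1", "app2", "app3"]
# The same session always maps to the same server:
assert sticky_route("session-42", servers) == sticky_route("session-42", servers)
```

Note that adding or removing a server changes the modulus and remaps most sessions; real sticky balancers track sessions in a table, or use consistent hashing, to limit that churn.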

Sticky Load Balancer

Figure 4: Sticky Load Balancer

Common Features of a Load Balancer

  • Asymmetric load distribution - assigns some servers to handle a bigger load than others
  • Content filtering - inbound or outbound
  • Distributed Denial of Service (DDoS) attack protection
  • Firewall
  • Payload switching - send requests to different servers based on URI, port, and/or protocol
  • Priority activation - adds standby servers to the pool

  • Rate shaping - ability to give different priority to different traffic
  • Scripting - reduces human interaction by implementing programming rules or actions
  • SSL offloading - hardware assisted encryption frees web server resources
  • TCP buffering and offloading - throttle requests to servers in the pool

Figure 5: Database Sessions


Caching Strategies

Stateful load balancing techniques require data sharing among the service providers. Caching is a technique for sharing data that are expensive to compute or fetch among multiple consumers or servers. Data are stored and retrieved in a subsystem that provides quick access to a copy of the frequently accessed data.

Caches are implemented as an indexed table where a unique key references some datum. Consumers access data by checking the cache first: if the datum is present (a cache hit), it is returned directly. If it's not there (a cache miss), the costlier retrieval operation takes place and the consumer or a subsystem inserts the datum into the cache.
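The check-then-retrieve flow described above is often called the cache-aside pattern. A minimal Python sketch (the `slow_db` backing store is hypothetical, standing in for a database or web service):

```python
cache = {}

def fetch(key, expensive_lookup):
    # Cache-aside read: a hit skips the costly retrieval; a miss
    # fetches from the backing store and populates the cache.
    if key in cache:
        return cache[key]              # cache hit
    value = expensive_lookup(key)      # cache miss: costly retrieval
    cache[key] = value
    return value

calls = []
def slow_db(key):
    # Hypothetical backing store standing in for a database query.
    calls.append(key)
    return key.upper()

assert fetch("a", slow_db) == "A"      # miss: goes to the backing store
assert fetch("a", slow_db) == "A"      # hit: served from the cache
assert calls == ["a"]                  # the expensive lookup ran once
```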

Write Policy

The cache may become stale if the backing store changes without updating the cache. A write policy for the cache defines how cached data are refreshed. Some common write policies include:

  • Write-through: every write to the cache triggers a synchronous write to the backing store
  • Write-behind: writes land in the cache and entries are marked dirty; the backing store is updated asynchronously, for example when a dirty entry is evicted or flushed
  • No-write allocation: only read requests are cached, under the assumption that the data won't change over time but are expensive to retrieve
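The first two policies can be sketched in a few lines of Python (illustrative classes; real caches add eviction, expiry, and concurrency control):

```python
class WriteThroughCache:
    """Write-through: every write updates the cache and, synchronously,
    the backing store, so the cache can never be stale."""
    def __init__(self, backing_store):
        self.cache, self.store = {}, backing_store

    def put(self, key, value):
        self.cache[key] = value
        self.store[key] = value        # synchronous write to the store

    def get(self, key):
        return self.cache.get(key, self.store.get(key))


class WriteBehindCache:
    """Write-behind: writes land in the cache and are marked dirty;
    the backing store is only updated when flush() runs."""
    def __init__(self, backing_store):
        self.cache, self.dirty, self.store = {}, set(), backing_store

    def put(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)            # store update is deferred

    def flush(self):
        for key in self.dirty:
            self.store[key] = self.cache[key]
        self.dirty.clear()

store = {}
WriteThroughCache(store).put("sku-1", 9.99)
assert store["sku-1"] == 9.99          # store updated immediately
```

Write-through favors consistency at the cost of write latency; write-behind favors throughput at the risk of losing unflushed writes on failure.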

Application Caching

  • Implicit caching happens when there is little or no programmer participation in implementing the caching. The program executes queries and updates using its native API and the caching layer automatically caches the requests independently of the application. Example: Terracotta (http://www.terracotta.org).
  • Explicit caching happens when the programmer participates in implementing the caching API and may also implement the caching policies. The program must import the caching API into its flow in order to use it. Examples: memcached (http://www.danga.com/memcached) and Oracle Coherence (http://coherence.oracle.com).

In general, implicit caching systems are specific to a platform or language. Terracotta, for example, only works with Java and JVM-hosted languages like Groovy. Explicit caching systems may be used with many programming languages and across multiple platforms at the same time. memcached works with every major programming language, and Coherence works with Java, .Net, and native C++ applications.

Web Caching

Web caching is used for storing documents or portions of documents ("particles") to reduce server load, bandwidth usage, and lag for web applications. Web caching can exist in the browser (user cache) or on the server, the topic of this section. Web caches are invisible to the client and may be classified in any of these categories:

  • Web accelerators: operate on behalf of the origin server and are used for expediting access to heavy resources, like media files. Content distribution networks (CDNs) are an example of web acceleration caches; Akamai, Amazon S3, and Nirvanix are examples of this technology.
  • Proxy caches: serve requests for a group of clients that may all have access to the same resources. They can be used for content filtering and for reducing bandwidth usage. Squid, Apache, and ISA Server are examples of this technology.

Distributed Caching

Caching techniques can be implemented across multiple systems that serve requests for multiple consumers and from multiple resources. These are known as distributed caches, like the setup in Figure 7. Akamai is an example of a distributed web cache. memcached is an example of a distributed application cache.

Distributed Cache

Figure 7: Distributed Cache


Clustering

A cluster is a group of computer systems that work together to form what appears to the user as a single system. Clusters are deployed to improve services availability or to increase computational or data manipulation performance. In terms of equivalent computing power, a cluster is more cost-effective than a monolithic system with the same performance characteristics.

The systems in a cluster are interconnected over high-speed local area networks like gigabit Ethernet, fiber distributed data interface (FDDI), Infiniband, Myrinet, or other technologies.

Load Balancing Cluster

Figure 8: Load Balancing Cluster

Load-Balancing Cluster (Active/Active): Distribute the load among multiple back-end, redundant nodes. All nodes in the cluster offer full-service capabilities to the consumers and are active at the same time.

High Availability Cluster

Figure 9: High Availability Cluster

High Availability Cluster(Active/Passive): Improve services availability by providing uninterrupted service through redundant nodes that eliminate single points of failure. High availability clusters require two nodes at a minimum, a "heartbeat" to detect that all nodes are ready, and a routing mechanism that will automatically switch traffic if the main node fails.
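The heartbeat-plus-routing mechanism can be sketched as follows (illustrative Python with hypothetical node names; real A/P clusters use heartbeat daemons and virtual IP takeover rather than an in-process monitor):

```python
class HeartbeatMonitor:
    """A/P failover sketch: if the active node misses `max_missed`
    consecutive heartbeats, traffic is switched to the passive node."""
    def __init__(self, active, passive, max_missed=3):
        self.active, self.passive = active, passive
        self.max_missed = max_missed
        self.missed = 0

    def heartbeat(self, received):
        # Called once per monitoring period; `received` is True when
        # the active node answered its heartbeat probe.
        self.missed = 0 if received else self.missed + 1
        if self.missed >= self.max_missed:
            # Failover: the passive node takes over; the old active
            # node becomes the standby once it is repaired.
            self.active, self.passive = self.passive, self.active
            self.missed = 0
        return self.active

mon = HeartbeatMonitor("node-a", "node-b", max_missed=3)
assert mon.heartbeat(True) == "node-a"
mon.heartbeat(False)
mon.heartbeat(False)
assert mon.heartbeat(False) == "node-b"   # third consecutive miss: failover
```

Requiring several consecutive misses before failing over keeps a single dropped probe from bouncing traffic between nodes.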


Figure 10: Grid

Grid: Processes workloads defined as independent jobs that don't require data sharing among processes. Storage or network may be shared across all nodes of the grid, but intermediate results have no bearing on other jobs' progress or on other nodes in the grid. A Cloudera MapReduce cluster (http://www.cloudera.com) is an example.

Figure 11

Figure 11: Computational Clusters

Computational Clusters: Execute processes that require raw computational power rather than transactional operations like web or database clusters do. The nodes are tightly coupled, homogeneous, and in close physical proximity. They often replace supercomputers.


Redundancy and Fault Tolerance

Redundant system design depends on the expectation that any system component failure is independent of failures in the other components.

Fault tolerant systems continue to operate in the event of component or subsystem failure; throughput may decrease but overall system availability remains constant. Faults in hardware or software are handled through component redundancy. Fault tolerance requirements are derived from SLAs. The implementation depends on the hardware and software components, and on the rules by which they interact.

Fault Tolerance SLA Requirements

  • No single point of failure – redundant components ensure continuous operation and allow repairs without disruption of service
  • Fault isolation – problem detection must pinpoint the specific faulty component
  • Fault propagation containment – faults in one component must not cascade to others
  • Reversion mode – set the system back to a known state

Redundant clustered systems can provide higher availability, better throughput, and fault tolerance. The A/A cluster in Figure 12 provides uninterrupted service for a scalable, stateless application.

Figure 12

Figure 12: A/A Fault Tolerance and Recovery

Some stateful applications may only scale up; the A/P cluster in Figure 13 provides uninterrupted service and disaster recovery for such an application. A/A configurations provide failure transparency. A/P configurations may provide failure transparency at a much higher cost because automatic failure detection and reconfiguration are implemented through a feedback control system, which is more expensive and trickier to implement.


Figure 13: A/P Fault Tolerance and Recovery

Enterprise systems most commonly implement A/P fault tolerance and recovery with fault transparency, by diverting services to the passive system and bringing it on-line as soon as possible. Robotics and life-critical systems may instead implement probabilistic, linear-model, fault-hiding, and optimization control systems.

Cloud Computing

Cloud computing describes applications running on distributed computing resources owned and operated by a third party.

End-user apps are the most common examples. They utilize the Software as a Service (SaaS) and Platform as a Service (PaaS) computing models.

Figure 14

Figure 14: Cloud Computing Configuration

Cloud Services Types

  • Web services – Salesforce.com, USPS, Google Maps
  • Service platforms – Google App Engine, Amazon Web Services (EC2, S3, Cloud Front), Nirvanix, Akamai, MuleSource

Fault Detection Methods

Fault detection methods must provide enough information to isolate the fault and execute automatic or assisted failover action. Some of the most common fault detection methods include:

  • Built-in Diagnostics
  • Protocol Sniffers
  • Sanity Checks
  • Watchdog Checks

Criticality is defined as the number of consecutive faults reported by two or more detection mechanisms over a fixed time period. A fault detection mechanism is useless if it reports every single glitch (noise) or if it fails to report a real fault over a number of monitoring periods.
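A detector that reports a fault only after several consecutive faulty readings, ignoring isolated glitches, can be sketched like this (illustrative Python; the class name and threshold are hypothetical):

```python
class FaultDetector:
    """Reports a fault only after `threshold` consecutive faulty
    readings, so a single glitch (noise) does not trigger failover."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.consecutive = 0

    def observe(self, faulty):
        # One reading per monitoring period; a healthy reading resets
        # the consecutive-fault count.
        self.consecutive = self.consecutive + 1 if faulty else 0
        return self.consecutive >= self.threshold

detector = FaultDetector(threshold=3)
readings = [True, False, True, True, True]  # one glitch, then a real fault
print([detector.observe(r) for r in readings])
# [False, False, False, False, True]
```

Tuning the threshold trades sensitivity (missing real faults) against noise (reporting every glitch), exactly the balance the criticality definition above describes.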


System Performance

Performance refers to the system throughput under a particular workload for a defined period of time. Performance testing validates implementation decisions about the system's throughput, scalability, reliability, and resource usage. Performance engineers work with the development and deployment teams to ensure that the system's non-functional requirements, like SLAs, are implemented as part of the system development lifecycle. System performance encompasses hardware, software, and networking optimizations.

Hot Tip

Performance testing efforts must begin at the same time as the development project and continue through deployment.

The performance engineer's objective is to detect bottlenecks early and to collaborate with the development and deployment teams on eliminating them.

System Performance Tests

Performance specifications are documented along with the SLA and with the system design. Performance troubleshooting includes these types of testing:

  • Endurance testing - identifies resource leaks under the continuous, expected load.
  • Load testing - determines the system behavior under a specific load.
  • Spike testing - shows how the system operates in response to dramatic changes in load.
  • Stress testing - identifies the breaking point for the application under dramatic load changes for extended periods of time.
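A load test in its simplest form fires a fixed number of requests and records throughput and latency. The sketch below (an illustrative harness, not a substitute for a real testing tool) shows the shape of such a measurement:

```python
import time

def run_load_test(request_fn, requests):
    # Fire `requests` calls sequentially and record per-request latency.
    latencies = []
    for _ in range(requests):
        start = time.perf_counter()
        request_fn()
        latencies.append(time.perf_counter() - start)
    return {
        "throughput": requests / sum(latencies),  # requests per second
        "max_latency": max(latencies),            # worst case, seconds
    }

stats = run_load_test(lambda: sum(range(10000)), requests=100)
print(f"{stats['throughput']:.0f} req/s, "
      f"worst {stats['max_latency'] * 1000:.2f} ms")
```

Endurance, spike, and stress variants change the duration and arrival pattern of the load rather than the measurement itself.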

Software Testing Tools

There are many software performance testing tools on the market. Some of the best are released as open-source software. A comprehensive list of these tools is available from:


These include Java, native, PHP, .Net, and other languages and platforms.

Staying Current

Do you want to know about specific projects and cases where scalability, high availability, and performance are the hot topic? Join the scalability newsletter:


About The Author

Photo of author Eugene Ciurana

Eugene Ciurana

Eugene Ciurana is an open-source evangelist who specializes in the design and implementation of mission-critical, high-availability, large-scale systems. As Director of Systems Infrastructure, he and his team designed and built a 100% SOA and cloud system that enables millions of Internet-ready educational and handheld products and services. As chief liaison between Walmart.com Global and the ISD Technology Council, he led the official adoption of Linux and other open-source technologies at Walmart Stores Information Systems Division. He has also designed high-performance systems for major financial institutions and many Fortune 100 companies in the United States and Europe.


  • Developing with the Google App Engine
  • Best Of Breed: Building High Quality Systems, Within Budget, On Time, and Without Nonsense
  • The Tesla Testament: A Thriller

Recommended Book

Google App Engine

Developing with Google App Engine introduces Google App Engine, a platform that provides developers and users with the infrastructure that Google itself uses for developing and deploying massively scalable applications. Using Python as the primary programming tool, Developing with Google App Engine makes it easy to implement scalability and high performance features like distributed databases, clustering, stateless applications, and sophisticated data caching.
