Workflow data are all business data accessed and modified during the execution of processes. They are manipulated indirectly by applications and directly by the process engine. Data may include control data, e.g. simple flags accessed in transition conditions, or complex business data such as an invoice or a customer.
There are two general types of data: predefined and user-defined. Predefined data give read-only access to intrinsic properties of the current process instance and are part of every model, e.g.:
User-defined data are data defined specifically for the process.
Every data item has a specific data type with its own configuration options, e.g.:
The Stardust Process Engine ships with a couple of predefined data types. This type system may be programmatically extended.
The icon used for workflow data in Stardust is a set of two small barrels. The appearance of the icon on the canvas area depends on its data type.
The chapter Data Integration describes the usage of concrete data types in detail.
Data instantiated at runtime are called data values. These data values are instantiated per process instance. They have a persistent representation in the audit trail. As mentioned before, data of sub-processes are associated with the scope process instance.
Access points are named parameters to set or get values associated with a model element at runtime. At modeling time they have at least the following attributes:
Access points are a unique concept for making data (in a general sense) reachable in the Stardust runtime. Access points are defined, e.g., for applications, activities, and event handlers. Examples include:
Depending on the type of the access point, a so-called access path can be applied to it. Essentially, an access path is an operation evaluated on the access point's value at runtime.
The syntax and semantics of access paths are defined by the data type of the corresponding access point. An access path may be empty.
A pair (access point, access path) belonging together is called an accessor. As with access points, we distinguish IN accessors and OUT accessors. When an accessor is written as a pair (a, b), the first element refers to the access point and the second to the access path.
At runtime you can apply an accessor, depending on the type and the model element. In many cases, e.g. for data mappings, this is done by the engine itself.
When applying an IN accessor at runtime you have to pass an object to the operation; when applying an OUT accessor you retrieve an object. Such intermediate objects being passed around are called bridge objects.
The most common access points are those with a Java-style data type (e.g. of Serializable or Entity Bean type). In this case access paths have the form of chained getters and setters, e.g.:
Such a chain of getters/setters used for an access path is also called a method path (workflow data as access point). Consider an Entity Bean Customer used as workflow data:
public interface Customer extends EJBObject {
    public void setFirstName(String name);
    public String getFirstName();
}
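To illustrate how such accessors behave at runtime, here is a minimal sketch in plain Java, with a POJO standing in for the Entity Bean (the class body and values are illustrative, not from the original model):

```java
// Plain-object sketch of the Customer data and its accessors
// (simplified: a POJO stands in for the Entity Bean).
public class MethodPathDemo {
    public static class Customer {
        private String firstName;
        public void setFirstName(String name) { this.firstName = name; }
        public String getFirstName() { return firstName; }
    }

    public static void main(String[] args) {
        Customer customer = new Customer();
        // IN accessor (customer, "setFirstName(..)"): the bridge object is passed in
        customer.setFirstName("Ada");
        // OUT accessor (customer, "getFirstName()"): the bridge object is retrieved
        String bridge = customer.getFirstName();
        System.out.println(bridge); // Ada
        // a longer method path would chain such calls, e.g. "getAddress().getCity()"
    }
}
```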
The Entity Bean itself forms the access point, and you can apply method paths to it as discussed in the preceding example (application parameter as access point). Consider a plain Java application:
public class Converter {
    public String concat(String a, String b) { return a + b; }
}
The completion method concat concatenates the two strings.
This application will offer the method parameters as IN access points sParam1, sParam2. Only empty access paths are allowed. Executing the accessor (sParam1, null) at runtime will just set the first parameter to the value of the bridge object.
The return value of the method is offered as an OUT access point returnValue. Executing the accessor (returnValue, toUpperCase()) will return the upper-case form of the concatenated string.
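Written out by hand, the engine's execution of these accessors amounts to the following (the helper method execute is ours, for illustration only):

```java
// Hand-written equivalent of what the engine does when executing the
// accessors of the Converter application from the example above.
public class AccessorDemo {
    public static class Converter {
        public String concat(String a, String b) { return a + b; }
    }

    public static String execute(String bridge1, String bridge2) {
        Converter app = new Converter();
        // IN accessors (sParam1, null) and (sParam2, null): the bridge
        // objects directly become the method parameters
        String returnValue = app.concat(bridge1, bridge2);
        // OUT accessor (returnValue, toUpperCase()): the access path is
        // applied to the return value
        return returnValue.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(execute("foo", "bar")); // FOOBAR
    }
}
```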
In the Stardust Process Workbench an accessor is always shown as in the example screenshot: the access point (Data) in a combo box and the access path (Data path) in a custom editor provided by the underlying data type. For a Java-style access point, as in this example, you can type the method path directly or use the associated edit button to open a dialog and choose the method path from a tree view.
This section discusses the various means to manipulate and read workflow data mainly based on the accessor concept discussed above.
Stardust controls the execution of business logic on business data according to the workflow model. You may regard
Hence, it needs to be specified in the workflow model
Figure: Collaborators for Data Manipulation during Activity Execution
Data mappings glue workflow data and activities together. They define a data flow from the data to the activity (IN-data mappings) or from the activity to the data (OUT-data mappings). This is done using accessors on the data side and the activity side as well.
OUT data mappings which "flow" from the activity to the data consist of the following:
At runtime, the bridge object from the activity accessor is passed to the data accessor. OUT data mappings are performed immediately before the activity is completed; in particular, for an application activity the associated application has already finished its work at this point.
Similarly, IN data mappings which "flow" from the data to the activity consist of the following:
At runtime, the bridge object from the data accessor is passed to the activity accessor. IN data mappings are performed after an activity is activated; in particular, for a non-interactive application activity the associated application has not yet started at this point.
Figure: Data Mappings in the Activity Instance Lifecycle
To allow for successful data transfer the concrete type of the bridge object and the expected type of the IN accessor must match. This can often be checked at modeling time because a data type can give a hint to the modeling environment regarding the expected type for the IN accessor parameter or the expected return value type of an OUT accessor.
Java style data types can report the expected type of the accessor very precisely by means of the method signatures of the last part of the access paths:
If the two types do not match, combining them in a data mapping results in an inconsistency warning in the Stardust Process Workbench.
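The check can be sketched as a simple assignability test (a hypothetical simplification of what the Workbench does; the concrete types are illustrative):

```java
// Hypothetical simplification of the modeling-time check: the bridge object
// type reported by the OUT side must be assignable to the type expected by
// the IN accessor.
public class TypeCheckDemo {
    public static boolean compatible(Class<?> bridgeType, Class<?> expectedType) {
        return expectedType.isAssignableFrom(bridgeType);
    }

    public static void main(String[] args) {
        // e.g. a getter returning String feeding a setter expecting Integer:
        System.out.println(compatible(String.class, Integer.class)); // false -> warning
        System.out.println(compatible(Integer.class, Number.class)); // true
    }
}
```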
For interactive activities it is possible to have data mappings with an undefined activity accessor. In this case the bridge object is set or retrieved directly from the interactive execution context. See below for details.
As explained in the preceding section, data mappings make use of access points defined for the activity. This raises the question of where these access points come from. Roughly speaking, there may be different sources for access points, which are grouped by application context. The association between access points and an application context is implied by the data mapping itself. The following application contexts are available for a data mapping:
The activity itself offers a pseudo application context called the engine context. It contains a Java style access point activityInstance of type org.eclipse.stardust.engine.api.runtime.ActivityInstance which provides access to the attributes of the corresponding activity instance at runtime.
If the activity is non-interactive and there is an application associated with it, the so-called non-interactive application context is available, offering all access points of the associated non-interactive application.
If the activity has an associated interactive application every application context defined in the application is exposed in the activity scope, too. It offers the access points defined in the application context itself (e.g. JSP Application).
There is another pseudo context for all interactive activities, the default context. This context allows only data mappings without activity access point/activity access path and is used for embedded programming and for manual activities.
According to the previous observations a data mapping as a model element has the following properties:
The triple (ID, direction, context) uniquely identifies a data mapping in the scope of the owning activity.
Non-interactive application activities exploit the following application contexts:
Apart from special cases the non-interactive application context is the context commonly used for offering access to the associated application. The engine context is only used when access to intrinsic attributes of the workflow, like the OID of the current activity instance, is needed.
Figure: Collaborators for Data Manipulation during Activity Execution
Consider the following Java application with completion method convertIntegerToString associated with an activity convert:
String convertIntegerToString(int i)
Consider also two workflow data: intval of type int and person of type Person:
void setAge(String age)
The following data mappings of the activity convert will convert the input data intval appropriately and set the result as the age of the data person:
Figure: Example Data Mappings for the Application Context
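Written out as plain Java, the two mappings perform the following steps; the engine executes them itself at runtime, and Person here is a simplified stand-in defined only for this sketch:

```java
// The mappings of the convert activity, written out by hand.
public class ConvertMappingDemo {
    public static class Person {
        private String age;
        public void setAge(String age) { this.age = age; }
        public String getAge() { return age; }
    }

    public static String execute(int intval, Person person) {
        // IN data mapping: the data intval feeds the application parameter i
        String returnValue = Integer.toString(intval); // stands in for convertIntegerToString(i)
        // OUT data mapping: returnValue feeds the accessor (person, setAge(..))
        person.setAge(returnValue);
        return person.getAge();
    }

    public static void main(String[] args) {
        System.out.println(execute(42, new Person())); // 42
    }
}
```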
Data mappings for interactive activities work slightly differently from those for non-interactive ones:
In essence, this means that the Stardust Process Engine has no knowledge of the application accessors. Additionally, for every single activity there may be different application contexts configured with different execution semantics on the application side.
Figure: Data Mappings in the Life Cycle of an Interactive Activity Instance
There are two possibilities to handle interactive data mappings:
Which strategy is chosen depends on the characteristics of the concrete application context.
Consider an Entity Bean Customer where we want to modify the name of the customer accessible by:
void setCustomerName(String name)
We will model the following data mappings for the JSP application context:
The JSP page has to be enabled to retrieve and set the bridge object representing the customer name, which in reality will exist alongside many other bridge objects representing, e.g., other customer attributes. In the Stardust Workflow Execution Perspective this is supported by providing two maps (for IN and OUT data mappings, respectively) containing the bridge objects referenced by their data mapping IDs.
Creating data mappings for the JSP application is thus an example of the first strategy: the data mappings and bridge objects used by the application client are referenced by their IDs. Note that the Stardust Process Workbench helps you manage such data mappings by making the data mapping IDs editable in these cases.
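A simulated view of this ID-based exchange, with plain maps standing in for the structures the Workflow Execution Perspective provides (the mapping ID "CustomerName" and the page logic are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Simulated exchange of bridge objects between the engine and a JSP page:
// the page reads the IN map and fills the OUT map, both keyed by mapping ID.
public class JspMappingDemo {
    public static Map<String, Object> processPage(Map<String, Object> inMappings) {
        // the page looks up the bridge object by its data mapping ID ...
        String name = (String) inMappings.get("CustomerName");
        // ... and returns the (possibly edited) value under the same ID
        Map<String, Object> outMappings = new HashMap<>();
        outMappings.put("CustomerName", name.toUpperCase());
        return outMappings;
    }

    public static void main(String[] args) {
        Map<String, Object> in = new HashMap<>();
        in.put("CustomerName", "John Doe");
        System.out.println(processPage(in).get("CustomerName")); // JOHN DOE
    }
}
```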
Figure: JSP Application Context: Modifying an Entity Bean Attribute
Manual activities allow using and manipulating data directly without invoking an application. This resembles the discussed behavior of the JSP context type:
More details on default panels/pages are found in the section Manual Activities of the System Integration chapter. To slightly modify our previous example, let's assume that the activity Modify_Customer is implemented as a manual activity having two data mappings in the default context:
These are precisely the same mappings we have specified for the JSP context in the previous example. At runtime, the Stardust Workflow Execution Perspective will render these mappings on the default panel/page as a single editable text field labeled customerName, being initialized with the current customer name. Completing the activity would write any changes in the field value back to the customer Entity Bean.
Figure: Embedded Client Context: Modifying an Entity Bean Attribute
Data mappings in the engine context for interactive activities are executed the same way as for non-interactive activities. This implies that they are executed completely under the control of the process engine.
Depending on its type, an event handler may offer data to its subordinate actions in form of access points.
The set data event action uses these values and applies an accessor to the associated data. In this way a set data action closely resembles an OUT data mapping, combining two accessors to enable data flow from the activity to a workflow data. The crucial difference is the point in time at which this data flow is executed: it happens when the event is caught, not after the activity has completed its work.
When starting a process, the data of the created process instance may be initialized. This can be done as follows:
JMS triggers and Mail triggers offer the option to initialize workflow data by applying parameter mappings. Trigger parameters are provided at runtime in form of access points allowing for parameter mappings. They are configured in the Parameter Mapping panel of the triggers properties dialog. For more information on how to set up trigger parameters please refer to the chapter Working with Triggers.
A parameter mapping can also be performed implicitly when a process is triggered by the trigger process action. A copy of the current data is then passed to the triggered process.
As for applications, it is possible to define custom data types beyond the data types shipped with Stardust. A custom data type defines:
For more information, see the Programming Guide.
Data paths are a way to define read/write accessors to workflow data of a process instance. They are used for embedded programming to gain fine-grained access to data and to visualize process data in the Stardust Portals when defined as descriptor.
To access data paths programmatically at runtime, you can use the provided class org.eclipse.stardust.engine.api.runtime.WorkflowService. Please see the section WorkflowService of the chapter Stardust Services in the Programming Guide for more information on the embedded usage of this Stardust service.
As a model element a data path has the following properties:
Executing a data path at runtime would just execute the corresponding workflow data accessor.
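For a Java-style data, such an accessor is essentially a chain of getter invocations. A simplified sketch of the evaluation, using reflection (the Customer class and the evaluate helper are illustrative; the real engine's evaluation is richer):

```java
import java.lang.reflect.Method;

// Simplified sketch of evaluating a read data path on a Java-style data:
// the path is a chain of getter invocations, resolved here by reflection.
public class DataPathDemo {
    public static class Customer {
        public String getFirstName() { return "John"; }
    }

    public static Object evaluate(Object data, String... getters) {
        try {
            Object current = data;
            for (String getter : getters) {
                Method m = current.getClass().getMethod(getter);
                current = m.invoke(current);
            }
            return current;
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("invalid data path", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(evaluate(new Customer(), "getFirstName")); // John
    }
}
```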
Special cases of data paths are descriptors, which are always evaluated when retrieving an activity instance via the Stardust API, e.g. for a worklist. This is a common shortcut to provide custom rendering attributes or often used data for an activity instance. Descriptors are especially used in the Stardust Workflow Execution Perspective in the Stardust Portal to visualize data relevant for work items in a participant's worklist. Custom descriptors must be defined separately for each process.
Note that timestamp data types like datetime, date and time are currently not supported as descriptors.