2016/12/07

KIE Server router - even more flexibility

Yet another article in the KIE Server series - this time tackling the next steps once you are already familiar with KIE Server and its capabilities.

KIE Server promotes an architecture where many KIE Server instances are responsible for running individual projects (kjars). These in turn might be completely independent domains or, the other way around, related to each other but separated onto different runtimes to avoid a negative impact on each other.


At this point, the client application needs to be aware of all the servers to interact with them properly. In detail, the client application needs to know:
  • location (url) of HR KIE Server
  • location (url) of IT KIE Server 1 and IT KIE Server 2
  • containers deployed to HR KIE Server
  • containers deployed to IT KIE Server (just one of them as they are considered to be homogeneous)
While the available containers are not so difficult to discover and can be retrieved from a running server, knowing about all the locations is trickier - especially in dynamic (cloud) environments where servers can come and go based on various conditions.

To deal with these problems, KIE Server introduces a new component - the KIE Server Router. The router bridges all KIE Servers grouped under the same router and provides a unified view of all of them. The unified view consists of:
  • finding the right server to handle a request
  • aggregating responses from different servers
  • providing efficient load balancing
  • dealing with a changing environment - e.g. added/removed server instances

Then the only thing the client needs to know is the location of the router. The router exposes most of the capabilities of KIE Server over HTTP. It comes with two main responsibilities:
  • proxy to the actual KIE Server instance based on contextual information - container id or alias
  • aggregator of data - to collect information from all distinct server instances in a single client request


There are two types of requests the KIE Server Router supports from the client perspective:
  • Modification requests - POST, PUT and DELETE HTTP methods are all considered as such. The main requirement for them to be properly proxied is that the container id (or alias) is included in the URL.
  • Retrieval requests - GET HTTP methods are seen as such. When they do include a container id they are handled the same way as modification requests.
There is an additional type of request - administration requests - that the KIE Server Router supports, and these exist strictly to allow the router to function properly within a changing environment:
  • register new servers and containers when a server or container starts on any of the KIE Server instances
  • unregister existing servers and containers when a server or container stops on any KIE Server instance
  • list the available configuration of the router - what servers and containers it is aware of
The router itself keeps very limited information; the most important part is being able to route to the correct server instance based on container id. The assumption is that there is only one set of servers hosting a given container id/alias. That does not mean a single server though - there can be as many servers as needed, and they can be added and removed dynamically. The proxy will load balance across all known servers for a given container.

Other than the available containers and servers, the KIE Server Router does not keep any information. This might not cover all possible scenarios, but it does cover quite a few of them.


KIE Server Router comes in two pieces:
  • a proxy that acts as a server
  • a client that is included in KIE Server to integrate with the proxy
The router client binds into the KIE Server life cycle and sends notifications to the KIE Server Router when the configuration changes:
  • when a container is started (successfully) it registers it in the router
  • when a container is stopped (successfully) it unregisters it from the router
  • when the entire server instance is stopped, it unregisters all containers (that are in started state) from the router

The KIE Server Router client is packaged in KIE Server itself but is deactivated by default. It can be activated by setting the router URL via a system property:
org.kie.server.router
which can be one or more valid HTTP URLs pointing to one or more routers this server should be registered in.
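For example (assuming a router is listening locally on port 9000 - adjust host and port to your setup), the property can be passed when starting KIE Server; more than one router URL can be given as a comma separated list:

-Dorg.kie.server.router=http://localhost:9000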

The KIE Server Router exposes an API that is completely compatible with the KIE Server Client interface, so you can use the Java client to talk to the router just as you would when talking to any KIE Server instance (see the sketch below). It has some limitations though:
  • The router cannot be used to deploy new containers - it does not know the given container id yet and thus cannot decide which server the container should be deployed to
  • The router cannot deal with modification requests to KIE Server endpoints that are not based on container id
    • Jobs
    • Documents
  • The router will return a hard coded response when KIE Server info is requested
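Below is a minimal sketch of pointing the standard KIE Server Java client at the router instead of an individual server. The URL, credentials, container alias and process id are placeholders - replace them with values from your environment.

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.ProcessServicesClient;

public class RouterClientExample {
    public static void main(String[] args) {
        // point the client at the router URL instead of a single KIE Server
        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                "http://localhost:9000", "user", "password");
        config.setMarshallingFormat(MarshallingFormat.JSON);
        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);

        // start a process using the container alias - the router locates the right server
        ProcessServicesClient processClient = client.getServicesClient(ProcessServicesClient.class);
        Long processInstanceId = processClient.startProcess("hr", "hr.hiring");
        System.out.println("Started process instance " + processInstanceId);
    }
}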
See basic KIE Server Router capabilities in action in the following screencast:


Response aggregators

Retrieval requests collect data from various sources, but they must return all the data aggregated into a single, well structured response. This is where response aggregators come into the picture. There is a dedicated response aggregator per data format:
  • JSON
  • JAXB
  • XStream
The XML based aggregators (both JAXB and XStream) use Java SE XML parsers with some hints on which elements are subject to aggregation, while the JSON aggregator uses the org.json library (the smallest one) to aggregate JSON responses.

All aggregated responses are compatible with the data model returned by KIE Server and thus can be consumed by the KIE Server Client without any issues.

Aggregators support both sorting and pagination of the aggregated results. Aggregation, sorting and paging are done on the router side, though the initial sorting is also done on the actual KIE Server instances to make sure it is properly respected on the source data.

Paging, on the other hand, is a bit more tricky, as the router needs to ask the KIE Servers to always return everything from page 0 up to the requested page, so that all KIE Servers are properly taken into account before the requested page is returned.
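To illustrate the idea (this is only a sketch of the merge-sort-and-slice approach described above, not the router's actual implementation), assuming each server has already returned its rows covering pages 0 up to the requested page:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PagingSketch {

    // merge per-server results, sort them globally and cut out the requested page
    public static <T> List<T> aggregatePage(List<List<T>> perServerResults,
                                            Comparator<T> sortOrder,
                                            int page, int pageSize) {
        List<T> all = new ArrayList<>();
        for (List<T> serverResult : perServerResults) {
            all.addAll(serverResult);      // each list covers pages 0..page from one server
        }
        all.sort(sortOrder);               // global sort across servers
        int from = Math.min(page * pageSize, all.size());
        int to = Math.min(from + pageSize, all.size());
        return all.subList(from, to);      // only the requested page goes back to the client
    }
}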

See paging and sorting in action in the following screencast.



That concludes a quick tour of the KIE Server Router, which should provide more flexibility in dealing with more advanced KIE Server environments.

Comments, questions, ideas as usual - welcome

2016/11/21

Pluggable container locator and policy support in KIE Server

In an earlier article, container locator support - commonly known as aliases - was introduced. At that time it defaulted to using the latest available version of the project configured with the same alias. This idea was really well received and has thus been further enhanced.

Pluggable container locator


First of all, the latest available container is not always the way to go. There might be a need for time bound container selection for a given alias, for example:

  • there are two containers for a given alias
  • even though a new version is already deployed, it should not be used until a predefined date
So users might implement their own container locator and register it by including that implementation, bundled into a jar file, on the KIE Server class path. As usual, the discovery mechanism is based on ServiceLoader, so the jar must include:
  • an implementation of the ContainerLocator interface
  • a file named org.kie.server.services.api.ContainerLocator placed in the META-INF/services directory
  • the fully qualified class name of the ContainerLocator implementation listed in the META-INF/services/org.kie.server.services.api.ContainerLocator file
Since there might be multiple implementations present on the class path, the container locator to be used needs to be given via a system property:
  • org.kie.server.container.locator - where the value should be the class name of the ContainerLocator implementation - the simple name, not the FQCN
which will then be used instead of the default latest container locator. A sketch of such an implementation is shown below.
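A minimal sketch of a custom locator - the method name and types follow the KIE Server 7 services API at the time of writing, so treat the exact signature as an assumption and verify it against your version; the selection logic itself is purely illustrative:

import java.util.List;

import org.kie.server.services.api.ContainerLocator;
import org.kie.server.services.api.KieContainerInstance;

public class TimeBoundContainerLocator implements ContainerLocator {

    @Override
    public String locateContainer(String alias, List<? extends KieContainerInstance> containers) {
        if (containers == null || containers.isEmpty()) {
            return null;
        }
        // illustrative only: a real implementation would pick the container whose
        // activation date (tracked elsewhere) has already passed; here we simply
        // fall back to the first registered container for the alias
        return containers.get(0).getContainerId();
    }
}

The jar would then contain META-INF/services/org.kie.server.services.api.ContainerLocator with a single line such as com.acme.TimeBoundContainerLocator (a hypothetical package), and KIE Server would be started with -Dorg.kie.server.container.locator=TimeBoundContainerLocator.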

So far so good, but what should happen with containers that are now idle or should not be used anymore? Since the container locator makes sure the selected (by default latest) container is used in most cases, there might be containers that no longer need to be on the runtime. This is especially important in environments where new versions of containers are frequently deployed, which might lead to increased memory use. Thus efficient cleanup of unused containers is a must.

Pluggable policy support

For this, policy support was added - though not only for this, as policies are a general purpose tool within KIE Server. So what's a policy?

A policy is a set of rules that are applied by KIE Server periodically; each policy can be registered to be applied at a different time. Policies are discovered and registered when KIE Server starts, but they are not started by default.
The reason for this is that the discovery mechanism (ServiceLoader) is based on class path scanning and is thus always performed, regardless of whether the policies should be used or not. So there is an additional step required to activate a policy.

Policy activation is done via a system property when booting KIE Server:
  • org.kie.server.policy.activate - where the value is a comma separated list of policies (their names) to be activated
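For example (the policy names below are hypothetical placeholders - use the names your policies actually report):

-Dorg.kie.server.policy.activate=MyCleanupPolicy,MyBlueGreenPolicy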
When the policy manager activates a given policy it will respect its life cycle:
  • it will invoke the start method of the policy
  • it will retrieve the interval from the policy (invoke the getInterval method)
  • it will schedule periodic execution of that policy based on the given interval - if the interval is less than 1 the policy is ignored
NOTE: scheduling is based on the interval for both the first execution and the repeated executions - meaning the first execution takes place only after one interval has elapsed. The interval must be given in milliseconds.
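The scheduling semantics described in the NOTE can be illustrated with plain JDK scheduling (this is just an illustration of the behavior, not the actual KIE Server code):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PolicySchedulingSketch {
    public static void main(String[] args) {
        long interval = 60_000L; // interval in milliseconds, as returned by the policy's getInterval()
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // the first run happens only after one full interval, then it repeats every interval
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("applying policy"),
                interval, interval, TimeUnit.MILLISECONDS);
    }
}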

Similarly, when KIE Server stops it will call the stop method of every activated policy to properly shut it down.

Policy support can be used for various use cases; one that comes out of the box complements the container locator (with its default latest-only behavior). There is a policy available in KIE Server that will undeploy containers other than the latest. This policy is applied once a day by default, but that can be reconfigured via system properties. KeepLatestContainerOnlyPolicy will attempt to dispose containers that have lower versions, though the attempt might fail. The reasons for failure may vary, but the most common one is active process instances for the container being disposed. In that case the container is left as started and the attempt is retried the next day (or after another reconfigured period of time).

NOTE: KeepLatestContainerOnlyPolicy is aware of the controller, so it will notify the controller that the policy was applied and stop the container in the controller as well - only stop it, not remove it. As with any policy, this one must be activated via the system property as well.

This opens the door to a tremendous number of policy implementations - from cleanup, through blue-green deployments, to reconfiguring runtimes - all performed periodically and automatically by KIE Server itself.

As always, comments and further ideas are more than welcome.

2016/11/08

Administration interfaces in jBPM 7

In many cases, when working with business processes, users end up in situations that were not foreseen, e.g. a task was assigned to a user who left the company, a timer was scheduled with the wrong expiration time, and so on.

jBPM has had the capabilities to deal with these since its early days, though it required substantial knowledge of jBPM's low level APIs. These days are now over: jBPM version 7 comes with an administration API that covers:

  • process instance operations
  • user task operations
  • process instance migration

These administration interfaces are supported in jBPM services and in KIE Server, so users have the full power to perform quite advanced operations when using jBPM as the process engine, regardless of whether it is embedded (jBPM services API) or used as a service (KIE Server).

Let's start with a quick look at what sort of capabilities each of the services provides.

Process instance Administration


The process instance administration service provides operations around the process engine and individual process instances. The following is a complete list of supported operations with short descriptions:
  • get process nodes - by process instance id - returns all nodes (including embedded subprocesses) that exist in the given process instance. Even though the nodes come from the process definition, it's important to get them via the process instance to make sure the given node exists and has a valid node id that can be used successfully with the other admin operations
  • cancel node instance - by process instance id and node instance id - does exactly what the name suggests - cancels the given node instance within the process instance
  • retrigger node instance - by process instance id and node instance id - retriggers by first canceling the active node instance and then creating a new instance of the same type - it essentially recreates the node instance
  • update timer - by process instance id and timer id - updates the expiration of an active timer. The update takes into consideration the time elapsed since the timer was scheduled. For example: if a timer was initially created with a delay of 1 hour and after 30 minutes it's updated to 2 hours, it will expire in 1.5 hours from the time it was updated. It allows updating:
    • delay - duration after which the timer expires
    • period - interval between timer expirations - applicable only for cycle timers
    • repeat limit - limits the expirations to the given number - applicable only for cycle timers
  • update timer relative to current time - by process instance id and timer id - similar to the regular timer update but the update is relative to the current time - for example: if a timer was initially created with a delay of 1 hour and after 30 minutes it's updated to 2 hours, it will expire in 2 hours from the time it was updated
  • list timer instances - by process instance id - returns all active timers found for the given process instance
  • trigger node - by process instance id and node id - allows triggering (instantiating) any node in the process instance at any time

Complete ProcessInstanceAdminService can be found here.
KIE Server client version of it can be found here.
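As a quick illustration of the KIE Server client side, the sketch below lists active timers and updates one of them. The container id, process instance id and timer id are placeholders, and the exact method signatures (including the time unit expected by updateTimer) should be verified against your KIE Server client version:

import org.kie.server.client.KieServicesClient;
import org.kie.server.client.admin.ProcessAdminServicesClient;

public class ProcessAdminExample {

    public void updateTimer(KieServicesClient client) {
        ProcessAdminServicesClient adminClient =
                client.getServicesClient(ProcessAdminServicesClient.class);

        String containerId = "my-container";  // placeholder
        Long processInstanceId = 1L;          // placeholder
        long timerId = 1L;                    // placeholder, taken from the list of active timers

        // list all active timers for the given process instance
        adminClient.getTimerInstances(containerId, processInstanceId);

        // update the timer: new delay, period and repeat limit
        adminClient.updateTimer(containerId, processInstanceId, timerId, 7200, 0, 0);
    }
}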


User task administration


User task administration mainly provides useful methods to manipulate task assignments (users and groups), task data, and automatic (time based) notifications and reassignments. The following is a complete list of operations supported by the user task administration service:
  • add/remove potential owners - by task id - supports both users and groups, with an option to remove existing assignments
  • add/remove excluded owners - by task id - supports both users and groups, with an option to remove existing assignments
  • add/remove business administrators - by task id - supports both users and groups, with an option to remove existing assignments
  • add task inputs - by task id - modify task input content after the task has been created
  • remove task inputs - by task id - completely remove task input variable(s)
  • remove task output - by task id - completely remove task output variable(s)
  • schedule new reassignment to given users/groups after a given time elapses - by task id - schedules automatic reassignment based on a time expression and the state of the task:
    • reassign if not started (meaning the task was not moved to InProgress state)
    • reassign if not completed (meaning the task was not moved to Completed state)
  • schedule new email notification to given users/groups after a given time elapses - by task id - schedules automatic notification based on a time expression and the state of the task:
    • notify if not started (meaning the task was not moved to InProgress state)
    • notify if not completed (meaning the task was not moved to Completed state)
  • list scheduled task notifications - by task id - returns all active task notifications
  • list scheduled task reassignments - by task id - returns all active task reassignments
  • cancel task notification - by task id and notification id - cancels (and unschedules) a task notification
  • cancel task reassignment - by task id and reassignment id - cancels (and unschedules) a task reassignment
NOTE: all user task admin operations must be performed as a business administrator of the given task - that means every single call to the user task admin service is checked for authorization, and only business administrators of the given task are allowed to perform the operation.

Complete UserTaskAdminService can be found here.
KIE Server client version of it can be found here.
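A small sketch of the KIE Server client side - adding a potential owner to a task without removing the existing assignment. The container id, task id and user name are placeholders, and the client API shape should be verified against your KIE Server version:

import java.util.Arrays;

import org.kie.server.api.model.admin.OrgEntities;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.admin.UserTaskAdminServicesClient;

public class UserTaskAdminExample {

    public void addPotentialOwner(KieServicesClient client) {
        UserTaskAdminServicesClient taskAdminClient =
                client.getServicesClient(UserTaskAdminServicesClient.class);

        String containerId = "my-container";  // placeholder
        Long taskId = 1L;                     // placeholder

        // add 'john' as an additional potential owner; 'false' keeps existing assignments
        OrgEntities owners = OrgEntities.builder().users(Arrays.asList("john")).build();
        taskAdminClient.addPotentialOwners(containerId, taskId, false, owners);
    }
}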


Process instance migration


ProcessInstanceMigrationService provides an administrative utility to move given process instance(s) from one deployment to another, or from one process definition to another. Its main responsibility is to allow a basic upgrade of the process definition behind a given process instance. That might include mapping of currently active nodes to other nodes in the new definition.

Migration does not deal with process or task variables; they are not affected by migration. Essentially, process instance migration means a change of the underlying process definition that the process engine uses to move on with the process instance.

Even though process instance migration is available, it's recommended to let active process instances finish and then start new instances with the new version whenever possible. In case that approach can't be used, migration of active process instances needs to be carefully planned before execution, as it might lead to unexpected issues. The most important things to take into account are:
  • is the new process definition backward compatible?
  • are there any data changes (variables that could affect process instance decisions after migration)?
  • is there a need for node mapping?
Answers to these questions might save a lot of headaches and production problems after migration. It's best to always stick with backward compatible processes - e.g. extending the process definition rather than removing nodes. Though that's not always possible, and in some cases there is a need to remove certain nodes from the process definition. In that situation, migration needs to be instructed how to map removed nodes to nodes in the new definition, in case an active process instance is currently in such a node.

Complete ProcessInstanceMigrationService can be found here.
KIE Server version of it can be found here.
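A sketch of migrating a single process instance via the KIE Server client - the container ids, process instance id, process id and node ids are placeholders, and the method signature should be verified against your client version:

import java.util.HashMap;
import java.util.Map;

import org.kie.server.api.model.admin.MigrationReportInstance;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.admin.ProcessAdminServicesClient;

public class MigrationExample {

    public void migrate(KieServicesClient client) {
        ProcessAdminServicesClient adminClient =
                client.getServicesClient(ProcessAdminServicesClient.class);

        // map node ids removed in the new definition to their replacements (hypothetical ids)
        Map<String, String> nodeMapping = new HashMap<>();
        nodeMapping.put("_oldNodeId", "_newNodeId");

        MigrationReportInstance report = adminClient.migrateProcessInstance(
                "my-project_1.0",      // source container (placeholder)
                1L,                    // process instance id (placeholder)
                "my-project_2.0",      // target container (placeholder)
                "my-project.process",  // target process id (placeholder)
                nodeMapping);

        System.out.println("Migration successful: " + report.isSuccessful());
    }
}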

With this, I'd like to emphasize that administrators of jBPM should now be well equipped with enough tools for the most common operations they might face. Obviously that won't cover all possible cases, so we are more than interested in users' feedback on what else might be needed as an admin function. So share it!

2016/10/25

Case management - jBPM v7 - Part 3 - dynamic activities

It's time for the next article in the "Case Management" series; this time let's look at dynamic activities that can be added to a case at runtime. Dynamic means that the process definition behind the case has no such node/activity defined, and thus it cannot simply be signaled as was done for some of the activities in the previous articles (Part 1 and Part 2).

So what can be added to a case as dynamic activity?

  • user task
  • service task - which is pretty much any type of service task that is implemented as work item 
  • sub process - reusable

User and service tasks are quite simple and easy to understand; they are just added to the case instance and immediately executed. Depending on the nature of the task, it might start and wait for completion (user task) or it might finish directly after execution (service task). Although most service tasks (as defined in the BPMN2 spec - Service Task) will be invoked synchronously, they might be configured to run in the background or even wait for an external signal to be completed - it all depends on the implementation of the work item handler.
A subprocess is slightly different in the expectations the process engine has - the process definition that is going to be created as a dynamic process must exist in the kjar. That is to make sure the process engine can find that process by its id to execute it. There are no restrictions on what the subprocess does; it can be synchronous without wait states, or it can include user tasks or other subprocesses. Moreover, such a created subprocess will have its correlation key set with the first property being the case id of the case where the dynamic task was created. So from the case management point of view it will belong to that case and thus see all case data (from the case file - see more details about the case file in Part 2).

Create dynamic user task

To create a dynamic user task, a few things must be given:
  • task name
  • task description (optional, though recommended)
  • actors - comma separated list of actors to assign to the task; can refer to case roles for dynamic resolution
  • groups - same as for actors but referring to groups; again, case roles can be used
  • input data - task inputs to be made available to task actors
A dynamic user task can be created via the following endpoint:

Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/tasks

where 
  • itorders is container id
  • IT-0000000001 is case id
Method::
POST

Body::
{
 "name" : "RequestManagerApproval",
 "data" : {
  "reason" : "Fixed hardware spec",
  "caseFile_hwSpec" : "#{caseFile_hwSpec}"
  }, 
 "description" : "Ask for manager approval again",
 "actors" : "manager",
 "groups" : "" 
}

This will then create a new user task associated with case IT-0000000001, and the task will be assigned to the person who was assigned to the case role named manager. The task will have two input variables:
  • reason
  • caseFile_hwSpec - defined as an expression to allow runtime capturing of process/case data
There might be a form defined to provide a user friendly UI for the task, which will be looked up by task name - in this case it's RequestManagerApproval (and the form file name should be RequestManagerApproval-taskform.form in the kjar).
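The same can be done from Java - a sketch using the KIE Server case client is shown below; the container id, case id and data are placeholders, and the exact method signature should be verified against your KIE Server client version:

import java.util.HashMap;
import java.util.Map;

import org.kie.server.client.CaseServicesClient;
import org.kie.server.client.KieServicesClient;

public class DynamicTaskExample {

    public void addDynamicTask(KieServicesClient client) {
        CaseServicesClient caseClient = client.getServicesClient(CaseServicesClient.class);

        Map<String, Object> data = new HashMap<>();
        data.put("reason", "Fixed hardware spec");

        // name, description, actors (case role), groups, and task input data
        caseClient.addDynamicUserTask("itorders", "IT-0000000001",
                "RequestManagerApproval", "Ask for manager approval again",
                "manager", "", data);
    }
}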

Create dynamic service task

Service tasks are slightly less complex from a general point of view, though they might need more data to properly perform the execution. Service tasks require the following things to be given:
  • name - name of the activity
  • nodeType - type of the node, which is then used to find the work item handler
  • data - map of data needed to properly deal with the execution
A service task can be created with the same endpoint as a user task, with a different body payload.
Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/tasks

where 
  • itorders is container id
  • IT-0000000001 is case id
Method::
POST

Body::
{
 "name" : "InvokeService",
 "data" : {
  "Parameter" : "Fixed hardware spec",
  "Interface" : "org.jbpm.demo.itorders.services.ITOrderService",
  "Operation" : "printMessage",
  "ParameterType" : "java.lang.String"
  }, 
 "nodeType" : "Service Task"
}

In this example, a Java based service is executed. It consists of an interface that is the public class org.jbpm.demo.itorders.services.ITOrderService, with a public printMessage method taking a single argument of type String. Upon execution, the Parameter value is passed to the method.

The number, names and types of data given when creating a service task depend completely on the implementation of the service task's handler. In this example, org.jbpm.process.workitem.bpmn2.ServiceTaskHandler was used.

NOTE: For any custom service task, make sure its handler is registered in the deployment descriptor, in the Work Item Handlers section, where the name is the same as the nodeType used when creating the dynamic service task.

Create dynamic subprocess

A dynamic subprocess expects only optional data to be provided; there are no special parameters as there are for tasks, so it's quite straightforward to create.

Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/processes/itorders-data.place-order

where 
  • itorders is container id
  • IT-0000000001 is case id
  • itorders-data.place-order is the process id of the process to be created
Method::
POST

Body::
{
 "any-name" : "any-value"
}

Mapping of output data

Typically, when dealing with regular tasks or subprocesses, users define data output associations to instruct the engine which output of the source (task or subprocess instance) should be mapped to which process instance variable. Since dynamic tasks have no data output definition, there is only one way to put output from a task/subprocess into the process instance - by name. This means the name of the returned output of a task must match the name of the process variable it should be mapped to; otherwise the output is ignored. Why is that? It's to safeguard the case/process instance from being polluted with unrelated variables, so that only expected information is propagated back to the case/process instance.

Look at this in action

As usual, there are screencasts to illustrate this in action. First comes the authoring part, which shows:
  • creation of additional form to visualize dynamic task for requesting manager approval
  • simple java service to be invoked by dynamic service task
  • declaration of service task handler in deployment descriptor


Next, it is shown how it actually works in the runtime environment (KIE Server).




The complete project that can be imported and executed can be found on GitHub.

So that concludes part 3 of case management in jBPM 7. Comments and ideas more than welcome. And that's still not all that is coming :)

2016/10/19

Case management - jBPM v7 - Part 2 - working with case data

In Part 1, the basic concepts around case management brought by jBPM 7 were introduced. It was a basic example (IT hardware order handling), limited to just moving through case activities with basic data to satisfy milestone conditions.

In this article, the case file will be described in more detail, along with how it can be used from within a case and a process. So let's start with a quick recap of the variables available in processes.

There are several levels where variables can be defined:

  • process level - process variable
  • subprocess level - subprocess variable
  • task level - task variable
Obviously the process level is the entry point that all the others take their variables from. Meaning, if a process instance creates a subprocess it will usually include a mapping from the process level to the subprocess level. Similarly for tasks - tasks get their variables from the process level.
In such cases the variable is copied to ensure isolation for each level. That is, in most cases, the desired behavior - unless you need to keep all variables up to date at all times, regardless of the level they are in. That, in turn, is the usual situation in case management, which expects the most up to date variables at any time in the case instance, regardless of their level.

That's why case management in jBPM is equipped with the Case File, of which there is only one for the entire case, regardless of how many process instances compose the case instance. Storing data in the case file promotes reuse instead of copying, so each process instance can take a variable directly from the case file, and the same applies for updates. There is no need to copy the variable around - simply refer to it from your process instance.

Support for case file data is provided at design time by marking a given variable as a case file variable.


As can be seen in the above screenshot, the variable hwSpec is marked as a case file variable, while the other one (approved) is a process variable. That means hwSpec will be available to all processes within the case, and moreover it will be accessible directly from the case file even without process instance involvement.

Next, case variables can be used in data input and output mappings.


Case file variables are prefixed with caseFile_ so the engine can handle them properly. A simplified version (without the prefix) is expected to work as well, though for clarity and readability it's recommended to always use the prefix.

Extended order hardware example

In part 1, there was a very basic case definition, with no data, for handling IT hardware orders. In this article we extend the example to illustrate:
  • use of case file variables
  • use of documents
  • sharing of information between process instances via the case file
  • use of a business process (via call activity) to handle the place order activity


The following screencast shows all the design time activities to extend the part 1 example, including the awesome feature to copy an entire project!




So what was done here:

  • created a new business process - place-order - that is responsible for the place order activity instead of the script task from the previous example
  • defined case file variables:
    • hwSpec - which is a physical document that needs to be uploaded
    • ordered - which is an indication for Milestone 1 to be achieved
  • replaced the script task for the Place order activity with a reusable subprocess - important to note is that there is no variable mapping in place, everything is taken directly from the case file
  • generated forms to handle the file upload and slightly adjusted their look

With these few simple steps our case definition is enhanced with quite a few new features, making its applicability much better. It's quite common to include files/documents in a case, and they should still be available even if the process instance that uploaded them is gone. That's provided by the case file, which is there as long as the case instance has not been destroyed.

Let's now run the example to see it in action




The execution is similar to part one, meaning that to start the case we need to use the REST API. Worth noting here is that we made a new version of the project:

  • group id: org.jbpm.demo
  • artifact id: itorders
  • version: 2.0
which was then deployed on top of the first version, in the exact same KIE Server. Even though both versions are running, the URL to start the case didn't change:


Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/itorders.orderhardware/instances

where

  • itorders is the container alias that was deployed to KIE Server
  • itorders.orderhardware is case definition id

Method: POST

As described above, when a new case is started it should provide the basic configuration - role assignments:

POST body::
{
  "case-data" : {  },
  "case-user-assignments" : {
    "owner" : "maciek",
    "manager" : "maciek"
  },
  "case-group-assignments" : {
    "supplier" : "IT"
 }
}

itorders is an alias that, when used, will always select the latest version of the project. If there is a need to explicitly pick a given version, simply replace the alias with the container id (itorders_2.0 or itorders_1.0).

Once the process is started, the supplier (based on the role assignment - the IT group) will have a task to complete to provide the hardware specification - upload a document. Then the manager can review the specification and approve (or not) the order. Then it goes to the subprocess to actually handle the ordering, which, once done, will store the status in the case file, which will in turn trigger milestone 1.

Throughout all these user oriented activities, the case file information (hwSpec) is shared without any need to copy it around. Moreover, there was no need to configure anything to handle documents either - that is all done by creating a case project, which by default sets up everything that is needed.

At any time (as long as the case has not been destroyed) you can get the case file to view the data, using the following endpoint:

Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/caseFile 

where:
  • itorders - is the container alias
  • IT-0000000001 - is the case ID
Method: GET



With this, part 2 concludes, with a note that since this is only a slightly enhanced order hardware case, it's certainly not all you can do with it - so stay tuned for more :)

Try it yourself

As usual, the complete source code is located on GitHub.

2016/10/10

Case management - jBPM v7 - Part 1

This article starts a new series of blog posts about the case management feature coming in jBPM v7, illustrating its capabilities with complete examples that will get more complex/advanced with each part.

One of the most frequently requested features in jBPM is so called Case Management. Case management can mean different things depending on who you talk to, so I'd like to start with a small scope definition of what it means in the context of jBPM (at the moment, as that might change based on feedback, supported features and use cases, and further evolution).

Case management can best be described when compared to business processes. Business processes are usually modeled as flow charts with clearly defined paths to reach a business goal. These processes usually have one starting point (they might have more) and are structurally connected to build an end to end flow of work and data.



Cases, on the other hand, are more dynamic; they provide room for improvements as the case evolves, without the need to foresee all possible actions in advance. So a case definition usually consists of loosely coupled process fragments that can be connected (directly or indirectly) to lead to certain milestones and finally the business goal.

Looking at different notations that can be used for case management, processes and cases might be represented differently:

  • BPMN2
  • CMMN
jBPM comes with case support based on BPMN2, as most users are familiar with this notation and most, if not all, features can be represented with BPMN2 constructs. That's at least a starting point, which might be revisited further on. A good comparison between BPMN2 and CMMN was published by Bruce Silver.

This article series will introduce readers to case management support gradually, adding more features as we go so as not to provide too much detail at once, and letting the described features be backed with examples that can be seen (screencasts) and executed in an actual jBPM v7 environment.

Case project

The first thing to start with is to create a Case project - a special type of project in KIE workbench that, on top of a regular project, configures it for case management:
  • sets the runtime strategy to Per Case
  • configures marshallers for the case file and documents
  • creates WorkDefinition.wid files in the project and its packages to ensure case related nodes (e.g. Milestone) are available in the palette


Case definition

So let's start with a basic case definition example that covers the following use case - IT hardware orders. As in any company, there is a need from time to time to order new IT equipment - such as computers, phones, etc. This kind of system can be represented well with case management, as it usually deals with a number of dynamic decisions that might influence the flow.

A case definition is created in the authoring perspective in KIE workbench - it expects a name, a location and optionally a case ID prefix. What's that? The case ID prefix is a configurable element that allows different types of cases to be easily distinguished. The default mechanism is that the prefix is followed by a generated id in the following format:

ID-XXXXXXXXXX

where ID is the prefix and X is a generated number producing a unique id together with the prefix. If the prefix is not given, it defaults to CASE, and each subsequent instance of that case definition will be:
CASE-0000000001
CASE-0000000002
CASE-0000000003

or when prefix is set to HR
HR-0000000001
HR-0000000002
HR-0000000003

A case definition is always an ad hoc process definition, meaning it is a dynamic process and so does not require explicit start nodes.

Once the case definition is created, it's time to define the roles involved in the usual case of ordering new IT hardware:
  • owner - the person who requests the hardware (there can be only one)
  • manager - the direct manager of the owner, who approves the requested hardware
  • supplier - a set of people who can order and deliver the physical equipment (usually more than one)
When the roles are known, case management must ensure that these are not hardcoded to a single set of people/groups as part of the case definition, and that they can differ per case instance. This is where case role assignments come into the picture; they can be:
  • given when the case starts
  • set at any given point in time while the case is active
  • removed at any given point in time while the case is active
The second and third options do not alter the task assignments for already active tasks.

What is important to note here is that in case management users should always use roles for task assignments instead of actual user/group names; that is to make the case as dynamic as possible, so the actual user/group assignment is done as late as possible. It's similar to process variables, though without the expression syntax (#{variable}).

Let's take a look at our case definition:


So what do we have here? The first thing that is directly visible is that the process has no start nodes. Does that mean there is no way to tell what is going to be triggered when a new instance of this case definition is created?
Quite the opposite - nodes that have no incoming connections and are marked as Adhoc Autostart (a property of a node) will be automatically triggered when the instance is started.

In this case these are:
  • Prepare hardware spec
  • Milestone 1: Order placed
Both of these nodes are wait states, meaning they are triggered but not left; they wait for further action:
  • Prepare hardware spec - waits for the supplier to provide the spec and complete the task
  • Milestone 1: Order placed - waits for a condition to be met - there is a case file variable named "ordered" with value true
Hmmm, but what is a case file then? The Case File is like a bucket of data for the entire case instance. Since a case can span a number of process instances, instead of copying data back and forth (which, first of all, might be expensive and, second, can lead to the use of out of date information), process instances can write to and read from the case file, which is accessible to all process instances that belong to the same case. The CaseFile is stored in working memory and is thus persistable, the same as the ksession and process instances - meaning it can use marshaling strategies to store data in different places, e.g. documents, JPA entities, etc. What's more important - it is a fact in working memory and can thus be the subject of rules.

The Milestone actually uses the case file as a condition, triggering only if there is an ordered variable available in the case file and its value is true. Only then will the milestone be completed and move on to the next node.

Another part worth noting is the end signals at the end of the Milestone 1 and Milestone 2 fragments. These signals are responsible for triggering the next Milestone in line - but again, only triggering and not completing it, as it will wait on its condition. The scope of the signal is the process instance only, so completing Milestone 1 in the first case instance will not cause any side effects on other active case instances of the same definition.

Here is the complete design of this project and case definition as a screencast.





Complete source code of this project (and the entire repository) can be found here. This repository can be cloned directly to workbench for build and deploy.

... speaking of build and deploy....

The project can be built and deployed directly in the workbench and (assuming you have a KIE Server connected to the workbench) provisioned to the execution environment where it can be started and worked on.

At the moment the workbench does not provide any case management UI, so we will use REST calls to start a case and put data into the case file, but we can use the workbench for user task interaction and overall monitoring - process instance logs, process instance image, active nodes, etc.

Start new case

To start a new case, use the following endpoint:
Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/itorders.orderhardware/instances

where

  • itorders is the container alias that was deployed to KIE Server
  • itorders.orderhardware is case definition id
As described above, when a new case is started it should provide the basic configuration - role assignments:

POST body::
{
  "case-data" : {  },
  "case-user-assignments" : {
    "owner" : "maciek",
    "manager" : "maciek"
  },
  "case-group-assignments" : {
    "supplier" : "IT"
 }
}

At the moment case-data is empty, as we don't supply any data/information to the case. But we do configure our defined roles. Two of them are user assignments (as can be seen in the above screencast, they are referenced in the Actor property of user tasks) and the third is a group assignment (as it is referenced in the Groups property of a user task).

Once successfully started, it will return a case ID that should look like
IT-0000000001

This case can then already be seen in the process instance list in the workbench, and its tasks should be available in the task perspective. So the tasks can be completed and various milestones will be achieved, until it reaches the Milestone that requires the shipped variable to be present in the case file.

Insert case file data

Case file data can easily be inserted into an active case using the REST API.
Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/caseFile/shipped

where
  • itorders is the container alias that was deployed to KIE Server
  • IT-0000000001 is the unique id of a case instance
  • shipped is the name of the case file variable to be set/replaced
POST body::
true

The same should later be repeated to insert the "delivered" case file variable to achieve Milestone 3 and move to the final task - Customer Satisfaction Survey. And that's all for this basic case example.

Execution in action can be found in this screencast



Comments and ideas are more than welcome. In addition, contributions on what cases should be provided as examples are wanted!

2016/09/30

Improved container handling and updates in KIE Server

KIE Server allows deploying multiple kjars, even the same project in different versions. It is actually quite common to add a new version of a project (kjar) next to an already running one. That in turn requires unique container ids for each project.
Let's take an example - at the beginning we start with the first project:

Group id: org.jbpm
Artifact id: my-project
Version: 1.0

This project has a single process inside, and once built we deploy it with the container id set to my-project.

Then we realize that the process needs an update (of whatever type), so we need to increase the project version and again build and deploy it to KIE Server. That gives us another project version:

Group id: org.jbpm
Artifact id: my-project
Version: 2.0

Then we cannot deploy it with the same container id (my-project) so we need to change it to something else ... my-project2 (most likely ;))

So what's wrong with that? Well, first of all the naming convention starts to be affected by the versioning schema used, which might be good or bad depending on how it's used. But more importantly, clients interacting with these projects must be aware of the versions all the time.
That in turn binds the client application to the release cycle of the projects, in particular to their new versions (and by that to processes and other assets).

What can we do about it...?
The improvement that comes in version 7 allows aliases to be defined for containers (a container being the runtime representation of a kjar). An alias can be added to as many containers as needed, and by default (when not given) it uses the artifact id of the project.
Aliases are not constrained to the same group and artifact ids, so projects with different GA coordinates can use the same alias. The alias can then be used all the time when interacting with KIE Server; the behavior differs depending on the operation performed, as it might require some additional logic to figure out which is the actual container to be used. Let's examine these situations.

Starting new process instance

As noted above, unique container ids prevent clients from using a single endpoint to start process instances of a given process id, as they always need to provide the container id, which differs between versions. When an alias is used instead, the client application can always use the same container alias (instead of the container id) to always start the latest version of the process.

To start a process of the latest version, KIE Server will take the container alias, find all containers that declare that alias, and then search for the latest by comparing project versions - it's based on a Maven-like version comparator, though it only takes into account the version and not the group or artifact id.

So if we deploy the first project and then issue the following request:
http://localhost:8230/kie-server/services/rest/server/containers/my-project/processes/evaluation/instances

where: 
  • my-project is container alias
  • evaluation is process id
it will start a new instance from the org.jbpm:my-project:1.0 project.

Next, if we deploy the 2.0 version and then issue the exact same request (same URL), the new instance will be started from the org.jbpm:my-project:2.0 project.

That gives us the option to continuously deploy new versions and ensure that clients who rely on our processes will always use the latest version of the processes available in the system. It always works on live information, so if you then remove version 2.0 from KIE Server and start another instance, it will be back on the org.jbpm:my-project:1.0 project, as that is the latest one available.

Interacting with existing process instance

Interaction with existing process instances depends on the process instance id. To be able to perform any operation on a process instance, its id must be given. Based on that information, KIE Server will identify the correct container id to be used.
The container id is needed to be able to:
  • unmarshal incoming data (like variables)
  • find correct runtime manager 
  • marshal outgoing data
Both incoming and outgoing data might refer to project specific types (which can change between versions), and thus it's important that the right one is used.

Interacting with tasks

Similar to process instances, interaction with tasks depends on the task id. KIE Server will locate the proper container by task id to be able to deal with the request correctly. The container id is used for the exact same operations as in the process instance case (unmarshaling and marshaling data and finding the correct runtime manager).

Interacting with process definition image and forms

Interacting with the process definition image and process forms works the same way as starting a process - it always returns the latest one when using a container alias.

You can see this in action in the following screencast. The screencast illustrates use with the workbench integrated with KIE Server, so as you can see, when you press build and deploy you will be given all the details already filled in:
  • container id is artifactId _ version
  • alias is artifact id
Although you're in full control and can change both defaults to whatever you need.



And again, container aliases are set either explicitly (if given when creating containers) or implicitly (based on the artifact id). That does not mean you have to use them, though. In some cases container ids are good enough, and they will still work the same way as they do now.

Hopefully this will bring another reason to move to KIE Server and start using its full potential :)


2016/08/10

KIE Server (jBPM extension) brings document support

Another article in the KIE Server series ... and what's coming in version 7. This time it's about documents and their use in business processes.

Business processes quite frequently need collaboration around documents (in any meaning of the word), thus it is important to allow users to upload and download documents. jBPM provided document support in version 6 already, though it was not exposed on KIE Server for remote interaction.

jBPM 7 will come with support for documents in KIE Server - covering both use within a process context and outside it - direct interaction with the underlying document storage.


jBPM and documents

To quickly recap how document support is provided by jBPM:

Documents are considered process variables and, as such, the pluggable persistence strategies apply to them. Persistence strategies allow various backend storage to be provided for process variables, instead of always being put together with the process instance into the jBPM data base.

A document is represented by the org.jbpm.document.service.impl.DocumentImpl type and comes with a dedicated marshaling strategy to deal with this type of variable - org.jbpm.document.marshalling.DocumentMarshallingStrategy. In turn, the marshaling strategy relies on org.jbpm.document.service.DocumentStorageService, which is an implementation specific to the document storage of your choice. jBPM comes with an out of the box implementation of the storage service that simply uses the file system as the underlying storage.
Users can implement an alternative DocumentStorageService to provide any kind of storage, like a data base, ECM, etc.

KIE Server in version 7 provides full support for the usage described above - including pluggable DocumentStorageService implementations - and extends it a bit more. It provides a REST API on top of org.jbpm.document.service.DocumentStorageService to allow easy access to the underlying documents without always having to go through process instance variables, though it still allows documents to be accessed from within a process instance.

KIE Server provides the following endpoints to deal with documents:

  • list documents - GET - http://host:port/kie-server/services/rest/server/documents
    • accept page and pageSize as query parameters to control paging
  • create document - POST - http://host:port/kie-server/services/rest/server/documents
    • DocumentInstance representation in one of the supported formats (JSON, JAXB, XStream)
  • delete document - DELETE - http://host:port/kie-server/services/rest/server/documents/{DOC_ID}
  • get document (including content) - GET - http://host:port/kie-server/services/rest/server/documents/{DOC_ID}
  • update document - PUT - http://host:port/kie-server/services/rest/server/documents
    • DocumentInstance representation in one of the supported formats (JSON, JAXB, XStream)
  • get content - GET - http://host:port/kie-server/services/rest/server/documents/{DOC_ID}/content

NOTE: Same operations are also supported over JMS.
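The same operations are also available through the KIE Server Java client. Below is a minimal sketch - the builder and method names follow the KIE Server 7 client API at the time of writing, so verify them against your version:

import java.nio.charset.StandardCharsets;
import java.util.Date;

import org.kie.server.api.model.instance.DocumentInstance;
import org.kie.server.client.DocumentServicesClient;
import org.kie.server.client.KieServicesClient;

public class DocumentsExample {

    public void uploadAndManage(KieServicesClient client) {
        DocumentServicesClient documentClient =
                client.getServicesClient(DocumentServicesClient.class);

        byte[] content = "sample content".getBytes(StandardCharsets.UTF_8);
        DocumentInstance document = DocumentInstance.builder()
                .name("hw-spec.txt")
                .size(content.length)
                .content(content)
                .lastModified(new Date())
                .build();

        // create returns the identifier of the stored document
        String documentId = documentClient.createDocument(document);

        // fetch it back (content included) and list the first page of documents
        documentClient.getDocument(documentId);
        documentClient.listDocuments(0, 10);

        // remove it when no longer needed - note this also removes it for process instances
        documentClient.deleteDocument(documentId);
    }
}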

Documents in action

Let's see this in action by going over a very simple use case:
  • Deploy the translations project (which is part of the jbpm-playground repository) to KIE Server
  • Create a new translation process instance from the workbench
  • Create a new translation process instance from a JavaScript client - a simple web page
  • Download and remove documents from the JavaScript client



As can be seen in the above screencast, there is smooth integration between the workbench, KIE Server and the JavaScript client. Even better, KIE Server accepts all the data over a single endpoint - no separate upload of the document followed by the start of the process.

Important note - be really cautious when using the delete operation via the KIE Server documents endpoint, as it removes the document completely, meaning there will be no access to it from the process instance (as presented in the screencast); moreover, the process instance won't be aware of it, as it considers the document storage an external system.

Sample source

For those who would like to try it out themselves, here is the JavaScript client (a simple web page) that was used for the example screencast. Please make sure you drop it on the KIE Server instance to not run into CORS related issues.

<html>
<head>
<title>Send document to KIE Server</title>
<style type="text/css">
table.gridtable {
 font-family: verdana,arial,sans-serif;
 font-size:11px;
 color:#333333;
 border-width: 1px;
 border-color: #666666;
 border-collapse: collapse;
}
table.gridtable th {
 border-width: 1px;
 padding: 8px;
 border-style: solid;
 border-color: #666666;
 background-color: #dedede;
}
table.gridtable td {
 border-width: 1px;
 padding: 8px;
 border-style: solid;
 border-color: #666666;
 background-color: #ffffff;
}
</style>


<script type='text/javascript'>
  var user = "";
  var pwd = "";
  var startTransalationProcessURL = "http://localhost:8230/kie-server/services/rest/server/containers/translations/processes/translations/instances";
  var documentsURL = "http://localhost:8230/kie-server/services/rest/server/documents";

  var srcData = null;
  var fileName = null;
  var fileSize = null;
  function encodeImageFileAsURL() {

    var filesSelected = document.getElementById("inputFileToLoad").files;
    if (filesSelected.length > 0) {
      var fileToLoad = filesSelected[0];
      fileName = fileToLoad.name;
      fileSize = fileToLoad.size;
      var fileReader = new FileReader();

      fileReader.onload = function(fileLoadedEvent) {
        var local = fileLoadedEvent.target.result; // <--- data: base64
        srcData = local.replace(/^data:.*\/.*;base64,/, "");


        console.log("Converted Base64 version is " + srcData);
      }
      fileReader.readAsDataURL(fileToLoad);
    } else {
      alert("Please select a file");
    }
  }

  function startTransalationProcess() {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', startTransalationProcessURL);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.setRequestHeader ("Authorization", "Basic " + btoa(user + ":" + pwd));
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 201) {
            loadDocuments();
        }
    }
    var uniqueId = generateUUID();
    xhr.send('{' +
      '"uploader_name" : " '+ document.getElementById("inputName").value +'",' +
      '"uploader_mail" : " '+ document.getElementById("inputEmail").value +'", ' +
      '"original_document" : {"DocumentImpl":{"identifier":"'+uniqueId+'","name":"'+fileName+'","link":"'+uniqueId+'","size":'+fileSize+',"lastModified":'+Date.now()+',"content":"' + srcData + '","attributes":null}}}');
  }


    function deleteDoc(docId) {
      var xhr = new XMLHttpRequest();
      xhr.open('DELETE', documentsURL +"/" + docId);
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.setRequestHeader ("Authorization", "Basic " + btoa(user + ":" + pwd));
      xhr.onreadystatechange = function () {
          if (xhr.readyState == 4 && xhr.status == 204) {
            loadDocuments();
          }
      }

      xhr.send();
    }

  function loadDocuments() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', documentsURL);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.setRequestHeader ("Authorization", "Basic " + btoa(user + ":" + pwd));
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 200) {
          var divContainer = document.getElementById("docs");
          divContainer.innerHTML = "";
          var documentListJSON = JSON.parse(xhr.responseText);
          var documentsJSON = documentListJSON['document-instances'];
          if (documentsJSON.length == 0) {
            return;
          }
          var col = [];
          for (var i = 0; i < documentsJSON.length; i++) {
              for (var key in documentsJSON[i]) {
                  if (col.indexOf(key) === -1) {
                      col.push(key);
                  }
              }
          }
          var table = document.createElement("table");
          table.classList.add("gridtable");

          var tr = table.insertRow(-1);                   

          for (var i = 0; i < col.length; i++) {
              var th = document.createElement("th");      
              th.innerHTML = col[i];
              tr.appendChild(th);
          }
          var downloadth = document.createElement("th");
          downloadth.innerHTML = 'Download';
          tr.appendChild(downloadth);
          var deleteth = document.createElement("th");
          deleteth.innerHTML = 'Delete';
          tr.appendChild(deleteth);

          for (var i = 0; i < documentsJSON.length; i++) {

              tr = table.insertRow(-1);

              for (var j = 0; j < col.length; j++) {
                  var tabCell = tr.insertCell(-1);
                  tabCell.innerHTML = documentsJSON[i][col[j]];
              }
              var tabCellGet = tr.insertCell(-1);
              tabCellGet.innerHTML = '<button id="button" onclick="window.open(\'' + documentsURL +'/'+documentsJSON[i]['document-id']+'/content\')">Download</button>';

              var tabCellDelete = tr.insertCell(-1);
              tabCellDelete.innerHTML = '<button id="button" onclick="deleteDoc(\''+documentsJSON[i]['document-id']+'\')">Delete</button>';
          }

          divContainer.appendChild(table);
        }
    }

    xhr.send();
  }


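  // Generates a pseudo-random, version 4 style UUID used as the document identifier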
  function generateUUID() {
    var d = new Date().getTime();
    var uuid = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
        var r = (d + Math.random()*16)%16 | 0;
        d = Math.floor(d/16);
        return (c=='x' ? r : (r&0x3|0x8)).toString(16);
    });
    return uuid;
  }
</script>
</head>
<body>
  <h2>Start translation process</h2>
  Name: <input name="name" type="text" id="inputName"/><br/><br/>
  Email: <input name="email" type="text" id="inputEmail"/><br/><br/>
  Document to translate: <input id="inputFileToLoad" type="file" onchange="encodeImageFileAsURL();" /><br/><br/>
  <input name="send" type="submit" onclick="startTransalationProcess();" /><br/><br/>
<hr/>
  <h2>Available documents</h2>
  <button id="button" onclick="loadDocuments()">Load documents!</button>
  <div id="docs">

  </div>
</body>
</html>

And as usual, share your feedback as that is the best way to get improvements that are important to you.

2016/07/27

Knowledge Driven Microservices

In the area of microservices more and more people are looking into lightweight and domain-focused IT solutions. Regardless of how you look at microservices, the overall idea is to make sure each service does isolated work and doesn't cross the border of the domain it should cover.
That way of thinking made me look into how to leverage the KIE (Knowledge Is Everything) platform to bring in the business aspect and reuse business assets you might already have - that is:
  • business rules
  • business process
  • common data model
  • and possibly more... depending on your domain
In this article I'd like to share the idea that I presented at DevConf.cz and JBCNConf this year. 

Since there is huge support for microservice architecture out there in the open source world, I'd like to present one set of tools you can use to build knowledge driven microservices, but keep in mind that these are just tools that might (and most likely will) be replaced in the future.

Tools box

jBPM - for process management
Drools - for rules evaluation
Vert.x - for complete application infrastructure binding all together
Hazelcast - for cluster support for distributed environments 

Use case - sample

The overall use case was to provide a basic bank loan solution that processes loan applications. The IT solution is partitioned into the following services:

  • Apply for loan service
    • Main entry point to the loan request system
    • Allows the applicant to submit a loan request that consists of: 
      • applicant name 
      • monthly income 
      • loan amount 
      • length in years to pay off the loan

  • Evaluate loan service
    • Rule based evaluation of incoming loan applications (a minimal evaluation sketch in Java follows this service list)
      • Low risk loan 
        • when the loan request is for an amount lower than 1000 it's considered low risk and thus is auto approved 
      • Basic loan 
        • when the amount is higher than 1000 and the length is less than 5 years - requires a clerk approval process 
      • Long term loan 
        • when the amount is higher than 1000 and the length is more than 5 years - requires manager approval and might need special terms to be established
  • Process loan service
    • Depending on the classification of the loan, different bank departments/teams will be involved in decision making about a given loan request 
      • Basic loans department 
        • performs a background check on the applicant and either approves or rejects the loan 
      • Long term loans department 
        • requires management approval to ensure a long term commitment can be accepted for a given application.
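
To make the evaluation step a bit more tangible, below is a minimal sketch (in Java) of how a service could ask the rules packaged in a kjar to classify a loan request. It is not taken from the demo project - the LoanRequest fields, the class names and the "loan-evaluation-session" ksession name are assumptions made for illustration only.

  import org.kie.api.KieServices;
  import org.kie.api.runtime.KieContainer;
  import org.kie.api.runtime.StatelessKieSession;

  public class LoanEvaluator {

      // minimal fact class for illustration only; in the demo the model comes from the kjar
      public static class LoanRequest {
          public double amount;
          public int lengthInYears;
          public String classification; // set by the rules: low risk / basic / long term
          public boolean approved;      // low risk loans get auto approved
      }

      public LoanRequest evaluate(LoanRequest request) {
          // load the rules packaged in a kjar available on the classpath
          KieServices ks = KieServices.Factory.get();
          KieContainer kContainer = ks.getKieClasspathContainer();

          // "loan-evaluation-session" is an assumed ksession name defined in kmodule.xml
          StatelessKieSession kSession = kContainer.newStatelessKieSession("loan-evaluation-session");

          // the rules inspect amount and length and set classification/approved accordingly
          kSession.execute(request);
          return request;
      }
  }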

Architecture

  • Each service is completely self contained 
  • Knowledge driven services are deployed with a kjar - a knowledge archive that provides the business assets (processes, rules, etc.)
  • Services talk to each other by exchanging data - business data (illustrated in the sketch below)
  • Services can come and go as we like - dynamically increasing or decreasing the number of instances of a given service 
  • no API in the strict sense of its meaning - the API is the data
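
As a rough illustration of that last point - services exchanging plain business data over the infrastructure layer - here is a minimal Vert.x verticle sketch. The event bus addresses and JSON field names are assumptions made up for this example, not the ones used in the demo code.

  import io.vertx.core.AbstractVerticle;
  import io.vertx.core.json.JsonObject;

  public class EvaluateLoanVerticle extends AbstractVerticle {

      @Override
      public void start() {
          // consume loan applications published as plain business data (JSON) on the event bus;
          // "loan.apply" and "loan.evaluated" are illustrative addresses only
          vertx.eventBus().<JsonObject>consumer("loan.apply", message -> {
              JsonObject loanRequest = message.body();

              // in the demo the classification is done by the kjar based rules (see the sketch above);
              // only the low risk case is spelled out here for brevity
              String classification = loanRequest.getDouble("amount", 0.0) < 1000 ? "low risk" : "basic";

              // the evaluated request - still just business data - is published for the next service
              vertx.eventBus().publish("loan.evaluated", loanRequest.copy().put("classification", classification));
          });
      }
  }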

More if you're interested...

Complete presentation from JBCNConf and video from DevConf.cz conference.



Presentation at DevConf.cz



In case you'd like to explore the code or run it yourself, have a look at the complete source code of this demo on GitHub.

2016/07/14

jBPM v7 - workbench and kie server integration

As part of the ongoing development of jBPM version 7 I'd like to give a short preview of one of the changes that are coming. This in particular relates to how workbench and kie server are integrated. In version 6 (when kie server was introduced with BPM capability) we had two independent execution servers:

  • one embedded in workbench 
  • another in kie server
In many cases this caused a bit of confusion, as users were expecting to see processes (and tasks, jobs etc.) created in kie server via the workbench UI. To achieve that, users were pointing workbench and kie server to the same database, which led to a number of unexpected issues as these two were designed in different ways and were not intended to work in parallel.

jBPM version 7 is addressing this problem in two ways:
  • removes duplication when it comes to execution servers - only kie server will be available - no execution in workbench
  • integrates workbench with kie server(s) so its runtime views (e.g. process instances, definitions, tasks) can be used with kie server as backend

While the first point is rather clear and obvious, the second takes a bit to see in its full power. It's not only about letting users use the workbench UI to start processes or interact with user tasks; it actually allows the flexible architecture of kie server to be fully utilized (more on kie server can be found in the previous blog series).
In version 6.4 a new Server Management UI was introduced to allow easy and efficient management of kie servers. It came with the concept of server templates - that is, a definition of the runtime environment regardless of how many physical instances are going to run with this definition. That in turn allows administrators to define a partitioned environment where different server templates represent different parts of the organization or domain coverage.

Server template consists of:
  • name
  • list of assigned containers (kjars)
  • list of available capabilities
Once any kie server starts and connects to the workbench (the workbench acts as controller) it will be presented in server management under remote servers. Remote servers reflect the current state of the controller's knowledge - meaning the list only changes upon two events triggered by kie servers:
  • start of the kie server in managed mode - which connects to controller and registers itself as remote server
  • shutdown of the kie server in managed mode - which notifies controller to unregister itself from remote servers
With this setup users can create as many server templates as they need. Moreover, each server template can be backed by as many kie server instances as it needs. That gives complete control over how individual server templates (and by that, parts of your business domain) can scale individually.

So enough of the introduction, let's see how it got integrated for execution. Since there is no execution server embedded in workbench any more, all execution will be carried out by kie server(s). To accomplish this, workbench internally relies on two parts:
  • server management (that maintains server templates) to know what is available and where
  • the kie server client to interact with remote servers (a minimal client sketch is shown below)
Server management is used to collect information about:
  • server templates whenever a project is to be deployed - regardless of whether there are any remote servers or not - again this is just an update to the server template definition
  • remote servers whenever kie server interaction is required - start process, get process instances, etc
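
To give an idea what this interaction looks like under the covers, here is a minimal sketch of using the kie-server-client API to start a process on a remote server - roughly the kind of call the workbench issues on users' behalf. The server URL, credentials, container id and process id are placeholders; replace them with values from your environment.

  import java.util.HashMap;
  import java.util.Map;

  import org.kie.server.api.marshalling.MarshallingFormat;
  import org.kie.server.client.KieServicesClient;
  import org.kie.server.client.KieServicesConfiguration;
  import org.kie.server.client.KieServicesFactory;
  import org.kie.server.client.ProcessServicesClient;

  public class RemoteProcessStarter {

      public static void main(String[] args) {
          // placeholder location and credentials - use the values of your remote kie server
          KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                  "http://localhost:8230/kie-server/services/rest/server", "user", "password");
          config.setMarshallingFormat(MarshallingFormat.JSON);

          KieServicesClient client = KieServicesFactory.newKieServicesClient(config);
          ProcessServicesClient processClient = client.getServicesClient(ProcessServicesClient.class);

          // container id and process id are placeholders for a deployed kjar and one of its processes
          Map<String, Object> params = new HashMap<>();
          params.put("approved", Boolean.FALSE);
          Long processInstanceId = processClient.startProcess("my-container", "my.process.id", params);
          System.out.println("Started process instance " + processInstanceId);
      }
  }
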
NOTE: in case multiple server templates are available, a selection box will be shown on the screen so users can decide which server template they are going to interact with. Again, users do not care about individual remote servers as they represent the same setup, so it's not important which server instance will be used for a given request, as long as one of them is available.
Server templates that do not have any remote servers available won't be visible on the list of server templates.
And when there is only one server template, selection is not required and that one becomes the default - for both deployment and runtime operations.


In the top right corner users can find the server template selection button in case more than one server template is available. Once selected, the choice is preserved across screen navigation so it needs to be made only once.

Build & Deploy has been updated to take advantage of the new server management as well. Whenever users decide to build and deploy:
  • if there is only one server template:
    • it gets selected as default
    • artifact name is used as container id
    • by default container is started
  • if more than one server template is available the user is presented with an additional popup window to select:
    • container id
    • server template
    • if container should be started or not

That concludes the introduction and basic integration between kie server and workbench. Let's now look at what's included and what's excluded from the workbench point of view (or the differences that users might notice when switching from 6 to 7). 

First of all, the majority of runtime operations are supposed to work exactly the same way; that includes:
  • Process definition view
    • process definition list
    • process definition details
    • Operations
      • start process instance (including forms)
      • visualize process definition diagram


  • Process instance view
    • process instance list (both predefined and custom filters)
    • process instance details
    • process instance variables
    • process instance documents
    • process instance log
    • operations
      • start process instance (including forms)
      • signal process instance (including bulk)
      • abort process instance (including bulk)
      • visualize process instance progress via diagram

  • Tasks instance view
    • task list (both predefined and custom filters)
    • task instance details (including forms)
    • life cycle operations of a task (e.g. claim, start, complete)
    • task assignment
    • task comments
    • task log

  • Jobs view
    • jobs list (both predefined and custom filters)
    • job details
    • create new job
    • depending on status cancel or requeue jobs


  • Dashboards view
    • out of the box dashboards for processes and tasks


All of these views retrieve data from the remote kie server, so there is no need for workbench to have any data sources defined. Even the default one that comes with WildFly is not needed - not even for dashboards :) With that we have a very lightweight workbench that comes with excellent authoring and management capabilities.

That leads us to the last section in this article, which explains what changed and what was removed. 

Let's start with changes that are worth noting:
  • asynchronous processing of authoring REST operations has been moved from the jbpm executor to the UberFire async service - the biggest change is in a clustered setup, where only the cluster member that accepted a given request will know its status
  • build & deploy from the project explorer is unified - regardless of whether the project is managed or unmanaged - there are only two options
    • compile - which does an in-memory project build only
    • build and deploy - which includes build, deploy to maven and provision to server template
Now moving on to what was removed:

  • since there is no jbpm runtime embedded in workbench, there are no REST or JMS interfaces for jBPM; the REST interfaces for the authoring part are unchanged (create org unit, repository, compile project etc.)
  • job settings are no longer available as they do not make much sense in the new (distributed) setup - the configuration of kie servers is currently done on the server instance level
  • ad hoc tasks are temporarily removed and will be reintroduced as part of case management support, where they actually belong
  • asset management is removed in the form it was known in v6 - the parts that are kept are:
    • managed repository that allows single or multi module projects
    • automatic branch creation - dev and release
    • repository structure management where users can manage their modules and create additional branches
    • project explorer still supports switch between branches as it used to
  • asset management won't have support for asset promotion, build of projects or release of projects
  • send task reminders - it was sort of a hidden feature and more of an admin one, so it's going to be added as part of the admin interface for workbench/kie server


Enough talking (... writing/reading, depending on your point of view), it's time to see it in action. Following are two screencasts showing the different use cases covered.

  • Case 1 - from zero to full speed execution
    • Create new repository and project
    • Create data object
    • Create process definition with user task and forms that uses created data object
    • Build and deploy the project
    • Start process instance(s)
    • Work on tasks
    • Visualize the progress of process instance
    • Monitor via dashboards



  • Case 2 - from existing project to document capable processes
    • Server template 1
    • Deploy an already built project (translations)
    • Create process instance that includes document upload
    • Work on tasks
    • Visualize process instance details (including documents)

    • Server template 2
    • Deploy an already built project (async-example)
    • Create process instance to check weather in US based on zip code
    • Work on tasks 
    • Visualize process instance progress - this project does not have image attached so it comes blank
    • Monitor via dashboards


Before we end, a short note for those who want to try it out. Since the integration with kie server does not require any additional login (workbench uses the logged in user), a small configuration change is needed in WildFly:
The workbench comes with an additional login module (org.kie.security.jaas.KieLoginModule) as part of kie-security.jar. To enable smooth integration when it comes to authentication, declare this login module in the standalone.xml of your WildFly, so the default "other" security domain looks like this:

  <security-domain name="other" cache-type="default">
    <authentication>
      <login-module code="Remoting" flag="optional">
        <module-option name="password-stacking" value="useFirstPass"/>
      </login-module>
      <login-module code="RealmDirect" flag="required">
        <module-option name="password-stacking" value="useFirstPass"/>
      </login-module>
      <login-module code="org.kie.security.jaas.KieLoginModule" flag="optional"
                    module="deployment.kie-wb.war"/>
    </authentication>
  </security-domain>

The important element is the module attribute (deployment.kie-wb.war) as it might differ between environments - it relies on the actual file name of the kie-wb.war. Replace it to match the name used in your environment.

NOTE: this is only required for kie wb and not for kie drools wb running on wildfly. Current state is that this works on Wildfly/EAP7 and Tomcat, WebSphere and WebLogic might come later...

That's all for now, comments and ideas are more than welcome.