2015/12/03

KIE Server: Extend KIE Server client with new capabilities

The last but not least part of the KIE Server extensions series is about extending the KIE Server Client with additional capabilities.

Use case

On top of what was built in the second article (adding a Mina transport to KIE Server), we need to add a KIE Server Client extension that allows us to use the Mina transport with the unified KIE Server Client API.

Before you start, create an empty Maven project (packaging jar) with the following dependencies:

<properties>
    <version.org.kie>6.4.0-SNAPSHOT</version.org.kie>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.kie.server</groupId>
      <artifactId>kie-server-api</artifactId>
      <version>${version.org.kie}</version>
    </dependency>

    <dependency>
      <groupId>org.kie.server</groupId>
      <artifactId>kie-server-client</artifactId>
      <version>${version.org.kie}</version>
    </dependency>
    
    <dependency>
      <groupId>org.drools</groupId>
      <artifactId>drools-compiler</artifactId>
      <version>${version.org.kie}</version>
    </dependency>
  </dependencies>

Design ServicesClient API interface

The first thing we need to do is decide what API should be exposed to the callers of our client. Since the Mina extension is built on top of the Drools one, let's provide the same capabilities as RuleServicesClient:

public interface RulesMinaServicesClient extends RuleServicesClient {

}

As you can notice, it simply extends the default RuleServicesClient interface and thus provides the same capabilities.

Why do we need an additional interface for it? Because client implementations are registered by their interface, and there can be only one implementation for a given interface.

Implement RulesMinaServicesClient

The next step is to actually implement the client, and here we are going to use plain socket based communication for simplicity's sake. We could use the Apache Mina client API, though this would introduce an additional dependency which we don't need for a sample implementation.

Note that this client implementation is very simple and in many cases can be improved, but the point here is to show how it can be implemented rather than to provide bulletproof code.

So a few aspects to remember when reviewing the implementation:
  • it relies on the default configuration from the KIE Server client and thus uses serverUrl as the place to provide host and port of the Mina server
  • it hardcodes JSON as the marshalling format
  • the decision whether the response is a success or a failure is based on checking if the received message is a JSON object (starts with {) - very simple, though it works for simple cases
  • it uses direct socket communication with a blocking API while waiting for the first line of the response and then reads all lines that are available
  • it does not use "stream mode", meaning it disconnects from the server after invoking a command
Here is the implementation:
public class RulesMinaServicesClientImpl implements RulesMinaServicesClient {
    
    private String host;
    private Integer port;
    
    private Marshaller marshaller;
    
    public RulesMinaServicesClientImpl(KieServicesConfiguration configuration, ClassLoader classloader) {
        String[] serverDetails = configuration.getServerUrl().split(":");
        
        this.host = serverDetails[0];
        this.port = Integer.parseInt(serverDetails[1]);
        
        this.marshaller = MarshallerFactory.getMarshaller(configuration.getExtraJaxbClasses(), MarshallingFormat.JSON, classloader);
    }

    public ServiceResponse<String> executeCommands(String id, String payload) {
        
        try {
            String response = sendReceive(id, payload);
            if (response.startsWith("{")) {
                return new ServiceResponse<String>(ResponseType.SUCCESS, null, response);
            } else {
                return new ServiceResponse<String>(ResponseType.FAILURE, response);
            }
        } catch (Exception e) {
            throw new KieServicesException("Unable to send request to KIE Server", e);
        }
    }

    public ServiceResponse<String> executeCommands(String id, Command<?> cmd) {
        try {
            String response = sendReceive(id, marshaller.marshall(cmd));
            if (response.startsWith("{")) {
                return new ServiceResponse<String>(ResponseType.SUCCESS, null, response);
            } else {
                return new ServiceResponse<String>(ResponseType.FAILURE, response);
            }
        } catch (Exception e) {
            throw new KieServicesException("Unable to send request to KIE Server", e);
        }
    }

    protected String sendReceive(String containerId, String content) throws Exception {
        
        // content - flatten the content to be single line
        content = content.replaceAll("\\n", "");
        
        Socket minaSocket = null;
        PrintWriter out = null;
        BufferedReader in = null;

        StringBuffer data = new StringBuffer();
        try {
            minaSocket = new Socket(host, port);
            out = new PrintWriter(minaSocket.getOutputStream(), true);
            in = new BufferedReader(new InputStreamReader(minaSocket.getInputStream()));
        
            // prepare and send data
            out.println(containerId + "|" + content);
            // wait for the first line
            data.append(in.readLine());
            // and then continue as long as it's available
            while (in.ready()) {
                data.append(in.readLine());
            }
            
            return data.toString();
        } finally {
            if (out != null) {
                out.close();
            }
            if (in != null) {
                in.close();
            }
            if (minaSocket != null) {
                minaSocket.close();
            }
        }
    }
}

Once we have the client interface and the client implementation, we need to make it available for the KIE Server Client to find it.

Implement KieServicesClientBuilder

org.kie.server.client.helper.KieServicesClientBuilder is the glue interface that allows providing additional client APIs to the generic KIE Server Client infrastructure. This interface has two methods:
  • getImplementedCapability - which must match the server capability (extension) it is going to use
  • build - which is responsible for providing a map of client implementations, where the key is the interface and the value a fully initialized implementation
Here is a simple implementation of the client builder for this use case:

public class MinaClientBuilderImpl implements KieServicesClientBuilder {

    public String getImplementedCapability() {
        return "BRM-Mina";
    }

    public Map<Class<?>, Object> build(KieServicesConfiguration configuration, ClassLoader classLoader) {
        Map<Class<?>, Object> services = new HashMap<Class<?>, Object>();

        services.put(RulesMinaServicesClient.class, new RulesMinaServicesClientImpl(configuration, classLoader));

        return services;
    }

}

Make it discoverable

Same story as for the other extensions... once we have everything that needs to be implemented, it's time to make it discoverable so the KIE Server Client can find and register this extension at runtime. Since the KIE Server Client is based on the Java SE ServiceLoader mechanism, we need to add one file into our extension jar file:

META-INF/services/org.kie.server.client.helper.KieServicesClientBuilder

And the content of this file is a single line that represents the fully qualified class name of our custom implementation of KieServicesClientBuilder.
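For example, if the builder class lived in a package such as org.kie.server.ext.mina.client (a hypothetical package name used only for illustration), the content of the file would be just this single line:

org.kie.server.ext.mina.client.MinaClientBuilderImpl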


How to use it

The usage scenario does not differ much from the regular KIE Server Client use case:
  • create client configuration
  • create client instance
  • get service client by type
  • invoke client methods
Here is an implementation that creates a KIE Server Client for RulesMinaServicesClient:

protected RulesMinaServicesClient buildClient() {
    KieServicesConfiguration configuration = KieServicesFactory.newRestConfiguration("localhost:9123", null, null);
    List<String> capabilities = new ArrayList<String>();
    // we need to add explicitly capabilities as the mina client does not respond to get server info requests.
    capabilities.add("BRM-Mina");
    
    configuration.setCapabilities(capabilities);
    configuration.setMarshallingFormat(MarshallingFormat.JSON);
    
    configuration.addJaxbClasses(extraClasses);
    
    KieServicesClient kieServicesClient =  KieServicesFactory.newKieServicesClient(configuration);
    
    RulesMinaServicesClient rulesClient = kieServicesClient.getServicesClient(RulesMinaServicesClient.class);
    
    return rulesClient;
}
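The extraClasses and commandsFactory variables used in this and the following snippet are not shown in the original listings; here is a minimal sketch of how they could be set up, assuming the Person class from the demo project:

// custom model classes the JSON marshaller should know about
Set<Class<?>> extraClasses = new HashSet<Class<?>>();
extraClasses.add(Person.class);

// command factory used to build insert/fire-all-rules commands
KieCommands commandsFactory = KieServices.Factory.get().getCommands();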
And here is how it is used to invoke operations on KIE Server via the Mina transport:

RulesMinaServicesClient rulesClient = buildClient();

List<Command<?>> commands = new ArrayList<Command<?>>();
BatchExecutionCommand executionCommand = commandsFactory.newBatchExecution(commands, "defaultKieSession");

Person person = new Person();
person.setName("mary");
commands.add(commandsFactory.newInsert(person, "person"));
commands.add(commandsFactory.newFireAllRules("fired"));

ServiceResponse<String> response = rulesClient.executeCommands(containerId, executionCommand);
Assert.assertNotNull(response);

Assert.assertEquals(ResponseType.SUCCESS, response.getType());

String data = response.getResult();

Marshaller marshaller = MarshallerFactory.getMarshaller(extraClasses, MarshallingFormat.JSON, this.getClass().getClassLoader());

ExecutionResultImpl results = marshaller.unmarshall(data, ExecutionResultImpl.class);
Assert.assertNotNull(results);

Object personResult = results.getValue("person");
Assert.assertTrue(personResult instanceof Person);

Assert.assertEquals("mary", ((Person) personResult).getName());
Assert.assertEquals("JBoss Community", ((Person) personResult).getAddress());
Assert.assertEquals(true, ((Person) personResult).isRegistered());

The complete code of this client extension can be found here.

And that's the last extension method for providing more features in KIE Server than are given out of the box.

Thanks for reading the entire series on KIE Server extensions - any and all feedback is welcome :)

KIE Server: Extend KIE Server with additional transport

There might be some cases where the existing transports in KIE Server won't be sufficient, for whatever reason:

  • not fast enough
  • difficult to deal with string based data formats (JSON, XML)
  • you name it...

so there might be a need to build a custom transport to overcome this limitation.

Use case

Add an additional transport to KIE Server that allows using the Drools capabilities. For this example we will use Apache Mina as the underlying transport framework, and we're going to exchange string based data that will still rely on the existing marshalling operations. For simplicity's sake we support only the JSON format.

Before you start, create an empty Maven project (packaging jar) with the following dependencies:

<properties>
    <version.org.kie>6.4.0-SNAPSHOT</version.org.kie>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.kie</groupId>
      <artifactId>kie-api</artifactId>
      <version>${version.org.kie}</version>
    </dependency>
    <dependency>
      <groupId>org.kie</groupId>
      <artifactId>kie-internal</artifactId>
      <version>${version.org.kie}</version>
    </dependency>

    <dependency>
      <groupId>org.kie.server</groupId>
      <artifactId>kie-server-api</artifactId>
      <version>${version.org.kie}</version>
    </dependency>
    <dependency>
      <groupId>org.kie.server</groupId>
      <artifactId>kie-server-services-common</artifactId>
      <version>${version.org.kie}</version>
    </dependency>
    <dependency>
      <groupId>org.kie.server</groupId>
      <artifactId>kie-server-services-drools</artifactId>
      <version>${version.org.kie}</version>
    </dependency>

    <dependency>
      <groupId>org.drools</groupId>
      <artifactId>drools-core</artifactId>
      <version>${version.org.kie}</version>
    </dependency>
    <dependency>
      <groupId>org.drools</groupId>
      <artifactId>drools-compiler</artifactId>
      <version>${version.org.kie}</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>1.7.2</version>
    </dependency>

    <dependency>
      <groupId>org.apache.mina</groupId>
      <artifactId>mina-core</artifactId>
      <version>2.0.9</version>
    </dependency>

  </dependencies>

Implement KieServerExtension

The main part of this implementation is done by implementing org.kie.server.services.api.KieServerExtension, which is the main KIE Server extension interface. This interface has a number of methods, and which of them need a real implementation depends on the actual needs.

In our case we don't need to do anything when a container is created or disposed, as we simply extend the Drools extension and rely on the complete setup in that component. For this example we are mostly interested in implementing:
  • the init method
  • the destroy method
In these two methods we are going to manage the life cycle of the Apache Mina server. For reference, here is the full interface:
public interface KieServerExtension {

    boolean isActive();

    void init(KieServerImpl kieServer, KieServerRegistry registry);

    void destroy(KieServerImpl kieServer, KieServerRegistry registry);

    void createContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters);

    void disposeContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters);

    List<Object> getAppComponents(SupportedTransports type);

    <T> T getAppComponents(Class<T> serviceType);

    String getImplementedCapability();

    List<Object> getServices();

    String getExtensionName();

    Integer getStartOrder();
}

Next, there are a few methods that describe the extension:
  • getImplementedCapability - should instruct what kind of capability is covered by this extension; note that the capability should be unique within KIE Server
  • getExtensionName - human readable name of this extension
  • getStartOrder - defines when a given extension should be started; important for extensions that depend on other extensions, like in this case where it depends on Drools (whose start order is set to 0), so our extension should start after the Drools one and is thus set to 20
The remaining methods are left with a standard implementation to fulfill the interface requirements.

Here is the implementation of the KIE Server extension based on Apache Mina:

public class MinaDroolsKieServerExtension implements KieServerExtension {

    private static final Logger logger = LoggerFactory.getLogger(MinaDroolsKieServerExtension.class);

    public static final String EXTENSION_NAME = "Drools-Mina";

    private static final Boolean disabled = Boolean.parseBoolean(System.getProperty("org.kie.server.drools-mina.ext.disabled", "false"));
    private static final String MINA_HOST = System.getProperty("org.kie.server.drools-mina.ext.host", "localhost");
    private static final int MINA_PORT = Integer.parseInt(System.getProperty("org.kie.server.drools-mina.ext.port", "9123"));
    
    // taken from dependency - Drools extension
    private KieContainerCommandService batchCommandService;
    
    // mina specific 
    private IoAcceptor acceptor;
    
    public boolean isActive() {
        return disabled == false;
    }

    public void init(KieServerImpl kieServer, KieServerRegistry registry) {
        
        KieServerExtension droolsExtension = registry.getServerExtension("Drools");
        if (droolsExtension == null) {
            logger.warn("No Drools extension available, quiting...");
            return;
        }
        
        List<Object> droolsServices = droolsExtension.getServices();
        for( Object object : droolsServices ) {
            // in case given service is null (meaning was not configured) continue with next one
            if (object == null) {
                continue;
            }
            if( KieContainerCommandService.class.isAssignableFrom(object.getClass()) ) {
                batchCommandService = (KieContainerCommandService) object;
                continue;
            } 
        }
        if (batchCommandService != null) {
            acceptor = new NioSocketAcceptor();
            acceptor.getFilterChain().addLast( "codec", new ProtocolCodecFilter( new TextLineCodecFactory( Charset.forName( "UTF-8" ))));
    
            acceptor.setHandler( new TextBasedIoHandlerAdapter(batchCommandService) );
            acceptor.getSessionConfig().setReadBufferSize( 2048 );
            acceptor.getSessionConfig().setIdleTime( IdleStatus.BOTH_IDLE, 10 );
            try {
                acceptor.bind( new InetSocketAddress(MINA_HOST, MINA_PORT) );
                
                logger.info("{} -- Mina server started at {} and port {}", toString(), MINA_HOST, MINA_PORT);
            } catch (IOException e) {
                logger.error("Unable to start Mina acceptor due to {}", e.getMessage(), e);
            }
    
        }
    }

    public void destroy(KieServerImpl kieServer, KieServerRegistry registry) {
        if (acceptor != null) {
            acceptor.dispose();
            acceptor = null;
        }
        logger.info("{} -- Mina server stopped", toString());
    }

    public void createContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters) {
        // no op - it's already handled by Drools extension

    }

    public void disposeContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters) {
        // no op - it's already handled by Drools extension

    }

    public List<Object> getAppComponents(SupportedTransports type) {
        // nothing for supported transports (REST or JMS)
        return Collections.emptyList();
    }

    public <T> T getAppComponents(Class<T> serviceType) {

        return null;
    }

    public String getImplementedCapability() {
        return "BRM-Mina";
    }

    public List<Object> getServices() {
        return Collections.emptyList();
    }

    public String getExtensionName() { 
        return EXTENSION_NAME;
    }

    public Integer getStartOrder() {
        return 20;
    }

    @Override
    public String toString() {
        return EXTENSION_NAME + " KIE Server extension";
    }
}
As can be noticed, the main part of the implementation is in the init method, which is responsible for collecting services from the Drools extension and bootstrapping the Apache Mina server.
Worth noting is the TextBasedIoHandlerAdapter class that is used as the handler on the Mina server and in essence reacts to incoming requests.

Implement Apache Mina handler

Here is the implementation of the handler class that receives a text message and executes it on the Drools service:

public class TextBasedIoHandlerAdapter extends IoHandlerAdapter {
    
    private static final Logger logger = LoggerFactory.getLogger(TextBasedIoHandlerAdapter.class);

    private KieContainerCommandService batchCommandService;
    
    public TextBasedIoHandlerAdapter(KieContainerCommandService batchCommandService) {
        this.batchCommandService = batchCommandService;
    }

    @Override
    public void messageReceived( IoSession session, Object message ) throws Exception {
        String completeMessage = message.toString();
        logger.debug("Received message '{}'", completeMessage);
        if( completeMessage.trim().equalsIgnoreCase("quit") || completeMessage.trim().equalsIgnoreCase("exit") ) {
            session.close(false);
            return;
        }

        String[] elements = completeMessage.split("\\|");
        logger.debug("Container id {}", elements[0]);
        try {
            ServiceResponse<String> result = batchCommandService.callContainer(elements[0], elements[1], MarshallingFormat.JSON, null);
            
            if (result.getType().equals(ServiceResponse.ResponseType.SUCCESS)) {
                session.write(result.getResult());
                logger.debug("Successful message written with content '{}'", result.getResult());
            } else {
                session.write(result.getMsg());
                logger.debug("Failure message written with content '{}'", result.getMsg()); 
            }
        } catch (Exception e) {
            logger.error("Error while processing message '{}'", completeMessage, e);
        }
    }
}

A few details about the handler implementation:
  • each incoming request is a single line, so before submitting anything to it make sure it's a single line
  • the container id needs to be passed in that single line, so this handler expects the following format (see the example after this list):
    • containerID|payload
  • the response is sent the way it is produced by the marshaller, and that can be multiple lines
  • the handler allows "stream mode", which lets you send commands without disconnecting from the KIE Server session; to quit stream mode, send either exit or quit
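For example, a complete request to a container named demo could look like this on the wire (the container id, a pipe character and the single-line JSON payload), as also shown in the telnet session later in this article:

demo|{"lookup":"defaultKieSession","commands":[{"insert":{"object":{"org.jbpm.test.Person":{"name":"john","age":25}}}},{"fire-all-rules":""}]}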

Make it discoverable

Same story as for the REST extension... once we have everything that needs to be implemented, it's time to make it discoverable so KIE Server can find and register this extension at runtime. Since KIE Server is based on the Java SE ServiceLoader mechanism, we need to add one file into our extension jar file:

META-INF/services/org.kie.server.services.api.KieServerExtension

And the content of this file is a single line that represents the fully qualified class name of our custom implementation of KieServerExtension.


The last step is to build this project (which will result in a jar file) and copy the result into:
 kie-server.war/WEB-INF/lib

Since this extension depends on Apache Mina, we need to copy mina-core-2.0.9.jar into kie-server.war/WEB-INF/lib as well.

Usage example

Clone this repository and build the kie-server-demo project. Once you build it, you will be able to deploy it to KIE Server either directly using the KIE Server management REST API or via the KIE Workbench controller.

Once it is deployed and KIE Server is started, you should find in the logs that the new KIE Server extension started:
Drools-Mina KIE Server extension -- Mina server started at localhost and port 9123
Drools-Mina KIE Server extension has been successfully registered as server extension

That means we can now interact with our Apache Mina based transport in KIE Server. So let's give it a go... we could write code to interact with the Mina server, but to avoid another coding exercise let's use... wait for it... telnet :)

Start telnet and connect to KIE Server on port 9123:
telnet 127.0.0.1 9123

Once connected you can easily interact with the alive and kicking KIE Server:
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
demo|{"lookup":"defaultKieSession","commands":[{"insert":{"object":{"org.jbpm.test.Person":{"name":"john","age":25}}}},{"fire-all-rules":""}]}
{
  "results" : [ {
    "key" : "",
    "value" : 1
  } ],
  "facts" : [ ]
}
demo|{"lookup":"defaultKieSession","commands":[{"insert":{"object":{"org.jbpm.test.Person":{"name":"john","age":25}}}},{"fire-all-rules":""}]}
{
  "results" : [ {
    "key" : "",
    "value" : 1
  } ],
  "facts" : [ ]
}
demo|{"lookup":"defaultKieSession","commands":[{"insert":{"object":{"org.jbpm.test.Person":{"name":"maciek","age":25}}}},{"fire-all-rules":""}]}
{
  "results" : [ {
    "key" : "",
    "value" : 1
  } ],
  "facts" : [ ]
}
exit
Connection closed by foreign host.

where:

  • the lines starting with demo| are the request messages
  • the JSON blocks are the responses
  • exit is the quit message


In the server side logs you will see something like this:
16:33:40,206 INFO  [stdout] (NioProcessor-2) Hello john
16:34:03,877 INFO  [stdout] (NioProcessor-2) Hello john
16:34:19,800 INFO  [stdout] (NioProcessor-2) Hello maciek

This illustrates the stream mode, where we simply type in command after command without disconnecting from the KIE Server.

This concludes this exercise; the complete code for it can be found here.

KIE Server: Extend existing server capability with extra REST endpoint

The first and most likely most frequently required extension to KIE Server is to extend the REST API of an already available extension - Drools or jBPM. There are a few simple steps that need to be done to provide extra endpoints in KIE Server.

Our use case

We are going to extend the Drools extension with an additional endpoint that will do a very simple thing - expose a single endpoint that accepts a list of facts to be inserted, automatically calls fire all rules and retrieves all objects from the ksession.
The endpoint will be bound to the following path:
server/containers/instances/{id}/ksession/{ksessionId}

where:
  • id is the container identifier
  • ksessionId is the name of the ksession within the container to be used

Before you start, create an empty Maven project (packaging jar) with the following dependencies:

 
 <properties>
    <version.org.kie>6.4.0-SNAPSHOT</version.org.kie>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.kie</groupId>
      <artifactId>kie-api</artifactId>
      <version>${version.org.kie}</version>
    </dependency>
    <dependency>
      <groupId>org.kie</groupId>
      <artifactId>kie-internal</artifactId>
      <version>${version.org.kie}</version>
    </dependency>

    <dependency>
      <groupId>org.kie.server</groupId>
      <artifactId>kie-server-api</artifactId>
      <version>${version.org.kie}</version>
    </dependency>
    <dependency>
      <groupId>org.kie.server</groupId>
      <artifactId>kie-server-services-common</artifactId>
      <version>${version.org.kie}</version>
    </dependency>
    <dependency>
      <groupId>org.kie.server</groupId>
      <artifactId>kie-server-services-drools</artifactId>
      <version>${version.org.kie}</version>
    </dependency>
    
    <dependency>
      <groupId>org.kie.server</groupId>
      <artifactId>kie-server-rest-common</artifactId>
      <version>${version.org.kie}</version>
    </dependency>

    <dependency>
      <groupId>org.drools</groupId>
      <artifactId>drools-core</artifactId>
      <version>${version.org.kie}</version>
    </dependency>
    <dependency>
      <groupId>org.drools</groupId>
      <artifactId>drools-compiler</artifactId>
      <version>${version.org.kie}</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>1.7.2</version>
    </dependency>

  </dependencies>

Implement KieServerApplicationComponentsService

The first step is to implement org.kie.server.services.api.KieServerApplicationComponentsService, which is responsible for delivering REST endpoints (aka resources) to the KIE Server infrastructure that will then be deployed on application start. This interface is very simple and has only one method:

Collection<Object> getAppComponents(String extension, 
                                    SupportedTransports type, Object... services)

This method is invoked by KIE Server when booting up and should return all resources that the REST container should deploy.

The implementation of this method should take the following into consideration:

  • it is called for all extensions and thus it is given the extension name, so custom implementations can decide whether a given extension is relevant for them or not
  • supported type - either REST or JMS - in our case it will be REST only
  • services - services dedicated to the given extension that can then be used as part of the custom extension - usually these are engine services
Here is a sample implementation that uses the Drools extension as its base (and by that its services):

 
public class CustomDroolsKieServerApplicationComponentsService implements KieServerApplicationComponentsService {

    private static final String OWNER_EXTENSION = "Drools";
    
    public Collection<Object> getAppComponents(String extension, SupportedTransports type, Object... services) {
        // skip calls from other than owning extension
        if ( !OWNER_EXTENSION.equals(extension) ) {
            return Collections.emptyList();
        }
        
        RulesExecutionService rulesExecutionService = null;
        KieServerRegistry context = null;
       
        for( Object object : services ) { 
            if( RulesExecutionService.class.isAssignableFrom(object.getClass()) ) { 
                rulesExecutionService = (RulesExecutionService) object;
                continue;
            } else if( KieServerRegistry.class.isAssignableFrom(object.getClass()) ) {
                context = (KieServerRegistry) object;
                continue;
            }
        }
        
        List<Object> components = new ArrayList<Object>(1);
        if( SupportedTransports.REST.equals(type) ) {
            components.add(new CustomResource(rulesExecutionService, context));
        }
        
        return components;
    }

}


So what can be seen here is that it only reacts to the Drools extension services and others are ignored. Next, it selects RulesExecutionService and KieServerRegistry from the available services. Last, it creates a new CustomResource (implemented in the next step) and returns it as part of the components list.

Implement REST resource

The next step is to implement the custom REST resource that will be used by KIE Server to provide the additional functionality. Here we build a simple, single method resource that:
  • uses the POST HTTP method
  • expects the following data to be given:
    • container id as a path argument
    • ksession id as a path argument
    • list of facts as the message payload
  • supports all KIE Server data formats:
    • XML - JAXB
    • JSON
    • XML - XStream
It will then unmarshal the payload into an actual List<?> and create a new InsertCommand for each item in the list. These inserts will then be followed by FireAllRules and GetObjects commands. All of them will be added as commands of a BatchExecutionCommand and used to call the rule engine. As simple as that. This is already available in KIE Server out of the box, but it requires the complete setup of the BatchExecutionCommand to be done on the client side. Not that it's not possible, but this extension is tailored for the simple pattern:
insert -> evaluate -> return

Here is what a simple implementation could look like:
 
@Path("server/containers/instances/{id}/ksession")
public class CustomResource {

    private static final Logger logger = LoggerFactory.getLogger(CustomResource.class);
    
    private KieCommands commandsFactory = KieServices.Factory.get().getCommands();

    private RulesExecutionService rulesExecutionService;
    private KieServerRegistry registry;

    public CustomResource() {

    }

    public CustomResource(RulesExecutionService rulesExecutionService, KieServerRegistry registry) {
        this.rulesExecutionService = rulesExecutionService;
        this.registry = registry;
    }
    
    @POST
    @Path("/{ksessionId}")
    @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response insertFireReturn(@Context HttpHeaders headers, 
            @PathParam("id") String id, 
            @PathParam("ksessionId") String ksessionId, 
            String cmdPayload) {
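        // note: getVariant, getContentType and createResponse are assumed to be statically
        // imported helper methods (e.g. from the common KIE Server REST utilities pulled in
        // by kie-server-rest-common) that resolve the requested content type and build
        // the JAX-RS Response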

        Variant v = getVariant(headers);
        String contentType = getContentType(headers);
        
        MarshallingFormat format = MarshallingFormat.fromType(contentType);
        if (format == null) {
            format = MarshallingFormat.valueOf(contentType);
        }
        try {    
            KieContainerInstance kci = registry.getContainer(id);
            
            Marshaller marshaller = kci.getMarshaller(format);
            
            List<?> listOfFacts = marshaller.unmarshall(cmdPayload, List.class);
            
            List<Command<?>> commands = new ArrayList<Command<?>>();
            BatchExecutionCommand executionCommand = commandsFactory.newBatchExecution(commands, ksessionId);
            
            for (Object fact : listOfFacts) {
                commands.add(commandsFactory.newInsert(fact, fact.toString()));
            }
            commands.add(commandsFactory.newFireAllRules());
            commands.add(commandsFactory.newGetObjects());
                
            ExecutionResults results = rulesExecutionService.call(kci, executionCommand);
                    
            String result = marshaller.marshall(results);
            
            
            logger.debug("Returning OK response with content '{}'", result);
            return createResponse(result, v, Response.Status.OK);
        } catch (Exception e) {
            // in case the operation failed, return a plain error message with an error status
            String response = "Execution failed with error : " + e.getMessage();
            logger.debug("Returning Failure response with content '{}'", response);
            return createResponse(response, v, Response.Status.INTERNAL_SERVER_ERROR);
        }

    }
}


Make it discoverable

Once we have everything that needs to be implemented, it's time to make it discoverable so KIE Server can find and register this extension at runtime. Since KIE Server is based on the Java SE ServiceLoader mechanism, we need to add one file into our extension jar file:

META-INF/services/org.kie.server.services.api.KieServerApplicationComponentsService

And the content of this file is a single line that represents the fully qualified class name of our custom implementation of KieServerApplicationComponentsService.


The last step is to build this project (which will result in a jar file) and copy the result into:
 kie-server.war/WEB-INF/lib

And that's all that is needed. Start KIE Server and you can then start interacting with your new REST endpoint that relies on the Drools extension.

Usage example

Clone this repository and build the kie-server-demo project. Once you build it, you will be able to deploy it to KIE Server either directly using the KIE Server management REST API or via the KIE Workbench controller.

Once deployed you can use the following to invoke the new endpoint:
URL: 
http://localhost:8080/kie-server/services/rest/server/containers/instances/demo/ksession/defaultKieSession

HTTP Method: POST
Headers:
Content-Type: application/json
Accept: application/json

Message payload:
[
  {
    "org.jbpm.test.Person": {
      "name": "john",
      "age": 25
    }
  },
  {
    "org.jbpm.test.Person": {
      "name": "mary",
      "age": 22
    }
  }
]

This is a simple list with two items representing people. Execute the request and you should see the following in the server log:
13:37:20,347 INFO  [stdout] (default task-24) Hello mary
13:37:20,348 INFO  [stdout] (default task-24) Hello john

And the response should contain objects retrieved after rule evaluation where each Person object has:
  • address set to 'JBoss Community'
  • registered flag set to true
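Putting the pieces above together, the call could be made for example with curl; this is just a sketch that assumes the default local setup and the kieserver/kieserver1! credentials used elsewhere in this series:

curl -u kieserver:kieserver1! \
     -X POST \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '[{"org.jbpm.test.Person":{"name":"john","age":25}},{"org.jbpm.test.Person":{"name":"mary","age":22}}]' \
     http://localhost:8080/kie-server/services/rest/server/containers/instances/demo/ksession/defaultKieSession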

With this sample use case we illustrated how easy it is to extend the REST API of KIE Server. The complete code for this extension can be found here.

Extending KIE Server capabilities

As a follow-up to the previous articles about KIE Server, I'd like to present the extensibility support provided by KIE Server. Let's quickly look at the KIE Server architecture...

Extensions overview

KIE Server is built around extensions; every piece of functionality is actually provided by an extension. Out of the box we have the following:

  • KIE Server extension - this is the default extension that provides the management capabilities of KIE Server - like creating or disposing containers etc.
  • Drools extension - this extension provides rules (BRMS) capabilities, e.g. it allows inserting facts and firing rules (among others)
  • jBPM extension - this extension provides process (BPMS) capabilities, e.g. business process execution, user tasks, async jobs
  • jBPM UI extension - an additional extension added in 6.4 that depends on the jBPM extension and provides UI related capabilities - forms and process images
With just these out of the box capabilities KIE Server provides quite a bit of coverage. But that's not all... extensions provide the capabilities, but these capabilities must somehow be exposed to the users. And here KIE Server comes with two transports by default:
  • REST
  • JMS
Due to the need to effectively manage extensions at runtime, these are packaged in different jar files. So looking at the out of the box extensions we have the following modules:
  • Drools extension
    • kie-server-services-drools
    • kie-server-rest-drools
  • jBPM extension
    • kie-server-services-jbpm
    • kie-server-rest-jbpm
  • jBPM UI extension
    • kie-server-services-jbpm-ui
    • kie-server-rest-jbpm-ui
All the above modules are automatically discovered at runtime and registered in KIE Server if they are enabled (which by default they are). Extensions can be disabled using system properties:
  • Drools extension
    • org.drools.server.ext.disabled = true
  • jBPM extension
    • org.jbpm.server.ext.disabled = true
  • jBPM UI extension
    • org.jbpm.ui.server.ext.disabled = true
But this is not all... the client API can also be extended by implementing custom interfaces. This is why there is an extra step needed to get a remote client:

kieServerClient.getServicesClient(Interface.class)

Why are extensions needed?

Let's now look at why someone would consider extending KIE Server:

  • First and foremost, there might be missing functionality which is not yet implemented in KIE Server but exists in the engines (process or rule engine).
    • REST extension
  • Another use case is that something should be done differently than it is done out of the box - different parameters and so on.
    • client extension
  • Last but not least, it should be possible to extend the transport coverage, meaning allow users to add other transports next to REST and JMS.
    • server extension
With this, users can first of all cover their requirements even if the out of the box KIE Server implementation does not provide the required functionality. Next, such extensions can be contributed for inclusion in the project or can be shipped as custom extensions available for other users.

This benefits both the project and its users, so I'd like to encourage everyone to look into the details, think about whether there is anything missing and, if so, try to solve it by building extensions.

Let's extend KIE Server capabilities

Three follow-up articles in this series provide the details on how to build KIE Server extensions.

Important note: While most of this work could already be achieved with 6.3.0.Final, I'd strongly recommend giving it a go with 6.4.0 (and thus all dependencies refer to 6.4.0-SNAPSHOT) as the extensions have been simplified a lot.

2015/10/20

Installing KIE Server and Workbench on same server

A common requirement for an installation on a development machine is to run both KIE Workbench and KIE Server on the same server to simplify the execution environment and avoid any port offset configuration.

This article will explain all the installation steps needed to make this happen on the two most frequently used containers:

  • Wildfly 8.2.0.Final
  • Apache Tomcat 8

Download binaries

So let's get our hands dirty and play around with some installation steps. First make sure you download the correct versions of the workbench and KIE Server for the container you target:

  • Wildfly
  • Tomcat

Wildfly

Deploy applications

Copy the downloaded files into WILDFLY_HOME/standalone/deployments; while copying, rename them to simplify the context paths that will be used on the application server:
  • rename kie-wb-distribution-wars-6.3.0.Final-wildfly8.war to kie-wb.war
  • rename kie-server-6.3.0.Final-ee7.war to kie-server.war

Configure your server

With Wildfly there is not much to set up, as both the transaction manager and persistence (including the data source) are already preconfigured.

Configure users

  • create a user in the application realm
    • name: kieserver
    • password: kieserver1!
    • roles: kie-server
  • create a user in the application realm to log on to the workbench
    • name: workbench
    • password: workbench!
    • roles: admin, kie-server

Configure system properties

The following list of properties needs to be given for both the workbench and KIE Server to work smoothly:
  • -Dorg.kie.server.id=wildfly-kieserver 
  • -Dorg.kie.server.location=http://localhost:8080/kie-server/services/rest/server 
  • -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller

Launching the server

The best way is to add the system properties to the startup command when launching the Wildfly server. Go to WILDFLY_HOME/bin and issue the following command:

./standalone.sh --server-config=standalone-full.xml -Dorg.kie.server.id=wildfly-kieserver -Dorg.kie.server.location=http://localhost:8080/kie-server/services/rest/server -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller

Tomcat


Deploy applications

Copy the downloaded files into TOMCAT_HOME/webapps; while copying, rename them to simplify the context paths that will be used on the application server:
  • rename kie-wb-distribution-wars-6.3.0.Final-tomcat7.war to kie-wb.war
  • rename kie-server-6.3.0.Final-webc.war to kie-server.war

Configure your server

  1. Copy the following libraries into TOMCAT_HOME/lib
    1. btm-2.1.4
    2. btm-tomcat55-lifecycle-2.1.4
    3. h2-1.3.161
    4. jacc-1.0
    5. jta-1.1
    6. kie-tomcat-integration-6.3.0.Final
    7. slf4j-api-1.7.2
    8. slf4j-jdk14-1.7.2
  2. Create the Bitronix configuration files to enable the JTA transaction manager
  • Create a file 'btm-config.properties' under TOMCAT_HOME/conf with the following content
bitronix.tm.serverId=tomcat-btm-node0
bitronix.tm.journal.disk.logPart1Filename=${btm.root}/work/btm1.tlog
bitronix.tm.journal.disk.logPart2Filename=${btm.root}/work/btm2.tlog
bitronix.tm.resource.configuration=${btm.root}/conf/resources.properties
  • Create a file 'resources.properties' under TOMCAT_HOME/conf with the following content
resource.ds1.className=bitronix.tm.resource.jdbc.lrc.LrcXADataSource
resource.ds1.uniqueName=jdbc/jbpm
resource.ds1.minPoolSize=10
resource.ds1.maxPoolSize=20
resource.ds1.driverProperties.driverClassName=org.h2.Driver
resource.ds1.driverProperties.url=jdbc:h2:mem:jbpm
resource.ds1.driverProperties.user=sa
resource.ds1.driverProperties.password=
resource.ds1.allowLocalTransactions=true

Configure users

Create the following users in tomcat-users.xml under TOMCAT_HOME/conf
  • create user
    • name: kieserver
    • password: kieserver1!
    • roles: kie-server
  • create user to log on to the workbench
    • name: workbench
    • password: workbench1!
    • roles: admin, kie-server
 
<tomcat-users>
  <role rolename="admin"/>
  <role rolename="analyst"/> 
  <role rolename="user"/>
  <role rolename="kie-server"/>

  <user username="workbench" password="workbench1!" roles="admin,kie-server"/>
  <user username="kieserver" password="kieserver1!" roles="kie-server"/>  
</tomcat-users>

Configure system properties

Configure the following system properties in the setenv.sh file under TOMCAT_HOME/bin
-Dbtm.root=$CATALINA_HOME 
-Dorg.jbpm.cdi.bm=java:comp/env/BeanManager 
-Dbitronix.tm.configuration=$CATALINA_HOME/conf/btm-config.properties 
-Djbpm.tsr.jndi.lookup=java:comp/env/TransactionSynchronizationRegistry 
-Djava.security.auth.login.config=$CATALINA_HOME/webapps/kie-wb/WEB-INF/classes/login.config 
-Dorg.kie.server.persistence.ds=java:comp/env/jdbc/jbpm 
-Dorg.kie.server.persistence.tm=org.hibernate.service.jta.platform.internal.BitronixJtaPlatform 
-Dorg.kie.server.id=tomcat-kieserver 
-Dorg.kie.server.location=http://localhost:8080/kie-server/services/rest/server 
-Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller

NOTE: Simply copy this into the setenv.sh file to properly set up KIE Server and Workbench on Tomcat:
CATALINA_OPTS="-Xmx512M -XX:MaxPermSize=512m -Dbtm.root=$CATALINA_HOME -Dorg.jbpm.cdi.bm=java:comp/env/BeanManager -Dbitronix.tm.configuration=$CATALINA_HOME/conf/btm-config.properties -Djbpm.tsr.jndi.lookup=java:comp/env/TransactionSynchronizationRegistry -Djava.security.auth.login.config=$CATALINA_HOME/webapps/kie-wb/WEB-INF/classes/login.config -Dorg.kie.server.persistence.ds=java:comp/env/jdbc/jbpm -Dorg.kie.server.persistence.tm=org.hibernate.service.jta.platform.internal.BitronixJtaPlatform -Dorg.kie.server.id=tomcat-kieserver -Dorg.kie.server.location=http://localhost:8080/kie-server/services/rest/server -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller"

Launching the server

Go to TOMCAT_HOME/bin and issue the following command:
./startup.sh

Going beyond default setup

Disabling KIE Server extensions

And that's all there is to do to set up both KIE Server and Workbench on a single server instance (either Wildfly or Tomcat). This article focused on a fully featured KIE Server installation, meaning both BRM (rules) and BPM (processes, tasks) capabilities, although KIE Server can be configured to serve only a subset of the capabilities - e.g. only BRM or only BPM.

To do so, one can configure KIE Server with system properties to disable extensions (BRM or BPM):

Wildfly:
Add the following system property to the startup command:
  • disable BRM: -Dorg.drools.server.ext.disabled=true
  • disable BPM: -Dorg.jbpm.server.ext.disabled=true
So the startup command would look like this:
./standalone.sh --server-config=standalone-full.xml -Dorg.kie.server.id=wildfly-kieserver -Dorg.kie.server.location=http://localhost:8080/kie-server/services/rest/server -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller -Dorg.jbpm.server.ext.disabled=true

Tomcat:
Add the following system property to the setenv.sh script (it must still be part of the CATALINA_OPTS configuration):
  • disable BRM: -Dorg.drools.server.ext.disabled=true
  • disable BPM: -Dorg.jbpm.server.ext.disabled=true
The complete content of setenv.sh is as follows:
CATALINA_OPTS="-Xmx512M -XX:MaxPermSize=512m -Dbtm.root=$CATALINA_HOME -Dorg.jbpm.cdi.bm=java:comp/env/BeanManager -Dbitronix.tm.configuration=$CATALINA_HOME/conf/btm-config.properties -Djbpm.tsr.jndi.lookup=java:comp/env/TransactionSynchronizationRegistry -Djava.security.auth.login.config=$CATALINA_HOME/webapps/kie-wb/WEB-INF/classes/login.config -Dorg.kie.server.persistence.ds=java:comp/env/jdbc/jbpm -Dorg.kie.server.persistence.tm=org.hibernate.service.jta.platform.internal.BitronixJtaPlatform -Dorg.kie.server.id=tomcat-kieserver -Dorg.kie.server.location=http://localhost:8080/kie-server/services/rest/server -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller -Dorg.jbpm.server.ext.disabled=true"

Changing database and persistence settings

Since persistence by default uses just an in-memory database (H2), it is good enough for first tryouts or demos but not for real usage. To be able to change the persistence settings, the following needs to be done:

KIE Workbench on Wildfly
Modify the data source configuration in Wildfly - either by manually editing the standalone-full.xml file or by using tools such as the Wildfly CLI. See the Wildfly documentation on how to define data sources.

  • Next, modify the persistence.xml that resides inside the workbench war file. Extract the kie-wb.war file into a directory with the same name and in the same location (WILDFLY_HOME/standalone/deployments).
  • Then navigate to kie-wb.war/WEB-INF/classes/META-INF
  • Edit the persistence.xml file and change the following elements (see the sketch after this list)
    • jta-data-source to point to the newly created data source (JNDI name) for your database
    • hibernate.dialect to the Hibernate supported dialect name for your database
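For illustration, here is a minimal sketch of the relevant fragment of persistence.xml, assuming a data source bound at java:jboss/datasources/jbpmDS and a MySQL database (keep the rest of the file as shipped in the war):

<persistence-unit name="...">
  <jta-data-source>java:jboss/datasources/jbpmDS</jta-data-source>
  ...
  <properties>
    <property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5Dialect"/>
    ...
  </properties>
</persistence-unit>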
KIE Server on Wildfly
There is no need to make any changes to the application (the war file), as the persistence can be reconfigured via system properties. Set the following system properties at the end of the server startup command:

  • -Dorg.kie.server.persistence.ds=java:jboss/datasources/jbpmDS
  • -Dorg.kie.server.persistence.dialect=org.hibernate.dialect.MySQL5Dialect
The full command to start the server will be:
./standalone.sh --server-config=standalone-full.xml -Dorg.kie.server.id=wildfly-kieserver -Dorg.kie.server.location=http://localhost:8080/kie-server/services/rest/server -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller -Dorg.kie.server.persistence.ds=java:jboss/datasources/jbpmDS -Dorg.kie.server.persistence.dialect=org.hibernate.dialect.MySQL5Dialect

KIE Workbench on Tomcat
To modify the data source configuration in Tomcat, you need to alter the resources.properties file (inside TOMCAT_HOME/conf) that defines the database connection. For MySQL it could look like this:

resource.ds1.className=com.mysql.jdbc.jdbc2.optional.MysqlXADataSource
resource.ds1.uniqueName=jdbc/jbpmDS
resource.ds1.minPoolSize=0
resource.ds1.maxPoolSize=10
resource.ds1.driverProperties.user=guest
resource.ds1.driverProperties.password=guest
resource.ds1.driverProperties.URL=jdbc:mysql://localhost:3306/jbpm
resource.ds1.allowLocalTransactions=true

Make sure you copy the MySQL JDBC driver into TOMCAT_HOME/lib, otherwise it won't provide proper connection handling.
  • Next, modify the persistence.xml that resides inside the workbench war file. Extract the kie-wb.war file into a directory with the same name and in the same location (TOMCAT_HOME/webapps).
  • Then navigate to kie-wb.war/WEB-INF/classes/META-INF
  • Edit the persistence.xml file and change the following elements
    • jta-data-source to point to the newly created data source (JNDI name) for your database
    • hibernate.dialect to the Hibernate supported dialect name for your database
KIE Server on Tomcat
There is no need to make any changes to the application (the war file), as the persistence can be reconfigured via system properties. Set or modify (as the data source is already defined there) the following system properties in the setenv.sh script inside TOMCAT_HOME/bin:

  • -Dorg.kie.server.persistence.ds=java:comp/env/jdbc/jbpmDS
  • -Dorg.kie.server.persistence.dialect=org.hibernate.dialect.MySQL5Dialect
The complete content of the setenv.sh script is as follows:
CATALINA_OPTS="-Xmx512M -XX:MaxPermSize=512m -Dbtm.root=$CATALINA_HOME -Dorg.jbpm.cdi.bm=java:comp/env/BeanManager -Dbitronix.tm.configuration=$CATALINA_HOME/conf/btm-config.properties -Djbpm.tsr.jndi.lookup=java:comp/env/TransactionSynchronizationRegistry -Djava.security.auth.login.config=$CATALINA_HOME/webapps/kie-wb/WEB-INF/classes/login.config -Dorg.kie.server.persistence.ds=java:comp/env/jdbc/jbpmDS -Dorg.kie.server.persistence.tm=org.hibernate.service.jta.platform.internal.BitronixJtaPlatform -Dorg.kie.server.persistence.dialect=org.hibernate.dialect.MySQL5Dialect -Dorg.kie.server.id=tomcat-kieserver -Dorg.kie.server.location=http://localhost:8080/kie-server/services/rest/server -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller"

Note that KIE Server persistence is required only for the BPM capability, so if you disable it you can skip any KIE Server related persistence changes.

And that would be it. Hopefully this article will help with the installation of KIE Workbench and KIE Server on a single application server.

Have fun, and comments are more than welcome.

2015/09/17

Unified KIE Execution Server - Part 4

Here we come with the next part of the Unified KIE Execution Server blog series - this time Part 4, which introduces a client UI written in JavaScript - AngularJS.
It aims at illustrating how easy it is to build a fully featured client UI that interacts with the KIE Execution Server through its REST API.

The KIE Execution Server has been designed from the very beginning to be lightweight and consumable with whatever technology you like. Obviously it has to run on Java, but components that integrate with it can be written in any language. To demonstrate that it actually works, I came up with a very basic UI written in AngularJS that uses:

  • REST API
  • JSON as data format

So what can it do? Quite a lot, to be honest - although those who are familiar with AngularJS will quickly notice, while looking at the code, that I am not an expert in this area. Apologies for that; the intention was not to show best practices in building AngularJS applications but to show you how easy it is (as I managed to do so :)) to interact with the KIE Execution Server.

Let's start with it then...

Installation

Installation is extremely simple - just clone this repository, where you'll find the jbpm-angular-js module. This is the application that we'll be using for the demo. Once you have it locally:
  • copy the app folder that exists in jbpm-angular-js into your Wildfly installation:
          WILDFLY_HOME/standalone/deployments
          it should be co-located with kie-server.war
  • rename the folder from app to app.war
And that's it, your installation is complete.

NOTE: we put it on the same server as the KIE Execution Server to avoid any CORS related issues that would come up when using a JavaScript application that resides on a different server than the back end application.

Now you can start the Wildfly server and (assuming you use the configuration used in the previous parts of this blog series) access the AngularJS application at:



AngularJS logon screen for KIE Execution Server app
You'll be presented with a very simple logon screen that asks (as usual) for a user name and password, and in addition to that for the KIE Execution Server URL that will be used as our backend service. Here you can simply put:


Make sure to provide valid credentials (e.g. kieserver/kieserver1!) that are known to the KIE Execution Server so you can be properly authenticated.

Demo description

Let's try to make use of the application and the backend KIE Execution Server to see how it works. Here is the list of steps we are going to perform to illustrate the capabilities of the custom UI application:
  • look at available containers 
  • look at available process definitions
  • examine details of process definition we are going to start an instance of
  • start process instance with variables (both simple type and custom type)
  • examine process instance details
  • work with user tasks
    • list available user tasks for logged in user
    • examine details of selected task
    • claim task
    • start task
    • complete task with variables (complex type)
The following screenshot shows the process definition that we are going to use:


A very simple process that consists of two user tasks:
  • the first, 'Review and Register', is used for gathering data from the assigned user
  • the second, 'Show details', is just for demo purposes, to illustrate that the process variable was properly updated with the data given in the first task
This process has two process variables:
  • person - that is of type org.jbpm.test.Person and consists of following fields
    • name - String
    • address - String
    • age - Integer
    • registered - Boolean
  • note - String
While working with this process we are going to exchange data between client (JavaScript) and server (Java) and as data format we will use JSON.

An important note for this application - this is a very basic and generic application, so it requires valid JSON values to be provided when working with variables. To give an example (or two...):

  • "my string" - for string type
  • 123 - for number type
  • {"one", "two", "three"} - for list of strings
  • {"Person":{"name":"john","age":25}} - for custom objects 
Custom objects require an identifier that the KIE Execution Server can use when unmarshalling to the proper type. This can be given in either way:
  • Simple class name = Person
  • Fully qualified class name = org.jbpm.test.Person
Both formats are supported, though the FQCN is usually safer (in case of possible conflicts when there is more than one class with the same simple name). That's not such a common case, therefore the short/simple name can be used in most cases.
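To make this concrete for the demo process, the map of variables passed when starting a process instance could look roughly like this (a sketch only; the exact wrapping depends on the endpoint being called, and the field values are made up):

{
  "person": {"org.jbpm.test.Person": {"name": "john", "age": 25}},
  "note": "please review and register"
}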

Before it can actually be used (as presented in the screencast below), you need to deploy a container to the KIE server. Deploy the sample project called kie-server-demo that you can find in this repository (simply clone it and build it locally with Maven or, even better, with the KIE Workbench) - see part 3 on how to deploy containers/projects.

Demo


Here is a screencast demoing the entire application working with the described process.




I'd like to encourage you to give it a try yourself and see how it fits your needs. With this you can start building a UI for the KIE Execution Server in your preferred technology/language. It has never been so simple :)


Comments and ideas for improvements more than welcome.

2015/09/11

Unified KIE Execution Server - Part 3

Part 3 of the Unified KIE Execution Server series deals with the so-called managed vs. unmanaged setup of the environment. In version 6.2, users went through the Rules Deployments perspective to create and manage KIE Execution Server instances.
That approach required the execution server to be configured and up and running - some sort of online-only registration that did not work if the KIE server instance was down.

In version 6.3, this has been enhanced to allow complete configuration of KIE Execution Servers inside the workbench even if there are no actual instances configured. So let's first talk about managed and unmanaged instances...

Managed KIE Execution Server 

A managed instance is one that requires a controller to be available in order to start up properly. The controller is a component responsible for keeping the configuration in a centralized way. That does not mean there must be only a single controller in the environment - managed KIE Execution Servers are capable of dealing with multiple controllers.

NOTE: It's important to mention that even though there can be multiple controllers, they should be kept in sync so that, regardless of which one is contacted by a KIE Server instance, it provides the same set of configuration.

The controller is only needed when a KIE Execution Server starts, as this is the time when it needs to download the configuration before it can be properly started. When a managed KIE Execution Server starts, it will keep trying to connect to a controller until the connection is successfully established. Until then no containers will be deployed to it, even when there is local storage available with a configuration. The reason for this is to ensure consistency: if the KIE Execution Server was down and the configuration has changed in the meantime, it must connect to a controller to fetch the up-to-date configuration before it starts serving requests.

Configuration has been mentioned several times, but what is it? The configuration is a set of information:

  • containers to be deployed and started
  • configuration items - currently this is a placeholder for further enhancements that will allow remote configuration of KIE Execution Server components - timers, persistence, etc.

The controller is a component responsible for the overall management of KIE Execution Servers. It provides a REST API that is divided into two parts:

  • the controller itself, which is exposed so KIE Execution Server instances can interact with it
  • administration, which allows you to remotely manage KIE Execution Servers:
    • add/remove servers
    • add/remove containers to/from the servers
    • start/stop containers on servers
The controller deals only with the KIE Execution Server configuration - or definition, to put it differently. It does not handle any runtime components of KIE Execution Server instances; they are always considered remote to the controller. The controller is responsible for persisting the configuration so that it survives restarts of the controller itself. It should also manage synchronization when multiple controllers are configured, to keep all definitions up to date on all controller instances.

By default the controller is shipped with the KIE workbench (jbpm console) and provides a fully featured management interface (both REST API and UI). It uses the underlying Git repository as a persistent store, and thus when the Git repositories are clustered (using Apache Zookeeper and Apache Helix) the controller synchronization is covered as well.

The above diagram illustrates a single controller (workbench) setup with multiple KIE Execution Server instances managed by it. The following diagram illustrates the clustered setup where multiple controller instances are kept in sync over Zookeeper.


In the above diagram we can see that KIE Execution Server instances are capable of connecting to all controllers, but each will connect to only one. Each instance will attempt to connect to a controller as long as it can reach one. Once a connection is established with one of the controllers it will skip the other controllers.

Working with managed servers

There are two approaches that users can take when working with managed KIE Server instances:

Configuration first
With this approach, the user starts with the controller (either UI or REST API) and creates and configures KIE Execution Server definitions. A definition is composed of:
    • identification of the server (id and name + optionally version for improved readability)
    • containers 

Register first
Let the KIE Execution Server instance auto-register with the controller and then configure it in terms of what containers to run on it. This simply skips the registration step done in the first approach and populates the definition with the server id, name and version directly upon auto registration (or, to put it simply, on connect).

In general there is no big difference and which approach is taken is pretty much a personal preference. The outcome of both will be the same.

Unmanaged KIE Execution Server

An unmanaged KIE Execution Server is in turn just a standalone instance and thus must be configured individually using the REST/JMS API of the KIE Execution Server itself. The configuration is persisted into a file that is considered internal server state. It's updated upon the following operations:
  • deploy container
  • undeploy container
  • start container
  • stop container
Note that the KIE Execution Server will start only the containers that are marked as started. Even if the KIE Execution Server is restarted, upon boot it will only make available the containers that were in started state before the server was shut down.


In most cases KIE Execution Server should be run in managed mode as that provides lots of benefits in terms of control and configuration. More benefits will become apparent when discussing clustering and scalability of KIE Execution Servers, where managed mode will show its true power :)

Let's run in managed mode

So that's about it in theory; let's try to run the KIE Execution Server in managed mode to see how it can be operated.

For that we need one Wildfly instance that will host the controller - the KIE workbench - and another one that will host the KIE Execution Server. The second one we already have from part 1 of this blog series.
NOTE: You can run both the KIE workbench and the KIE Execution Server on the same application server instance, but it won't show the improved manageability as they will always be up or down together.

So let's start with installing the workbench on Wildfly. Similar to what we had to do for the KIE Execution Server, we start with creating user(s):
  • kieserver (with password kieserver1!) that will be used to communicate between KIE Server and the controller; that user must be a member of the following roles:
    • kie-server
    • rest-all
  • either add the following roles to the kieserver user or create another user that will be used to log on to the KIE workbench to manage KIE Execution Servers:
    • admin
    • rest-all
To do so use the Wildfly utility script - add-user, located in WILDFLY_HOME/bin - and add application users (for details on how to do that see part 1 of this blog series).
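
For example, a non-interactive call along these lines should do the trick (just a sketch - flag names can vary between Wildfly versions, so check ./add-user.sh --help if in doubt):

./add-user.sh -a -u kieserver -p kieserver1! -g kie-server,rest-all,admin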

Once we have the users created, let's deploy the application. Download the KIE workbench for Wildfly 8 and copy the war file into WILDFLY_HOME/standalone/deployments.

NOTE: similar to KIE Server, I personally remove the version number and classifier from the war file name and make it as simple as 'kie-wb.war', which keeps the context path short and thus easier to type.

And now we are ready to launch the KIE workbench. To do so go to WILDFLY_HOME/bin and start it with the following command:

./standalone.sh --server-config=standalone-full.xml

Wait for the server to finish booting and then go to:

http://localhost:8080/kie-wb

Log on with the user you created (e.g. kieserver) and go to the Deployments --> Rules Deployments perspective. See the following screencast (no audio) that showcases the capabilities described in this article. It starts with the configuration first approach and shows the following:
  • create KIE Execution Server definition in the controller
    • specified identifier (first-kie-server) and name
  • create new container in the KIE Execution Server definition (org.jbpm:HR:1.0)
  • configure the KIE Execution Server to be managed by specifying the URL of the controller via system properties (see the combined startup command below):
    • -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller
    • -Dorg.kie.server.id=first-kie-server (it is extremely important that this id matches the one created in the first step in the KIE workbench)
  • start kie server and observe controller's log to see notification that kie server has connected to it
  • start container in controller and observe it being automatically started on KIE Execution Server instance
  • shutdown KIE Execution Server and observe logs and UI with updated status of kie server being disconnected
  • illustrate various management options in the controller and their effect on the KIE Execution Server instance.
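
For reference, the KIE Execution Server startup command then simply carries both properties, roughly like this (in addition to whatever options you already used in part 1, e.g. a port offset):

./standalone.sh --server-config=standalone-full.xml -Dorg.kie.server.id=first-kie-server -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller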


So this screencast concludes the third part of the Unified KIE Execution Server series. With this in mind we can move on to more advanced cases where we show integration with non-Java clients and clustering. More will come soon...




2015/09/10

Unified KIE Execution Server - Part 2

This blog post is a continuation of the first article in the series about KIE Execution Server. In this article the KIE Server Client will be introduced and used for basic operations on the KIE Execution Server.

In the first part, we went through the details of installation on Wildfly and verification with a simple REST client to show it's actually working. This time we do pretty much the same verification, although we expand it with further operations and do it via the KIE Server Client instead.

So let's get started. We are going to use the same container project (hr - org.jbpm:HR:1.0) that includes the hiring process; that process has a set of user tasks that we will be creating and working with. To be able to work on tasks, our user (kieserver) needs to be a member of the following roles used by the hiring process:

  • HR
  • IT
  • Accounting
So to add these roles to our user we again use the add-user script that comes with Wildfly to simply update the already existing user.


NOTE: don't forget that kieserver user must have kie-server role assigned as well.

With that we are ready to start the server again

KIE Server Client

KIE Server Client is a lightweight library that custom Java applications can use to interact with the KIE Execution Server. That library greatly simplifies usage of the KIE Execution Server and makes it easier to migrate between versions because it hides all the internals that might change between versions.

To illustrate that it is actually lightweight, here is the list of dependencies needed at runtime to execute the KIE Server Client:


[INFO]
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ kie-server-client ---
[INFO] org.kie.server:kie-server-client:bundle:6.3.0-SNAPSHOT
[INFO] +- org.kie:kie-api:jar:6.3.0-SNAPSHOT:compile
[INFO] +- org.kie:kie-internal:jar:6.3.0-SNAPSHOT:compile
[INFO] +- org.kie.server:kie-server-api:jar:6.3.0-SNAPSHOT:compile
[INFO] |  +- org.drools:drools-core:jar:6.3.0-SNAPSHOT:compile
[INFO] |  |  +- org.mvel:mvel2:jar:2.2.6.Final:compile
[INFO] |  |  \- commons-codec:commons-codec:jar:1.4:compile
[INFO] |  +- org.codehaus.jackson:jackson-core-asl:jar:1.9.9:compile
[INFO] |  +- com.thoughtworks.xstream:xstream:jar:1.4.7:compile
[INFO] |  |  +- xmlpull:xmlpull:jar:1.1.3.1:compile
[INFO] |  |  \- xpp3:xpp3_min:jar:1.1.4c:compile
[INFO] |  \- org.apache.commons:commons-lang3:jar:3.1:compile
[INFO] +- org.jboss.resteasy:jaxrs-api:jar:2.3.10.Final:compile
[INFO] |  \- org.jboss.logging:jboss-logging:jar:3.1.4.GA:compile
[INFO] +- org.kie.remote:kie-remote-common:jar:6.3.0-SNAPSHOT:compile
[INFO] +- org.codehaus.jackson:jackson-xc:jar:1.9.9:compile
[INFO] +- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.9:compile
[INFO] +- org.slf4j:slf4j-api:jar:1.7.2:compile
[INFO] +- org.jboss.spec.javax.jms:jboss-jms-api_1.1_spec:jar:1.0.1.Final:compile
[INFO] +- com.sun.xml.bind:jaxb-core:jar:2.2.11:compile
[INFO] \- com.sun.xml.bind:jaxb-impl:jar:2.2.11:compile


So let's set up a simple Maven project that will use the KIE Server Client to interact with the execution server:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.jbpm.test</groupId>
  <artifactId>kie-server-test</artifactId>
  <version>0.0.1-SNAPSHOT</version>

  <dependencies>
    <dependency>
        <groupId>org.kie</groupId>
        <artifactId>kie-internal</artifactId>
        <version>6.3.0-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>org.kie.server</groupId>
        <artifactId>kie-server-client</artifactId>
        <version>6.3.0-SNAPSHOT</version>
    </dependency>
    <dependency>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
      <version>1.1.2</version>
    </dependency>
  </dependencies>
</project>

Those are all the dependencies needed to have the KIE Server Client embedded in a custom application. Equipped with this, we can start running the KIE Server Client against a given server instance.

Following is the code snippet required to construct a KIE Server Client instance using REST as the transport:

String serverUrl = "http://localhost:8230/kie-server/services/rest/server";
String user = "kieserver";
String password = "kieserver1!";

String containerId = "hr";
String processId = "hiring";

KieServicesConfiguration configuration = KieServicesFactory.newRestConfiguration(serverUrl, user, password);
// other formats supported MarshallingFormat.JSON or MarshallingFormat.XSTREAM
configuration.setMarshallingFormat(MarshallingFormat.JAXB);
// in case custom classes shall be used, they need to be added and the client needs to be created with a class loader that has these classes available
//configuration.addJaxbClasses(extraClasses);
//KieServicesClient kieServicesClient =  KieServicesFactory.newKieServicesClient(configuration, kieContainer.getClassLoader());
KieServicesClient kieServicesClient =  KieServicesFactory.newKieServicesClient(configuration);

Once we have the client instance we can start executing operations. We start with checking if the container we want to work with is already deployed, and if not, deploy it:

boolean deployContainer = true;
KieContainerResourceList containers = kieServicesClient.listContainers().getResult();
// check if the container is not yet deployed, if not deploy it
if (containers != null) {
    for (KieContainerResource kieContainerResource : containers.getContainers()) {
        if (kieContainerResource.getContainerId().equals(containerId)) {
            System.out.println("\t######### Found container " + containerId + " skipping deployment...");
            deployContainer = false;
            break;
        }
    }
}
// deploy container if not there yet        
if (deployContainer) {
    System.out.println("\t######### Deploying container " + containerId);
    KieContainerResource resource = new KieContainerResource(containerId, new ReleaseId("org.jbpm", "HR", "1.0"));
    kieServicesClient.createContainer(containerId, resource);
}
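
As a side note, createContainer returns a ServiceResponse that can be inspected to verify the outcome. Instead of ignoring the return value as above, a minimal sketch (assuming the 6.3 client API, where the response wraps the deployed KieContainerResource) could be:

ServiceResponse<KieContainerResource> response = kieServicesClient.createContainer(containerId, resource);
if (response.getType() != ServiceResponse.ResponseType.SUCCESS) {
    System.out.println("\t######### Container deployment failed: " + response.getMsg());
}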

Next let's check what is available in terms of processes and get some details about the process definition we are going to start:


// query for all available process definitions
QueryServicesClient queryClient = kieServicesClient.getServicesClient(QueryServicesClient.class);
List<ProcessDefinition> processes = queryClient.findProcesses(0, 10);
System.out.println("\t######### Available processes" + processes);

ProcessServicesClient processClient = kieServicesClient.getServicesClient(ProcessServicesClient.class);
// get details of process definition
ProcessDefinition definition = processClient.getProcessDefinition(containerId, processId);
System.out.println("\t######### Definition details: " + definition);

We have all the details, so we are ready to start a process instance of the hiring process. We set two process variables:

  • name - of type string 
  • age - of type integer


// start process instance
Map<String, Object> params = new HashMap<String, Object>();
params.put("name", "john");
params.put("age", 25);
Long processInstanceId = processClient.startProcess(containerId, processId, params);
System.out.println("\t######### Process instance id: " + processInstanceId);

Once the process is started, we can fetch tasks waiting to be completed by the kieserver user:

UserTaskServicesClient taskClient = kieServicesClient.getServicesClient(UserTaskServicesClient.class);
// find available tasks
List<TaskSummary> tasks = taskClient.findTasksAssignedAsPotentialOwner(user, 0, 10);
System.out.println("\t######### Tasks: " +tasks);

// complete task
Long taskId = tasks.get(0).getId();

taskClient.startTask(containerId, taskId, user);
taskClient.completeTask(containerId, taskId, user, null);


Since the task has been completed and the process has moved on to the next one, we can continue until there are no more tasks available, or we can simply abort the process instance to quit the work on this instance. Before we abort the process instance, let's examine which nodes have been completed so far:

List<NodeInstance> completedNodes = queryClient.findCompletedNodeInstances(processInstanceId, 0, 10);
System.out.println("\t######### Completed nodes: " + completedNodes);

This will tell us whether the task has already been completed and the process has moved on. Now let's abort the process instance:

// at the end abort process instance
processClient.abortProcessInstance(containerId, processInstanceId);

ProcessInstance processInstance = queryClient.findProcessInstanceById(processInstanceId);
System.out.println("\t######### ProcessInstance: " + processInstance);

In the last step we fetch the process instance to check if it was properly aborted - the process instance state should be set to 3 (aborted).
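
For example, a minimal check could look like this (a sketch, assuming the query client's ProcessInstance exposes its state as an Integer):

// 1 - active, 2 - completed, 3 - aborted
if (processInstance.getState() == 3) {
    System.out.println("\t######### Process instance was aborted as expected");
}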

Last but not least, the KIE Server Client can be used to insert facts and fire rules in a very similar way:

// work with rules
List<GenericCommand> commands = new ArrayList<GenericCommand>();
BatchExecutionCommandImpl executionCommand = new BatchExecutionCommandImpl(commands);
executionCommand.setLookup("defaultKieSession");

InsertObjectCommand insertObjectCommand = new InsertObjectCommand();
insertObjectCommand.setOutIdentifier("person");
insertObjectCommand.setObject("john");

FireAllRulesCommand fireAllRulesCommand = new FireAllRulesCommand();

commands.add(insertObjectCommand);
commands.add(fireAllRulesCommand);

RuleServicesClient ruleClient = kieServicesClient.getServicesClient(RuleServicesClient.class);
ruleClient.executeCommands(containerId, executionCommand);
System.out.println("\t######### Rules executed");
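
If you also want to inspect the outcome of the rule execution, capture the response instead of discarding it. A small sketch, assuming executeCommands with a command object returns a ServiceResponse<String> carrying the marshalled execution results (like the payload based variant does):

ServiceResponse<String> response = ruleClient.executeCommands(containerId, executionCommand);
System.out.println("\t######### Rules execution result: " + response.getResult());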

So that concludes a simple usage scenario of the KIE Server Client that covers:

  • containers
  • processes
  • tasks
  • rules
A complete maven project with this sample execution can be found here.

Enjoy and stay tuned for more to come about awesome KIE Execution Server :)