This chapter summarizes integration and describes in detail each of the extension APIs that allow integrating external data and/or extend runtime functionality.
For information on calling external services via instance method invocation, for instance to integrate with dependency injection frameworks such as Spring or Guice, please see Section 5.17.5, “Class and Event-Type Variables”.
For information on input and output adapters that connect to an event transport and perform event transformation for incoming and outgoing on-the-wire event data, for use with streaming data, please see the EsperIO reference documentation. The data flow instances as described in Chapter 20, EPL Reference: Data Flow are an easy way to plug in operators that perform input and output. Data flows allow providing parameters and managing individual flows independent of runtime lifecycle. Also consider using the Plug-in Loader API for creating a new adapter that starts or stops as part of the CEP runtime initialization and destroy lifecycle, see Section 16.15, “Plug-In Loader”.
To join data that resides in a relational database and that is accessible via JDBC driver and SQL statement the runtime offers syntax for using SQL within EPL, see Section 5.13, “Accessing Relational Data via SQL”. A relational database input and output adapter for streaming input from and output to a relational database also exists (EsperIO).
To join data that resides in a non-relational store the runtime offers two means: first, the virtual data window, as described below, for transparently integrating the external store as a named window; second, a special join syntax based on static method invocation; see Section 5.14, “Accessing Non-Relational Data via Method, Script or UDF Invocation”.
The best way to test that your extension code works correctly is to write unit tests against a statement that utilizes the extension code. Samples can be obtained from the Esper regression test code base.
For all extension code and similar to listeners and subscribers, to send events into the runtime from extension code the routeEvent method should be used (and not sendEvent) to avoid the possibility of stack overflow due to event-callback looping and to ensure correct processing of the current and routed event.
Note that if outbound-threading is enabled, listeners and subscribers should use sendEvent and not routeEvent.
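As an illustration only, the following sketch shows a listener that routes a derived event back into the runtime; the event type name MyDerivedEvent and the id field are assumptions, not part of this documentation:

```java
import com.espertech.esper.common.client.EventBean;
import com.espertech.esper.runtime.client.UpdateListener;

import java.util.Collections;
import java.util.Map;

public class RoutingListenerSample {
    // UpdateListener receives the statement and runtime in Esper 8 and later.
    public static UpdateListener createListener() {
        return (newEvents, oldEvents, statement, runtime) -> {
            if (newEvents == null) {
                return;
            }
            for (EventBean event : newEvents) {
                Map<String, Object> derived =
                    Collections.singletonMap("id", event.get("id"));
                // From extension code, route rather than send, to avoid
                // stack overflow due to event-callback looping:
                runtime.getEventService().routeEventMap(derived, "MyDerivedEvent");
            }
        };
    }
}
```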
For all extension code it is not safe to deploy and undeploy within the extension code. For example, it is not safe to implement a data window that deploys compiled modules and that undeploys deployments.
Single-row functions return a single value. They are not expected to aggregate rows but instead should be stateless functions. These functions can appear in any expressions and can be passed any number of parameters.
The following steps are required to develop and use a custom single-row function.
Implement a class providing one or more public static methods accepting the number and type of parameters as required.
Register the single-row function class and method name with the compiler by supplying a function name.
You may not override a built-in function with a single-row function provided by you. The single-row function you register must have a different name than any of the built-in functions.
An example single-row function can also be found in the examples under the runtime configuration example.
Single-row function classes have no further requirement than to provide a public static method.
The following sample single-row function simply computes a percentage value based on two number values.
This sample class provides a public static method by name computePercent to return a percentage value:
public class MyUtilityClass {
  public static double computePercent(double amount, double total) {
    return amount / total * 100;
  }
}
The class name, the method name and the function name of the new single-row function must be added to the compiler configuration. The configuration shown below is XML, however the same options are available through the configuration API:
<esper-configuration xmlns="http://www.espertech.com/schema/esper">
  <compiler>
    <plugin-singlerow-function name="percent"
      function-class="mycompany.MyUtilityClass"
      function-method="computePercent" />
  </compiler>
</esper-configuration>
Note that the function name and method name need not be the same.
The new single-row function is now ready to use in a statement:
select percent(fulfilled,total) from MyEvent
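The same registration can be performed through the configuration API; the fragment below is a sketch equivalent to the XML shown earlier:

```java
import com.espertech.esper.common.client.configuration.Configuration;

public class RegisterPercentFunction {
    public static Configuration configure() {
        Configuration configuration = new Configuration();
        // function name, class name and static method name
        configuration.getCompiler().addPlugInSingleRowFunction(
            "percent", "mycompany.MyUtilityClass", "computePercent");
        return configuration;
    }
}
```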
When selecting from a single stream, you may also pass wildcard to the single-row function and the function receives the underlying event:
select percent(*) from MyEvent
If the single-row function returns an object that provides further functions, you may chain function calls.
The following demonstrates a chained single-row function. The example assumes that a single-row function by name calculator returns an object that provides the add function which accepts two parameters:
select calculator().add(5, amount) from MyEvent
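One possible shape of such a function is sketched below; the class names are assumptions, only the function name calculator and the chained method add come from the example above:

```java
public class CalculatorUtil {
    // Single-row function method: registered under the name "calculator",
    // it returns an object whose public methods can be chained in EPL.
    public static Calculator calculator() {
        return new Calculator();
    }

    public static class Calculator {
        public double add(double first, double second) {
            return first + second;
        }
    }
}
```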
When a single-row function receives parameters that are all constant values or expressions that themselves receive only constant values, the runtime can pre-evaluate the result of the single-row function at time of statement compilation. By default, the runtime does not pre-evaluate the single-row function unless you configure the value cache as enabled.
The following configuration XML enables the value cache for the single-row function:
<esper-configuration xmlns="http://www.espertech.com/schema/esper">
  <compiler>
    <plugin-singlerow-function name="getDate"
      function-class="mycompany.DateUtil"
      function-method="parseDate"
      value-cache="enabled" />
  </compiler>
</esper-configuration>
When the single-row function receives constants as parameters, the runtime computes the result once and returns the cached result for each evaluation:
select getDate('2002-05-30T9:00:00.000') from MyEvent
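A parseDate method matching the configuration above could look like the following sketch; the exact parsing logic and pattern are assumptions:

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.util.Date;

public class DateUtil {
    // Pattern letter H accepts single-digit hours such as in
    // "2002-05-30T9:00:00.000" from the EPL sample above.
    private static final DateTimeFormatter FORMAT =
        DateTimeFormatter.ofPattern("yyyy-MM-dd'T'H:mm:ss.SSS");

    public static Date parseDate(String text) {
        LocalDateTime parsed = LocalDateTime.parse(text, FORMAT);
        return Date.from(parsed.atZone(ZoneId.systemDefault()).toInstant());
    }
}
```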
Your EPL may use plug-in single row functions among the predicate expressions as part of the filters in a stream or pattern.
For example, the EPL below uses the function computeHash as part of a predicate expression:
select * from MyEvent(computeHash(field) = 100)
When you have many statements or many context partitions that refer to the same function, event type and parameters in a predicate expression, the compiler may optimize evaluation: The function gets evaluated only once per event.
While the optimization is enabled by default for all plug-in single-row functions, you can also disable the optimization for a specific single-row function. Disabling the optimization for a single-row function may reduce the memory the runtime uses to identify reusable function footprints but may cause the runtime to evaluate the function more frequently than necessary.
The following configuration XML disables the filter optimization for a single-row function (by default it is enabled):
<esper-configuration xmlns="http://www.espertech.com/schema/esper">
  <compiler>
    <plugin-singlerow-function name="computeHash"
      function-class="mycompany.HashUtil"
      function-method="computeHash"
      filter-optimizable="disabled" />
  </compiler>
</esper-configuration>
EPL allows parameters to a single-row function to be events. In this case, declare the method parameter type to take EventBean, Collection<EventBean> or the underlying class as a parameter.
Sample method footprints are:
public static double doCompute(EventBean eventBean) {...}

public static boolean doCheck(MyEvent myEvent, String text) {...}

public static String doSearch(Collection<EventBean> events) {...}
To pass the event, specify the stream alias, wildcard (*) or, when used in a pattern, the tag name.
The EPL below shows example uses:
select * from MyEvent(doCompute(me) = 100) as me
select * from MyEvent where doCompute(*) = 100
select * from pattern[a=MyEvent -> MyEvent(doCheck(a, 'sometext'))]
select * from MyEvent#time(1 min) having doCompute(last(*))
select * from MyEvent#time(1 min) having doSearch(window(*))
Declare the method parameter as Collection<EventBean> if the method expects an expression result that returns multiple events. Declare the method parameter as EventBean if the method expects an expression result that returns a single event.
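The event-typed footprints above might be implemented as in the following sketch; the property name amount is an assumption for illustration:

```java
import com.espertech.esper.common.client.EventBean;

import java.util.Collection;

public class MyEventFunctions {
    // Receives the single event passed via stream alias, wildcard (*) or pattern tag.
    public static double doCompute(EventBean eventBean) {
        return ((Number) eventBean.get("amount")).doubleValue();
    }

    // Receives multiple events, for example from window(*).
    public static String doSearch(Collection<EventBean> events) {
        StringBuilder ids = new StringBuilder();
        for (EventBean event : events) {
            ids.append(event.get("amount")).append(";");
        }
        return ids.toString();
    }
}
```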
A single-row function may return events. Please declare your single-row function method to return Collection<EventBean> or EventBean[] and configure the event type name.
For example, assuming there is a MyItem event type such as created via create schema MyItem(id string):
public static EventBean[] myItemProducer(String string, EPLMethodInvocationContext context) {
  String[] split = string.split(",");
  EventBean[] events = new EventBean[split.length];
  for (int i = 0; i < split.length; i++) {
    events[i] = context.getEventBeanService().adapterForMap(
        Collections.singletonMap("id", split[i]), "MyItem");
  }
  return events;
}
The sample EPL queries items, filtering those items that have a given value for the id field:
select myItemProducer(ordertext).where(v => v.id in ('id1', 'id3')) as c0 from Order
This sample code registers the myItemProducer function as a single-row function with an event type name:
ConfigurationCompilerPlugInSingleRowFunction entry = new ConfigurationCompilerPlugInSingleRowFunction();
entry.setName("myItemProducer");
entry.setFunctionClassName(...);
entry.setFunctionMethodName(...);
entry.setEventTypeName("MyItem");

Configuration configuration = new Configuration();
configuration.getCompiler().addPlugInSingleRowFunction(entry);
If your single-row function returns EventBean[] and is used with enumeration methods, the configuration must provide an event type name.
The runtime can pass an object containing contextual information such as statement name, function name, runtime URI and context partition id to your method. The container for this information is EPLMethodInvocationContext in package com.espertech.esper.common.client.hook.expr.
Please declare your method to take EPLMethodInvocationContext as the last parameter. The runtime then passes the information along.
A sample method footprint and EPL are shown below:
public static double computeSomething(double number, EPLMethodInvocationContext context) {...}
select computeSomething(10) from MyEvent
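A sketch of a method body using the context is shown below; the computation itself and the use of the context for logging are assumptions:

```java
import com.espertech.esper.common.client.hook.expr.EPLMethodInvocationContext;

public class ContextAwareFunctions {
    public static double computeSomething(double number, EPLMethodInvocationContext context) {
        // The context identifies the calling statement and runtime, useful for logging.
        System.out.println("Evaluated by statement " + context.getStatementName()
            + " on runtime " + context.getRuntimeURI());
        return number * 2;  // placeholder computation
    }
}
```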
By default the runtime logs any exceptions thrown by the single row function and returns a null value. To have exceptions be re-thrown instead, which makes exceptions visible to any registered exception handler, please configure as discussed herein.
Set the rethrow-exceptions flag in the XML configuration or the rethrowExceptions flag in the API when registering the single-row function to have the runtime re-throw any exceptions that the single-row function may throw.
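The API-based registration with the re-throw flag could look as in this configuration sketch; the class and method names are placeholders:

```java
import com.espertech.esper.common.client.configuration.Configuration;
import com.espertech.esper.common.client.configuration.compiler.ConfigurationCompilerPlugInSingleRowFunction;

public class RethrowConfigSample {
    public static Configuration configure() {
        ConfigurationCompilerPlugInSingleRowFunction entry =
            new ConfigurationCompilerPlugInSingleRowFunction();
        entry.setName("computeSomething");
        entry.setFunctionClassName("mycompany.MyUtilityClass");  // placeholder
        entry.setFunctionMethodName("computeSomething");         // placeholder
        entry.setRethrowExceptions(true);  // re-throw instead of log-and-return-null

        Configuration configuration = new Configuration();
        configuration.getCompiler().addPlugInSingleRowFunction(entry);
        return configuration;
    }
}
```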
Use a virtual data window if you have a (large) external data store that you want to access as a named window. The access is transparent: There is no need to use special syntax or join syntax. All regular queries including subqueries, joins, on-merge, on-select, on-insert, on-delete, on-update and fire-and-forget are supported with virtual data windows.
There is no need to keep any data or events in memory with virtual data windows. The only requirement for virtual data windows is that all data rows returned are EventBean instances.
When implementing a virtual data window it is not necessary to send any events into the runtime or to use insert-into. The event content is simply assumed to exist and accessible to the runtime via the API implementation you provide.
The distribution ships with a sample virtual data window in the examples folder under the name virtualdw. The code snippets below are extracts from the example.
We use the term store here to mean a source set of data that is managed by the virtual data window. We use the term store row or just row to mean a single data item provided by the store. We use the term lookup to mean a read operation against the store returning zero, one or many rows.
Virtual data windows allow high-performance low-latency lookup by exposing all relevant statement access path information. This makes it possible for the virtual data window to choose the desired access method into its store.
The following steps are required to develop and use a virtual data window:
Implement the interface com.espertech.esper.common.client.hook.vdw.VirtualDataWindowForge. This class is used by the compiler.
Implement the interface com.espertech.esper.common.client.hook.vdw.VirtualDataWindowFactoryFactory. This class is referred to, by class name, by the compiler. It is used at runtime.
Implement the interface com.espertech.esper.common.client.hook.vdw.VirtualDataWindowFactory (used at runtime only).
Implement the interface com.espertech.esper.common.client.hook.vdw.VirtualDataWindow (used at runtime only).
Implement the interface com.espertech.esper.common.client.hook.vdw.VirtualDataWindowLookup (used at runtime only).
Register the factory class in the configuration.
Once you have completed above steps, the virtual data window is ready to use in statements.
From a threading perspective, virtual data window implementation classes must be thread-safe if objects are shared between multiple named windows. If no objects are shared between different named windows, so that each object is only used for the same named window and other named windows receive separate instances, the implementation classes need not be thread-safe.
Your application must first register the virtual data window factory as part of configuration:
Configuration config = new Configuration();
config.getCompiler().addPlugInVirtualDataWindow("sample", "samplevdw",
    SampleVirtualDataWindowForge.class.getName());
Your application may then create a named window backed by a virtual data window.
For example, assume that the SampleEvent event type is declared as follows:
create schema SampleEvent as (key1 string, key2 string, value1 int, value2 double)
The next statement creates a named window MySampleWindow that provides SampleEvent events and is backed by a virtual data window:
create window MySampleWindow.sample:samplevdw() as SampleEvent
You may then access the named window, same as any other named window, for example by subquery, join, on-action, fire-and-forget query or by consuming its insert and remove stream. While this example uses Map-type events, the example code is the same for POJO or other events.
Your application may obtain a reference to the virtual data window from the runtime context.
This code snippet looks up the virtual data window by the named window name:
try {
  return (VirtualDataWindow) runtime.getContext().lookup("/virtualdw/MySampleWindow");
} catch (NamingException e) {
  throw new RuntimeException("Failed to look up virtual data window, is it created yet?");
}
When your application registers a subquery, join or on-action query or executes a fire-and-forget query against a virtual data window, the runtime interacts with the virtual data window. The interaction is a two-step process.
At time of deployment (once), the runtime uses the information the compiler collected by analyzing the EPL where-clause, if present. It then creates a list of hash-index and binary tree (btree, i.e. sorted) index properties. It passes the property names that are queried as well as the operators (i.e. =, >, range etc.) to the virtual data window. The virtual data window returns a lookup strategy object to the runtime.
At time of statement execution (repeatedly as triggered), the runtime uses that lookup strategy object to execute a lookup. It passes to the lookup all actual key values (hash, btree including ranges) to make fast and efficient lookup achievable.
To explain in detail, assume that your application creates a statement with a subquery as follows:
select (select * from MySampleWindow where key1 = 'A1') from OtherEvent
At the time of compilation of the statement above the compiler analyzes the statement. It determines that the subquery queries a virtual data window. It determines from the where-clause that the lookup uses property key1 and hash-equals semantics. The runtime then provides this information as part of the VirtualDataWindowLookupContext passed to the getLookup method. Your application may inspect hash and btree properties and may determine the appropriate store access method to use.
The hash and btree property lookup information is for informational purposes, to enable fast and performant queries that return the smallest number of rows possible. Your implementation classes may use some or none of the information provided and may also instead return some or perhaps even all rows, as is practical to your implementation. The where-clause still remains in effect and gets evaluated on all rows that are returned by the lookup strategy.
Following the above example, the subquery executes once when an OtherEvent event arrives. At time of execution the runtime delivers the string value A1 to the VirtualDataWindowLookup lookup implementation provided by your application. The lookup object queries the store and returns store rows as EventBean instances.
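The lookup strategy for this example might be sketched as below; the store access in queryStore is a placeholder, and the use of the event factory to wrap store rows assumes Map-type events:

```java
import com.espertech.esper.common.client.EventBean;
import com.espertech.esper.common.client.hook.vdw.VirtualDataWindowContext;
import com.espertech.esper.common.client.hook.vdw.VirtualDataWindowLookup;

import java.util.Collections;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class SampleVirtualDataWindowLookup implements VirtualDataWindowLookup {
    private final VirtualDataWindowContext context;

    public SampleVirtualDataWindowLookup(VirtualDataWindowContext context) {
        this.context = context;
    }

    public Set<EventBean> lookup(Object[] keys, EventBean[] eventsPerStream) {
        // For the subquery above, keys[0] would hold the value "A1".
        Set<EventBean> result = new HashSet<EventBean>();
        for (Map<String, Object> row : queryStore(keys)) {
            // Convert each store row to an EventBean instance.
            result.add(context.getEventFactory().wrap(row));
        }
        return result;
    }

    private Iterable<Map<String, Object>> queryStore(Object[] keys) {
        return Collections.emptyList();  // hypothetical store access
    }
}
```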
As a second example, consider an EPL join statement as follows:
select * from MySampleWindow, MyTriggerEvent where key1 = trigger1 and key2 = trigger2
The compiler analyzes the statement and the runtime passes to the virtual data window the information that the lookup occurs on properties key1 and key2 under hash-equals semantics. When a MyTriggerEvent arrives, the runtime passes the actual values of the trigger1 and trigger2 properties of the current MyTriggerEvent to the lookup.
As a last example, consider a fire-and-forget query as follows:
select * from MySampleWindow where key1 = 'A2' and value1 between 0 and 1000
The compiler analyzes the statement and the runtime passes to the virtual data window the lookup information. The lookup occurs on property key1 under hash-equals semantics and on property value1 under btree-open-range semantics. When your application executes the fire-and-forget query the runtime passes A2 and the range endpoints 0 and 1000 to the lookup.
For more information, please consult the JavaDoc API documentation for the classes VirtualDataWindow, VirtualDataWindowLookupContext and VirtualDataWindowLookupFieldDesc.
For each named window that refers to the virtual data window, the compiler instantiates one instance of the forge at compile-time.
A virtual data window forge class is responsible for the following functions:
Implement the initialize method that accepts a virtual data window forge context object as a parameter.
Implement the getFactoryMode method that returns information on how to initialize the factory-factory class (the class that acts as a factory for virtual data window factories).
Implement the getUniqueKeyPropertyNames method that can return the set of property names that are unique keys, for the purpose of query planning.
The compiler instantiates a VirtualDataWindowForge instance for each named window created by create window. The compiler invokes the initialize method once in respect to the named window being created, passing a VirtualDataWindowForgeContext context object.
The sample code shown here can be found among the examples in the distribution under virtualdw:
public class SampleVirtualDataWindowForge implements VirtualDataWindowForge {
  public void initialize(VirtualDataWindowForgeContext initializeContext) {
  }

  public VirtualDataWindowFactoryMode getFactoryMode() {
    // The injection strategy defines how to obtain and configure the factory-factory.
    InjectionStrategy injectionStrategy =
        new InjectionStrategyClassNewInstance(SampleVirtualDataWindowFactoryFactory.class);

    // The managed-mode is the default. It uses the provided injection strategy.
    VirtualDataWindowFactoryModeManaged managed = new VirtualDataWindowFactoryModeManaged();
    managed.setInjectionStrategyFactoryFactory(injectionStrategy);
    return managed;
  }

  public Set<String> getUniqueKeyPropertyNames() {
    // let's assume there are no unique key property names
    return null;
  }
}
Your forge class must implement the getFactoryMode method which instructs the compiler how to obtain a factory class that returns a factory for creating virtual data window instances (a factory-factory). The class acting as the factory-factory will be SampleVirtualDataWindowFactoryFactory.
At deployment time, the runtime instantiates the factory-factory and obtains a factory for virtual data windows.
A virtual data window factory-factory class is responsible for the following functions:
Implement the createFactory method that accepts a factory-factory context and that returns the virtual data window factory.
The sample code shown here can be found among the examples in the distribution under virtualdw:
public class SampleVirtualDataWindowFactoryFactory implements VirtualDataWindowFactoryFactory {
  public VirtualDataWindowFactory createFactory(VirtualDataWindowFactoryFactoryContext ctx) {
    return new SampleVirtualDataWindowFactory();
  }
}
For each named window that refers to the virtual data window, the runtime instantiates one instance of the factory.
A virtual data window factory class is responsible for the following functions:
Implement the initialize method that accepts a virtual data window factory context object as a parameter.
Implement the create method that accepts a virtual data window context object as a parameter and returns a VirtualDataWindow implementation.
Implement the destroy method that gets called once when the named window is undeployed.
The runtime instantiates a VirtualDataWindowFactory instance for each named window created via create window. The runtime invokes the initialize method once in respect to the named window being created, passing a VirtualDataWindowFactoryContext context object.
If not using contexts, the runtime calls the create method once after calling the initialize method. If using contexts, the runtime calls the create method every time it allocates a context partition.
If using contexts and your virtual data window implementation operates thread-safe, you may return the same virtual data window implementation object for each context partition. If using contexts and your implementation object is not thread safe, return a separate thread-safe implementation object for each context partition.
The runtime invokes the destroy method once when the named window is undeployed. If not using contexts, the runtime calls the destroy method of the virtual data window implementation object before calling the destroy method on the factory object. If using contexts, the runtime calls the destroy method on each instance associated with a context partition at the time the associated context partition terminates.
The sample code shown here can be found among the examples in the distribution under virtualdw:
public class SampleVirtualDataWindowFactory implements VirtualDataWindowFactory {
  public void initialize(VirtualDataWindowFactoryContext factoryContext) {
  }

  public VirtualDataWindow create(VirtualDataWindowContext context) {
    return new SampleVirtualDataWindow(context);
  }

  public void destroy() {
    // cleanup can be performed here
  }

  public Set<String> getUniqueKeyPropertyNames() {
    // let's assume there are no unique key property names
    return null;
  }
}
Your factory class must implement the create method which receives a VirtualDataWindowContext object. This method is called once for each EPL that creates a virtual data window (see example create window above).
The VirtualDataWindowContext provides to your application:
String namedWindowName;   // Name of named window being created.
Object[] parameters;      // Any optional parameters provided as part of create-window.
EventType eventType;      // The event type of events.
EventBeanFactory eventFactory;  // A factory for creating EventBean instances from store rows.
VirtualDataWindowOutStream outputStream;  // For stream output to consuming statements.
AgentInstanceContext agentInstanceContext;  // Other statement information in statement context.
When using contexts you can decide whether your factory returns a new virtual data window for each context partition or returns the same virtual data window instance for all context partitions. Your extension code may refer to the named window name to identify the named window and may refer to the agent instance context that holds the agent instance id which is the id of the context partition.
A virtual data window implementation is responsible for the following functions:
Accept the lookup context object as a parameter and return the VirtualDataWindowLookup implementation.
Optionally, post insert and remove stream data.
Implement the destroy method, which the runtime calls for each context partition when the named window is stopped or destroyed, or once when a context partition is ended/terminated.
The sample code shown here can be found among the examples in the distribution under virtualdw.
The implementation class must implement the VirtualDataWindow interface like so:
public class SampleVirtualDataWindow implements VirtualDataWindow {
  private final VirtualDataWindowContext context;

  public SampleVirtualDataWindow(VirtualDataWindowContext context) {
    this.context = context;
  }
  ...
When the compiler compiles a statement and detects a virtual data window, the compiler compiles access path information and the runtime invokes the getLookup method, indicating hash and btree access path information by passing a VirtualDataWindowLookupContext context. The lookup method must return a VirtualDataWindowLookup implementation that the statement uses for all lookups until the statement is stopped or destroyed.
The sample implementation does not use the hash and btree access path information and simply returns a lookup object:
public VirtualDataWindowLookup getLookup(VirtualDataWindowLookupContext desc) {
  // Place any code that interrogates the hash-index and btree-index fields here.

  // Return the lookup strategy.
  return new SampleVirtualDataWindowLookup(context);
}
The runtime calls the update method when data changes because of on-merge, on-delete, on-update or insert-into. For example, if you have an on-merge statement that is triggered and that updates the virtual data window, the newData parameter receives the new (updated) event and the oldData parameter receives the event prior to the update. Your code may use these events to update the store or delete from the store, if needed.
If your application plans to consume data from the virtual data window, for example via select * from MySampleWindow, then the code must implement the update method to forward insert and remove stream events, as shown below, to receive the events in consuming statements. To post insert and remove stream data, use the VirtualDataWindowOutStream provided by the context object as follows.
public void update(EventBean[] newData, EventBean[] oldData) {
  // This sample simply posts into the insert and remove stream what is received.
  context.getOutputStream().update(newData, oldData);
}
Your application should not use VirtualDataWindowOutStream to post new events that originate from the store. The object is intended for use with on-action statements. Use insert-into instead for any new events that originate from the store.
Views in EPL are used to derive information from an event stream, and to represent data windows onto an event stream. This chapter describes how to plug-in a new, custom view.
The following steps are required to develop and use a custom view.
Implement a view forge class. View forges are compile-time classes that accept and check view parameters and refer to the appropriate view factory for the runtime.
Implement a view factory class. View factories are classes that instantiate the appropriate view class at runtime.
Implement a view class. A view class commonly represents a data window or derives new information from a stream at runtime.
Configure the view factory class supplying a view namespace and name in the compiler configuration.
The example view factory and view class that are used in this chapter can be found in the examples source folder in the OHLC (open-high-low-close) example. The class names are OHLCBarPlugInViewForge, OHLCBarPlugInViewFactory and OHLCBarPlugInView.
Views can make use of the runtime services available via StatementContext, for example:
The SchedulingService interface allows views to schedule timer callbacks to a view.
Section 21.4.4, “View Contract” outlines the requirements for correct behavior of your custom view within the runtime.
Note that custom views may use runtime services and APIs that can be subject to change between major releases. The runtime services discussed above and the view APIs are considered part of the runtime internal API and are only of limited stability. Please also consider contributing your custom view to the project by submitting the view code.
A view forge class is a compile-time class and is responsible for the following functions:
Accept zero, one or more view parameters. View parameters are themselves expressions. The view forge must validate the expressions.
Build the view factory class. At deployment-time this code executes and builds the view factory.
Provide information about the event type of events posted by the view.
View forge classes must implement the ViewFactoryForge interface. Additionally, a view forge class must implement the DataWindowViewForge interface if the view is a data window (retains events provided to it).
public class OHLCBarPlugInViewForge implements ViewFactoryForge { ...
Your view forge class must implement the setViewParameters method to accept view parameters and the attach method to attach the view to a stream:
public class OHLCBarPlugInViewForge implements ViewFactoryForge {
  private List<ExprNode> viewParameters;
  private ExprNode timestampExpression;
  private ExprNode valueExpression;
  private EventType eventType;

  public void setViewParameters(List<ExprNode> parameters, ViewForgeEnv viewForgeEnv, int streamNumber) throws ViewParameterException {
    this.viewParameters = parameters;
  }

  public void attach(EventType parentEventType, int streamNumber, ViewForgeEnv env) throws ViewParameterException {
    if (viewParameters.size() != 2) {
      throw new ViewParameterException("View requires two parameters: the expression returning timestamps and the expression supplying OHLC data points");
    }

    ExprNode[] validatedNodes = ViewForgeSupport.validate("OHLC view", parentEventType, viewParameters, false, env, streamNumber);
    timestampExpression = validatedNodes[0];
    valueExpression = validatedNodes[1];

    if ((timestampExpression.getForge().getEvaluationType() != long.class) && (timestampExpression.getForge().getEvaluationType() != Long.class)) {
      throw new ViewParameterException("View requires long-typed timestamp values in parameter 1");
    }
    if ((valueExpression.getForge().getEvaluationType() != double.class) && (valueExpression.getForge().getEvaluationType() != Double.class)) {
      throw new ViewParameterException("View requires double-typed values in parameter 2");
    }
    ....
After the compiler has supplied the view parameters to the forge, the compiler asks the view to attach to its parent and to validate any parameter expressions against the parent view's event type. If the view will be generating events of a different type than the events generated by the parent view, the view factory can allocate the new event type.
Finally, the compiler asks the view forge to generate code that initializes the view factory:
public CodegenExpression make(CodegenMethodScope parent, SAIFFInitializeSymbol symbols, CodegenClassScope classScope) {
  return new SAIFFInitializeBuilder(OHLCBarPlugInViewFactory.class, this.getClass(), "factory", parent, symbols, classScope)
      .exprnode("timestampExpression", timestampExpression)
      .exprnode("valueExpression", valueExpression)
      .build();
}
Use the internal SAIFFInitializeBuilder to build your view factory, providing it the expressions and other values it needs.
A view factory class is responsible for the following functions:
Implement initialization code when required.
Instantiate the actual view class.
Provide information about the event type of events posted by the view.
View factory classes implement the ViewFactory interface. Additionally, a view factory class must implement the DataWindowViewFactory interface if the view is a data window (retains events provided to it).
public class OHLCBarPlugInViewFactory implements ViewFactory { ...
The runtime initializes a view factory by calling its init method.
The runtime asks the view factory to create a view instance, and asks for the type of event generated by the view:
public View makeView(AgentInstanceViewFactoryChainContext agentInstanceViewFactoryContext) {
    return new OHLCBarPlugInView(this, agentInstanceViewFactoryContext);
}

public EventType getEventType() {
    return eventType;
}
A view class is responsible for the following functions:
The update method receives insert stream and remove stream events from its parent view.
The iterator method supplies an (optional) iterator to allow an application to pull or request results from an EPStatement.
View classes subclass ViewSupport. Additionally, a view class must implement the DataWindowView interface if the view is a data window (retains events provided to it).
public class OHLCBarPlugInView extends ViewSupport { ...
Your view's update method processes incoming (insert stream) and outgoing (remove stream) events posted by the parent view (if any), and provides incoming and outgoing events to child views. The convention required of your update method implementation is that the view releases any insert stream events (EventBean object references) that it generates as reference-equal remove stream events (EventBean object references) at a later time.
The view implementation must call child.update(...) to post outgoing insert and remove stream events. Similar to the update method, child.update takes insert and remove stream events as parameters.
A sample update method implementation is provided in the OHLC example.
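The central convention, that references posted to the insert stream later reappear reference-equal in the remove stream, can be sketched in plain Java. This is an illustration only, not the OHLC example's code: Object stands in for EventBean and the fixed-size window logic is an assumption made for the sketch.

```java
import java.util.ArrayDeque;

// Stand-in for the view convention: a fixed-size window that posts each
// retained reference exactly once as "insert" and later posts the SAME
// reference as "remove" when it is evicted.
public class WindowConventionSketch {
    private final ArrayDeque<Object> window = new ArrayDeque<>();
    private final int size;
    public Object lastInsert;   // last insert-stream posting
    public Object lastRemove;   // last remove-stream posting, or null

    public WindowConventionSketch(int size) {
        this.size = size;
    }

    // Mirrors the shape of a data window's update: the evicted reference is
    // reference-equal to an earlier insert-stream posting.
    public void update(Object insertEvent) {
        window.addLast(insertEvent);
        lastInsert = insertEvent;
        lastRemove = (window.size() > size) ? window.removeFirst() : null;
        // a real view would now call child.update(insertArray, removeArray)
    }
}
```

The sketch makes the memory-management contract visible: the remove stream never wraps a fresh object, it hands back the identical reference posted earlier.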
The update method must adhere to the following conventions, to prevent memory leaks and to enable correct behavior within the runtime:
A view implementation that posts events to the insert stream must post unique EventBean object references as insert stream events and cannot post the same EventBean object reference multiple times. The underlying event of an EventBean object reference can be the same object reference, however the EventBean object reference posted by the view into the insert stream must be a new instance for each insert stream event.
If the custom view posts a continuous insert stream, then the view must also post a continuous remove stream (second parameter to the child.update method). If the view does not post remove stream events, the runtime assumes unbound keep-all semantics.
EventBean events posted as remove stream events must be the same object references as the EventBean events posted as insert stream events by the view. Thus remove stream events posted by the view (the EventBean instances; this does not affect the underlying representation) must be reference-equal to insert stream events posted by the view as part of an earlier invocation of the update method, or of the same invocation of the update method.
EventBean events represent a unique observation. The values of the observation can be the same, thus the underlying representation of an EventBean event can be reused, however event property values must be kept immutable and not be subject to change.
Array elements of the insert and remove stream events must not carry null values. Array size must match the number of EventBean instances posted. It is recommended to use a null value rather than an empty zero-size array to indicate that there are no insert or remove stream events.
Your view implementation must implement the AgentInstanceStopCallback interface to receive a callback when the view gets destroyed.
Please refer to the sample views for a code sample on how to implement the iterator method.
In terms of multiple threads accessing view state, there is no need for your custom view factory or view implementation to perform any synchronization to protect internal state. The iterator of the custom view implementation also does not need to be thread-safe. The runtime ensures the custom view executes in the context of a single thread at a time. If your view uses shared external state, such external state must still be considered for synchronization when using multiple threads.
The view factory class name as well as the view namespace and name for the new view must be added to the compiler configuration. The configuration shown below is XML, however the same options are available through the configuration API:
<esper-configuration xmlns="http://www.espertech.com/schema/esper">
  <compiler>
    <plugin-view namespace="custom" name="ohlc"
                 forge-class="com.espertech.esper.example.ohlc.OHLCBarPlugInViewForge" />
  </compiler>
</esper-configuration>
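As a sketch, the equivalent registration through the configuration API might look as follows; the addPlugInView method on the compiler configuration section is assumed here, please verify the exact signature against the JavaDoc of your release:

```java
import com.espertech.esper.common.client.configuration.Configuration;

// Register the custom:ohlc view forge with the compiler configuration
Configuration configuration = new Configuration();
configuration.getCompiler().addPlugInView("custom", "ohlc",
    "com.espertech.esper.example.ohlc.OHLCBarPlugInViewForge");
```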
The new view is now ready to use in a statement:
select * from StockTick.custom:ohlc(timestamp, price)
Note that the view must implement additional interfaces if it acts as a data window view, or works in a grouping context, as discussed in detail below.
Your custom view may represent an expiry policy and may retain events and thus act as a data window view. In order to allow the compiler to validate that your view can be used with named windows, which allow only data window views, this section documents any additional requirement that your classes must fulfill.
Your view forge class must implement the DataWindowViewForge interface. This marker interface (no methods required) indicates that your views are data window views.
Your view factory class must implement the DataWindowViewFactory interface. This marker interface (no methods required) indicates that your views are data window views.
Your view class must implement the DataWindowView interface. This interface indicates that your view is a data window view and therefore eligible to be used in any construct that requires a data window view. The DataWindowView interface extends the ViewDataVisitable interface. Please provide an empty implementation method for the visitView method as required by ViewDataVisitable (the default behavior is sufficient).
Your custom view may compute derived information from the arriving stream, instead of retaining events, and thus act as a derived-value view.
Your view class should implement the DerivedValueView interface. This marker interface indicates that your view is a derived-value view, affecting correct behavior of the view when used in joins.
Aggregation functions are stateful functions that aggregate events, event property values or expression results. Examples of built-in aggregation functions are count(*), sum(price * volume), window(*) or maxby(volume).
EPL allows two different ways for your application to provide aggregation functions. We use the name aggregation single-function and aggregation multi-function for the two independent extension APIs for aggregation functions.
The aggregation single-function API is simple to use, however it imposes certain restrictions on how expressions that contain aggregation functions share state and how they are evaluated.
The aggregation multi-function API is more powerful and provides control over how expressions that contain aggregation functions share state and are evaluated.
The next table compares the two aggregation function extension APIs:
Table 21.1. Aggregation Function Extension APIs

| | Single-Function | Multi-Function |
|---|---|---|
| Return Value | Can only return a single value or object. Cannot return an EventBean event, a collection of EventBean events, or a collection or array of values for use with enumeration methods, for example. | Can return an EventBean event, a collection of EventBean events, or a collection or array of objects for use with enumeration methods or to access event properties. |
| Complexity of API | Simple (consists of 2 interfaces). | More complex (consists of 6 interfaces). |
| State Sharing | State and parameter evaluation are shared if multiple aggregation functions of the same name in the same statement (and context partition) take the exact same parameter expressions. | State and parameter evaluation are sharable when multiple aggregation functions of a related name (related through configuration) exist for the same statement (and context partition), according to a sharing-key provided by your API implementation. |
| Function Name | Each aggregation function expression receives its own factory object. | Multiple related aggregation function expressions share a single factory object. |
| Distinct Keyword | Handled by the runtime transparently depending on mode. | Indicated to the API implementation only. |
The following sections discuss developing an aggregation single-function first, followed by the subject of developing an aggregation multi-function.
The aggregation multi-function API is a powerful and lower-level API to extend the runtime. Any classes that are not part of the client package should be considered unstable and subject to change between minor and major releases.
This section describes the aggregation single-function extension API for providing aggregation functions.
The following steps are required to develop and use a custom aggregation single-function.
Implement an aggregation function forge by implementing the interface com.espertech.esper.common.client.hook.aggfunc.AggregationFunctionForge. This class provides compile-time information.
Implement an aggregation function factory by implementing the interface com.espertech.esper.common.client.hook.aggfunc.AggregationFunctionFactory (used at runtime).
Implement an aggregation function by implementing the interface com.espertech.esper.common.client.hook.aggfunc.AggregationFunction (used at runtime).
Register the aggregation single-function forge class with the compiler by supplying a function name, via the compiler configuration.
Custom aggregation functions can also be passed multiple parameters, as further described in Section 21.5.1.5, “Aggregation Single-Function: Accepting Multiple Parameters”. In the example below the aggregation function accepts a single parameter.
The code for the example aggregation function as shown in this chapter can be found in the runtime configuration example in the package com.espertech.esper.example.runtimeconfig by the name MyConcatAggregationFunction. The sample function simply concatenates string-type values.
An aggregation function forge class is only used at compile-time and is responsible for the following functions:
Implement a setFunctionName method that receives the function name.
Implement a validate method that validates the value type of the data points that the function must process.
Implement a getValueType method that returns the type of the aggregation value generated by the aggregation function instances. For example, the built-in count aggregation function returns Long.class as it generates long-typed values.
Implement a getAggregationFunctionMode method that provides information about the factory class to the compiler.
Aggregation forge classes implement the interface AggregationFunctionForge:
public class MyConcatAggregationFunctionForge implements AggregationFunctionForge { ...
The compiler constructs one instance of the aggregation function forge class for each time the function is listed in a statement; however, the compiler may decide to reduce the number of aggregation forge instances if it finds equivalent aggregations.
The aggregation function forge instance receives the aggregation function name via the setFunctionName method.
The sample concatenation function forge provides an empty setFunctionName method:

public void setFunctionName(String functionName) {
    // no action taken
}
An aggregation function forge must provide an implementation of the validate method that is passed an AggregationFunctionValidationContext validation context object. Within the validation context you find the result type of each of the parameter expressions to the aggregation function as well as information about constant values and data window use. Please see the JavaDoc API documentation for a comprehensive list of validation context information.
Since the example concatenation function requires string types it implements a type check:

public void validate(AggregationFunctionValidationContext validationContext) {
    if ((validationContext.getParameterTypes().length != 1) ||
            (validationContext.getParameterTypes()[0] != String.class)) {
        throw new IllegalArgumentException("Concat aggregation requires a single parameter of type String");
    }
}
In order for the compiler to validate the type returned by the aggregation function against the types expected by enclosing expressions, the getValueType method must return the result type of any values produced by the aggregation function:

public Class getValueType() {
    return String.class;
}
Finally, the forge implementation must provide a getAggregationFunctionMode method that returns information about the factory. The compiler uses this information to build the aggregation function factory.

public AggregationFunctionMode getAggregationFunctionMode() {
    // Inject a factory by using "new"
    InjectionStrategy injectionStrategy = new InjectionStrategyClassNewInstance(MyConcatAggregationFunctionFactory.class);

    // The managed mode means there is no need to write code that generates code
    AggregationFunctionModeManaged mode = new AggregationFunctionModeManaged();
    mode.setInjectionStrategyAggregationFunctionFactory(injectionStrategy);
    return mode;
}
An aggregation function factory class is responsible for the following functions:
Implement a newAggregator method that instantiates and returns an aggregation function instance.
Aggregation function factory classes implement the interface AggregationFunctionFactory:

public class MyConcatAggregationFunctionFactory implements AggregationFunctionFactory { ...
The runtime constructs the aggregation function factory at time of deployment.
The factory must provide a newAggregator method that returns instances of AggregationFunction. The runtime invokes this method for each new aggregation state to be allocated.

public AggregationFunction newAggregator() {
    return new MyConcatAggregationFunction();
}
An aggregation function class is responsible for the following functions:
Implement an enter method that the runtime invokes to add a data point into the aggregation, when an event enters a data window.
Implement a leave method that the runtime invokes to remove a data point from the aggregation, when an event leaves a data window.
Implement a getValue method that returns the current value of the aggregation.
Implement a clear method that resets the current value.
Aggregation function classes implement the interface AggregationFunction:

public class MyConcatAggregationFunction implements AggregationFunction { ...
The class that provides the aggregation and implements AggregationFunction does not have to be threadsafe.
The constructor initializes the aggregation function:

public class MyConcatAggregationFunction implements AggregationFunction {
    private final static char DELIMITER = ' ';
    private StringBuilder builder;
    private String delimiter;

    public MyConcatAggregationFunction() {
        builder = new StringBuilder();
        delimiter = "";
    }
    ...
The enter method adds a datapoint to the current aggregation value. The example enter method shown below adds a delimiter and the string value to a string buffer:

public void enter(Object value) {
    if (value != null) {
        builder.append(delimiter);
        builder.append(value.toString());
        delimiter = String.valueOf(DELIMITER);
    }
}
Conversely, the leave method removes a datapoint from the current aggregation value. The example leave method removes from the string buffer:

public void leave(Object value) {
    if (value != null) {
        builder.delete(0, value.toString().length() + 1);
    }
}
Finally, the runtime obtains the current aggregation value by means of the getValue method:

public Object getValue() {
    return builder.toString();
}
For on-demand queries the aggregation function must support resetting its value to empty or start values. Implement the clear function to reset the value as shown below:

public void clear() {
    builder = new StringBuilder();
    delimiter = "";
}
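The enter/leave/getValue/clear lifecycle can be exercised outside the runtime. The following is a plain-Java sketch of the same concatenation logic, deliberately without the Esper AggregationFunction interface so it runs standalone:

```java
// Standalone sketch of the concat aggregation lifecycle (not the Esper class
// itself): enter appends, leave trims from the front, clear resets.
public class ConcatSketch {
    private StringBuilder builder = new StringBuilder();
    private String delimiter = "";

    public void enter(Object value) {
        if (value != null) {
            builder.append(delimiter).append(value.toString());
            delimiter = " ";
        }
    }

    public void leave(Object value) {
        if (value != null) {
            // removes the leading value plus one delimiter character
            builder.delete(0, value.toString().length() + 1);
        }
    }

    public String getValue() {
        return builder.toString();
    }

    public void clear() {
        builder = new StringBuilder();
        delimiter = "";
    }
}
```

As events enter and leave a data window such as #length(3), the runtime would drive exactly this sequence of calls against the aggregation state.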
The aggregation function class name as well as the function name for the new aggregation function must be added to the compiler configuration. The configuration shown below is XML, however the same options are available through the configuration API:
<esper-configuration xmlns="http://www.espertech.com/schema/esper">
  <compiler>
    <plugin-aggregation-function name="concat"
        forge-class="com.espertech.esper.example.runtimeconfig.MyConcatAggregationFunctionForge" />
  </compiler>
</esper-configuration>
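A sketch of the equivalent registration through the configuration API; the addPlugInAggregationFunctionForge method on the compiler configuration section is assumed here, please confirm against the JavaDoc of your release:

```java
import com.espertech.esper.common.client.configuration.Configuration;

// Register the "concat" aggregation function forge with the compiler
Configuration configuration = new Configuration();
configuration.getCompiler().addPlugInAggregationFunctionForge("concat",
    "com.espertech.esper.example.runtimeconfig.MyConcatAggregationFunctionForge");
```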
The new aggregation function is now ready to use in a statement:
select concat(symbol) from StockTick#length(3)
Your plug-in aggregation function may accept multiple parameters. You must, however, provide a different mode:

public AggregationFunctionMode getAggregationFunctionMode() {
    InjectionStrategy injectionStrategy = new InjectionStrategyClassNewInstance(SupportCountBackAggregationFunctionFactory.class);

    AggregationFunctionModeMultiParam multiParam = new AggregationFunctionModeMultiParam();
    multiParam.setInjectionStrategyAggregationFunctionFactory(injectionStrategy);
    return multiParam;
}
For instance, assume an aggregation function rangeCount that counts all values that fall into a range of values. The EPL that calls this function and provides lower and upper bounds of 1 and 10 is:

select rangeCount(1, 10, myValue) from MyEvent

The enter method of the plug-in aggregation function may look as follows:

public void enter(Object value) {
    Object[] params = (Object[]) value;
    int lower = (Integer) params[0];
    int upper = (Integer) params[1];
    int val = (Integer) params[2];
    if ((val >= lower) && (val <= upper)) {
        count++;
    }
}
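The multi-parameter convention, that all parameters arrive as a single Object[] in declaration order, can be checked with a standalone plain-Java version of the same enter logic (a sketch, not the Esper class):

```java
// Standalone sketch of the rangeCount enter logic: the runtime passes
// (lower, upper, value) packed into one Object[] per data point.
public class RangeCountSketch {
    public int count;

    public void enter(Object value) {
        Object[] params = (Object[]) value;
        int lower = (Integer) params[0];
        int upper = (Integer) params[1];
        int val = (Integer) params[2];
        if (val >= lower && val <= upper) {
            count++;
        }
    }
}
```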
Your plug-in aggregation function may want to validate parameter types or may want to know which parameters are constant-value expressions. Constant-value expressions are evaluated only once by the runtime and could therefore be cached by your aggregation function for performance reasons. The runtime provides constant-value information as part of the AggregationFunctionValidationContext passed to the validate method.
When using AggregationFunctionModeManaged the runtime already takes care of filters.
When using AggregationFunctionModeMultiParam, the compiler takes the filter named parameter's filter expression as a boolean-type value and the runtime provides the value to your enter method as the last value in the parameter array.
For instance, assume an aggregation function concat that receives a word value and that has a filter expression as parameters:

select concat(word, filter: word not like '%jim%') from MyWordEvent

The enter method of the plug-in aggregation function may look as follows:

public void enter(Object value) {
    Object[] arr = (Object[]) value;
    Boolean pass = (Boolean) arr[1];
    if (pass != null && pass) {
        buffer.append(arr[0].toString());
    }
}
Your code can obtain the actual filter expression from the AggregationFunctionValidationContext that is passed to the validate method and that returns the named parameters via getNamedParameters.
When using AggregationFunctionModeManaged the runtime already takes care of distinct.
When using AggregationFunctionModeMultiParam your application code must determine and process distinct.
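One way to handle distinct yourself under AggregationFunctionModeMultiParam is to keep a per-value reference count so that a value only affects the aggregate on its first enter and its last leave. The following is a hedged plain-Java sketch, not code from the Esper distribution:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: distinct handling for a multi-param aggregation. A value changes
// the distinct count only when it is first seen (enter) and when its last
// occurrence leaves the data window (leave).
public class DistinctCountSketch {
    private final Map<Object, Integer> refs = new HashMap<>();
    public int distinctCount;

    public void enter(Object value) {
        int count = refs.merge(value, 1, Integer::sum);
        if (count == 1) {            // first occurrence of this value
            distinctCount++;
        }
    }

    public void leave(Object value) {
        Integer count = refs.merge(value, -1, Integer::sum);
        if (count != null && count == 0) {   // last occurrence left
            refs.remove(value);
            distinctCount--;
        }
    }
}
```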
When the custom aggregation function returns an object as a return value, the EPL can use parentheses and the dot-operator to invoke methods on the return value.
The following example assumes that the myAggregation custom aggregation function returns an object that has getValueOne and getValueTwo methods:
select (myAggregation(myValue)).getValueOne(), (myAggregation(myValue)).getValueTwo() from MyEvent
Since the above EPL aggregates the same value, the runtime internally uses a single aggregation to represent the current value of myAggregation (and not two instances of the aggregation, even though myAggregation is listed twice).
This section introduces the aggregation multi-function API. Please refer to the JavaDoc for more complete class and method-level documentation.
Among the examples is an example use of the aggregation multi-function API in the example named Cycle-Detect. Cycle-Detect takes incoming transaction events that have from-account and to-account fields. The example detects a cycle in the transactions between accounts in order to detect a possible transaction fraud. Please note that the graph and cycle detection logic of the example is not part of the distribution: the example utilizes the jgrapht library.
In the Cycle-Detect example, the vertices of a graph are the account numbers, for example the account numbers Acct-1, Acct-2 and Acct-3. In the graph the edges are transaction events that identify a from-account and a to-account. An example edge is {from:Acct-1, to:Acct-2}.
An example cycle is therefore in the three transactions {from:Acct-1, to:Acct-2}, {from:Acct-2, to:Acct-3} and {from:Acct-3, to:Acct-1}.
The code for the example aggregation multi-function as shown in this chapter can be found in the Cycle-Detect example in the package com.espertech.esper.example.cycledetect.
The example provides two aggregation functions named cycledetected and cycleoutput:
The cycledetected function returns a boolean value indicating whether a graph cycle was found.
The cycleoutput function outputs the vertices (account numbers) that are part of the graph cycle.
In the Cycle-Detect example, the following statement utilizes the two functions cycledetected and cycleoutput that share the same graph state to detect a cycle among the last 1000 events:
@Name('CycleDetector') select cycleoutput() as cyclevertices from TransactionEvent#length(1000) having cycledetected(fromAcct, toAcct)
If instead the goal is to run graph cycle detection every 1 second (and not upon arrival of a new event), this sample statement uses a pattern to trigger cycle detection:
@Name('CycleDetector') select (select cycleoutput(fromAcct, toAcct) from TransactionEvent#length(1000)) as cyclevertices from pattern [every timer:interval(1)]
The following steps are required to develop and use a custom aggregation multi-function.
Implement an aggregation multi-function forge by implementing the interface com.espertech.esper.common.client.hook.aggmultifunc.AggregationMultiFunctionForge.
Implement one or more handlers for aggregation functions by implementing the interface com.espertech.esper.common.client.hook.aggmultifunc.AggregationMultiFunctionHandler.
Implement an aggregation state key by implementing the interface com.espertech.esper.common.client.hook.aggmultifunc.AggregationMultiFunctionStateKey.
Implement an aggregation state factory by implementing the interface com.espertech.esper.common.client.hook.aggmultifunc.AggregationMultiFunctionStateFactory.
Implement an aggregation state holder by implementing the interface com.espertech.esper.common.client.hook.aggmultifunc.AggregationMultiFunctionState.
Implement a state accessor factory by implementing the interface com.espertech.esper.common.client.hook.aggmultifunc.AggregationMultiFunctionAccessorFactory.
Implement a state accessor by implementing the interface com.espertech.esper.common.client.hook.aggmultifunc.AggregationMultiFunctionAccessor.
For use with tables, implement an agent factory by implementing the interface com.espertech.esper.common.client.hook.aggmultifunc.AggregationMultiFunctionAgentFactory.
For use with tables, implement an agent by implementing the interface com.espertech.esper.common.client.hook.aggmultifunc.AggregationMultiFunctionAgent.
For use with aggregation methods, implement an aggregation method factory by implementing the interface com.espertech.esper.common.client.hook.aggmultifunc.AggregationMultiFunctionAggregationMethodFactory.
For use with aggregation methods, implement an aggregation method by implementing the interface com.espertech.esper.common.client.hook.aggmultifunc.AggregationMultiFunctionAggregationMethod.
Register the aggregation multi-function forge class with the compiler by supplying one or more function names, via the compiler configuration file or the runtime and static configuration API.
An aggregation multi-function forge class is a compile-time class responsible for the following functions:
Implement the addAggregationFunction method that receives an invocation for each aggregation function declared in the statement that matches any of the function names provided at configuration time.
Implement the validateGetHandler method that receives an invocation for each aggregation function to be validated in the statement that matches any of the function names provided at configuration time.
Aggregation multi-function forge classes implement the interface AggregationMultiFunctionForge:
public class CycleDetectorAggregationForge implements AggregationMultiFunctionForge { ...
The compiler constructs a single instance of the aggregation multi-function forge class that is shared for all aggregation function expressions in a statement that have one of the function names provided in the configuration object.
The compiler invokes the addAggregationFunction method at the time it compiles a statement. The method receives a declaration-time context object that provides the function name as well as additional information.
The sample Cycle-Detect forge class provides an empty addAggregationFunction method:

public void addAggregationFunction(AggregationMultiFunctionDeclarationContext declarationContext) {
    // provides an opportunity to inspect where used
}
The compiler invokes the validateGetHandler method at the time of expression validation. It passes an AggregationMultiFunctionValidationContext validation context object that contains the actual parameter expressions. Please see the JavaDoc API documentation for a comprehensive list of validation context information.
The validateGetHandler method must return a handler object that implements the AggregationMultiFunctionHandler interface. Return a handler object for each aggregation function expression according to the aggregation function name and its parameters as provided in the validation context.
The example cycledetect function takes two parameters that provide the cycle edge (from-account and to-account):

public AggregationMultiFunctionHandler validateGetHandler(AggregationMultiFunctionValidationContext validationContext) {
    if (validationContext.getParameterExpressions().length == 2) {
        fromExpression = validationContext.getParameterExpressions()[0];
        toExpression = validationContext.getParameterExpressions()[1];
    }
    return new CycleDetectorAggregationHandler(this, validationContext);
}
An aggregation multi-function handler class is a compile-time class that must implement the AggregationMultiFunctionHandler interface and is responsible for the following functions:
Implement the getReturnType method that returns information about the type of return values provided.
Implement the getAggregationStateUniqueKey method that provides a key object used by the compiler to determine which aggregation functions share state.
Implement the getStateMode method that returns information to the compiler that the compiler uses to initialize the state factory at deployment time.
Implement the getAccessorMode method that returns information to the compiler that the compiler uses to initialize the accessor factory at deployment time.
Implement the getAgentMode method that returns information to the compiler that the compiler uses to initialize the agent factory at deployment time, for use with tables.
Implement the getTableReaderMode method that returns information to the compiler that the compiler uses to initialize the table reader factory at deployment time, for use with tables.
In the Cycle-Detect example, the class CycleDetectorAggregationHandler is the handler for all aggregation functions:

public class CycleDetectorAggregationHandler implements AggregationMultiFunctionHandler { ...
The getReturnType method provided by the handler instructs the compiler about the return type of each aggregation accessor. The class EPType holds return type information.
In the Cycle-Detect example the cycledetected function returns a single boolean value. The cycleoutput function returns a collection of vertices:

public EPType getReturnType() {
    if (validationContext.getFunctionName().toLowerCase(Locale.ENGLISH).equals(CycleDetectorConstant.CYCLEOUTPUT_NAME)) {
        return EPTypeHelper.collectionOfSingleValue(forge.getFromExpression().getForge().getEvaluationType());
    }
    return EPTypeHelper.singleValue(Boolean.class);
}
The compiler invokes the getAggregationStateUniqueKey method to determine whether multiple aggregation function expressions in the same statement can share the same aggregation state or should receive different aggregation state instances. The getAggregationStateUniqueKey method must return an instance of AggregationMultiFunctionStateKey.
The compiler uses equals-semantics (the hashCode and equals methods) to determine whether multiple aggregation functions share the state object. If the key object returned for each aggregation function by the handler is an equal key object, then the compiler shares aggregation state between such aggregation functions for the same statement and context partition.
In the Cycle-Detect example the state is shared, which is achieved by simply returning the same key instance:

private static final AggregationMultiFunctionStateKey CYCLE_KEY = new AggregationMultiFunctionStateKey() {};

public AggregationMultiFunctionStateKey getAggregationStateUniqueKey() {
    return CYCLE_KEY;
}
The compiler invokes the getStateMode method to obtain an instance of AggregationMultiFunctionStateMode. The state mode is responsible for obtaining and configuring an aggregation state factory instance at time of deployment.
In the Cycle-Detect example the method passes the expression evaluators providing the from-account and to-account expressions to the state factory:

public AggregationMultiFunctionStateMode getStateMode() {
    AggregationMultiFunctionStateModeManaged managed = new AggregationMultiFunctionStateModeManaged();
    InjectionStrategyClassNewInstance injection = new InjectionStrategyClassNewInstance(CycleDetectorAggregationStateFactory.class);
    injection.addExpression("from", forge.getFromExpression());
    injection.addExpression("to", forge.getToExpression());
    managed.setInjectionStrategyAggregationStateFactory(injection);
    return managed;
}
The compiler invokes the getAccessorMode method to obtain an instance of AggregationMultiFunctionAccessorMode. The accessor mode is responsible for obtaining and configuring an accessor factory instance at time of deployment.
The getAccessorMode method provides information about the accessor factories according to whether the aggregation function name is cycledetected or cycleoutput:

public AggregationMultiFunctionAccessorMode getAccessorMode() {
    Class accessor;
    if (validationContext.getFunctionName().toLowerCase(Locale.ENGLISH).equals(CycleDetectorConstant.CYCLEOUTPUT_NAME)) {
        accessor = CycleDetectorAggregationAccessorOutputFactory.class;
    } else {
        accessor = CycleDetectorAggregationAccessorDetectFactory.class;
    }
    AggregationMultiFunctionAccessorModeManaged managed = new AggregationMultiFunctionAccessorModeManaged();
    InjectionStrategyClassNewInstance injection = new InjectionStrategyClassNewInstance(accessor);
    managed.setInjectionStrategyAggregationAccessorFactory(injection);
    return managed;
}
An aggregation multi-function state factory class must implement the AggregationMultiFunctionStateFactory
interface and is responsible for the following functions:
Implement the newState method that returns an aggregation state holder.
The runtime invokes the newState method to obtain a new aggregation state instance before applying aggregation state. If using group by in your statement, the runtime invokes the newState method to obtain a state holder for each group.
In the Cycle-Detect example, the class CycleDetectorAggregationStateFactory is the state factory for all aggregation functions:
public class CycleDetectorAggregationStateFactory implements AggregationMultiFunctionStateFactory {
    private ExprEvaluator from;
    private ExprEvaluator to;

    public AggregationMultiFunctionState newState(AggregationMultiFunctionStateFactoryContext ctx) {
        return new CycleDetectorAggregationState(this);
    }

    public void setFrom(ExprEvaluator from) { this.from = from; }
    public void setTo(ExprEvaluator to) { this.to = to; }
    public ExprEvaluator getFrom() { return from; }
    public ExprEvaluator getTo() { return to; }
}
An aggregation multi-function state class must implement the AggregationMultiFunctionState interface and is responsible for the following functions:
Implement the applyEnter method that enters events, event properties or computed values.
Implement the applyLeave method that can remove events or computed values.
Implement the clear method to clear state.
In the Cycle-Detect example, the class CycleDetectorAggregationState is the state for all aggregation functions. Please review the example for more information.
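The applyEnter, applyLeave and clear responsibilities can be sketched in plain Java. The class below is a simplified, hypothetical stand-in for the example's state class, not its actual source: it maintains a directed graph of from-account to to-account transfers and detects a cycle with a depth-first search.

```java
import java.util.*;

// Simplified sketch (not the actual CycleDetectorAggregationState): applyEnter
// would call addEdge, applyLeave would call removeEdge, and clear resets state.
class CycleStateSketch {
    private final Map<String, List<String>> edges = new HashMap<>();

    public void addEdge(String from, String to) {
        edges.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    public void removeEdge(String from, String to) {
        List<String> targets = edges.get(from);
        if (targets != null) {
            targets.remove(to);
            if (targets.isEmpty()) {
                edges.remove(from);
            }
        }
    }

    public void clear() {
        edges.clear();
    }

    // returns true if the transfer graph currently contains a cycle
    public boolean hasCycle() {
        Set<String> visited = new HashSet<>();
        Set<String> onStack = new HashSet<>();
        for (String node : new ArrayList<>(edges.keySet())) {
            if (dfs(node, visited, onStack)) {
                return true;
            }
        }
        return false;
    }

    private boolean dfs(String node, Set<String> visited, Set<String> onStack) {
        if (onStack.contains(node)) {
            return true;  // back-edge found: a cycle
        }
        if (!visited.add(node)) {
            return false; // already fully explored
        }
        onStack.add(node);
        for (String next : edges.getOrDefault(node, Collections.emptyList())) {
            if (dfs(next, visited, onStack)) {
                return true;
            }
        }
        onStack.remove(node);
        return false;
    }
}
```

A cycle appears once a transfer chain such as A→B→C is closed by a C→A transfer, and disappears again when that edge is removed.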
An aggregation multi-function accessor factory class must implement the AggregationMultiFunctionAccessorFactory interface and is responsible for the following functions:
Implement the newAccessor method that returns a new accessor.
In the Cycle-Detect example, the class CycleDetectorAggregationAccessorDetectFactory returns the accessor like so:
public class CycleDetectorAggregationAccessorDetectFactory implements AggregationMultiFunctionAccessorFactory {
    public AggregationMultiFunctionAccessor newAccessor(AggregationMultiFunctionAccessorFactoryContext ctx) {
        return new CycleDetectorAggregationAccessorDetect();
    }
}
An aggregation multi-function accessor class must implement the AggregationMultiFunctionAccessor interface and is responsible for the following functions:
Implement the Object getValue(AggregationMultiFunctionState state, ...) method that returns a result object for the aggregation state.
Implement the Collection<EventBean> getEnumerableEvents(AggregationMultiFunctionState state, ...) method that returns a collection of events for enumeration, if applicable (or null).
Implement the EventBean getEnumerableEvent(AggregationMultiFunctionState state, ...) method that returns an event, if applicable (or null).
Implement the Collection getEnumerableScalar(AggregationMultiFunctionState state, ...) method that returns a collection of scalar values, if applicable (or null).
In the Cycle-Detect example, the class CycleDetectorAggregationAccessorDetect returns the state for the cycledetected aggregation function and the CycleDetectorAggregationAccessorOutput returns the state for the cycleoutput aggregation function.
An aggregation multi-function configuration can receive one or multiple function names. You must also set a factory class name.
The sample XML snippet below configures an aggregation multi-function that is associated with the function names cycledetected and cycleoutput.
<esper-configuration xmlns="http://www.espertech.com/schema/esper">
    <compiler>
        <plugin-aggregation-multifunction function-names="cycledetected,cycleoutput"
            forge-class="com.espertech.esper.example.cycledetect.CycleDetectorAggregationFactory"/>
    </compiler>
</esper-configuration>
The next example uses the configuration API to register the same:
String[] functionNames = new String[] {"cycledetected", "cycleoutput"};
ConfigurationPlugInAggregationMultiFunction config = new ConfigurationPlugInAggregationMultiFunction(functionNames, CycleDetectorAggregationFactory.class.getName());
Configuration configuration = new Configuration();
configuration.getCompiler().addPlugInAggregationMultiFunction(config);
The runtime shares an AggregationAccessor instance between threads. The accessor should be designed to be stateless and should not use locking of any kind in the AggregationAccessor implementation unless your implementation uses other state. Since the runtime passes an aggregation state instance to the accessor, the accessor is thread-safe as long as it relies only on the aggregation state passed to it.
The runtime does not share an AggregationState instance between threads. There is no need to use locking of any kind in the AggregationState implementation unless your implementation uses other state.
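The threading contract above can be illustrated with a plain-Java sketch. The MutableSum and SumAccessor types here are hypothetical stand-ins, not Esper classes: the accessor holds no fields of its own and derives its result only from the state passed in, so one shared instance can safely serve many threads, while each state instance belongs to a single thread and needs no locking.

```java
// Hypothetical stand-ins illustrating the threading contract described above.
class AccessorThreadingSketch {
    // Per-partition state: not shared between threads, hence no synchronization.
    static class MutableSum {
        private long sum;
        void applyEnter(long value) { sum += value; }
        void applyLeave(long value) { sum -= value; }
        long current() { return sum; }
    }

    // Shared accessor: stateless (no fields), reads only the state it is given.
    static class SumAccessor {
        long getValue(MutableSum state) {
            return state.current();
        }
    }

    static long demo() {
        SumAccessor shared = new SumAccessor(); // one instance serves all partitions
        MutableSum stateA = new MutableSum();
        MutableSum stateB = new MutableSum();
        stateA.applyEnter(10);
        stateB.applyEnter(5);
        stateA.applyLeave(3);
        // 7 from stateA plus 5 from stateB
        return shared.getValue(stateA) + shared.getValue(stateB);
    }
}
```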
Tables allow columns to hold aggregation state including the state for multi-function aggregations. This section provides API pointers.
When a statement accesses a table column that declares aggregation state of a multi-function aggregation, the AggregationMultiFunctionValidationContext contains an optionalTableColumnRead field that provides information about the table column.
To find out the statement type, such as to determine whether the current statement is a create-table statement, use context.getValidationContext().getExprEvaluatorContext().getStatementType().
To find out whether the statement aggregates into a table, use context.getValidationContext().getIntoTableName(), which returns the table name or null if not aggregating into a table.
The compiler uses AggregationMultiFunctionStateKey to determine whether an aggregation function listed with into table is compatible with the aggregation type that a table column declares. The equals method of the object must return true for compatible and false for incompatible.
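A hypothetical sketch of such a key follows; it is plain Java, and the fromProperty/toProperty parameters are illustrative assumptions. In real code the class would implement Esper's AggregationMultiFunctionStateKey, and compatibility reduces to equals (with a matching hashCode): two keys compare equal exactly when the declared parameters make the aggregation states interchangeable.

```java
import java.util.Objects;

// Hypothetical key sketch: in real code this class would implement Esper's
// AggregationMultiFunctionStateKey; the fromProperty/toProperty fields are
// illustrative stand-ins for whatever parameters determine state compatibility.
class StateKeySketch {
    private final String fromProperty;
    private final String toProperty;

    StateKeySketch(String fromProperty, String toProperty) {
        this.fromProperty = fromProperty;
        this.toProperty = toProperty;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof StateKeySketch)) return false;
        StateKeySketch other = (StateKeySketch) o;
        return fromProperty.equals(other.fromProperty)
            && toProperty.equals(other.toProperty);
    }

    @Override
    public int hashCode() {
        return Objects.hash(fromProperty, toProperty);
    }
}
```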
Your handler may provide agent and aggregation method modes. Please follow the JavaDoc or inspect the regression test suite.
The filter expression is passed to you in PlugInAggregationMultiFunctionValidationContext as part of getNamedParameters under the name filter. When used with tables, the filter expression is part of PlugInAggregationMultiFunctionAgentContext.
Your application must invoke the filter expression as the runtime does not evaluate the filter expression for you. For example:
ExprEvaluator filterEval = validationContext.getNamedParameters().get("filter").get(0).getExprEvaluator();
public void applyEnter(EventBean[] eventsPerStream, ExprEvaluatorContext exprEvaluatorContext) {
    Boolean pass = (Boolean) filterEval.evaluate(eventsPerStream, true, exprEvaluatorContext); // note: pass "false" for applyLeave
    if (pass != null && pass) {
        Object value = valueEval.evaluate(eventsPerStream, true, exprEvaluatorContext); // note: pass "false" for applyLeave
        // do something
    }
}
Pattern guards are pattern objects that control the lifecycle of the guarded sub-expression, and can filter the events fired by the subexpression.
The following steps are required to develop and use a custom guard object.
Implement a guard forge class, responsible for compile-time guard information.
Implement a guard factory class, responsible for creating guard object instances at runtime.
Implement a guard class (used at runtime).
Register the guard forge class with the compiler by supplying a namespace and name, via the compiler configuration.
The code for the example guard object as shown in this chapter can be found in the test source folder in the package com.espertech.esper.regressionlib.support.extend.pattern by the name MyCountToPatternGuardForge. The sample guard discussed here counts the number of events occurring up to a maximum number of events, and ends the sub-expression when that maximum is reached.
Some of the APIs that you use to implement a pattern guard are internal APIs that are not stable and may change between releases. The client package contains all the stable interface classes.
A guard forge class is only used by the compiler and is responsible for the following functions:
Implement a setGuardParameters method that takes guard parameters, which are themselves expressions.
Implement a collectSchedule method that collects guard schedule objects, if any.
Implement a makeCodegen method that provides the code to construct a guard factory at time of deployment.
Guard forge classes implement the GuardForge interface:
public class MyCountToPatternGuardForge implements GuardForge { ...
The compiler constructs one instance of the guard forge class for each time the guard is listed in a statement.
The guard forge class implements the setGuardParameters method that is passed the parameters to the guard as supplied by the statement. It verifies the guard parameters, similar to the code snippet shown next. Our example counter guard takes a single numeric parameter:
public void setGuardParameters(List<ExprNode> guardParameters, MatchedEventConvertorForge convertor, StatementCompileTimeServices services) throws GuardParameterException {
    String message = "Count-to guard takes a single integer-value expression as parameter";
    if (guardParameters.size() != 1) {
        throw new GuardParameterException(message);
    }
    Class paramType = guardParameters.get(0).getForge().getEvaluationType();
    if (paramType != Integer.class && paramType != int.class) {
        throw new GuardParameterException(message);
    }
    this.numCountToExpr = guardParameters.get(0);
    this.convertor = convertor;
}
The makeCodegen method is called by the compiler to receive the code that builds a guard factory. Use the SAIFFInitializeBuilder to build factory initialization code:
public CodegenExpression makeCodegen(CodegenMethodScope parent, SAIFFInitializeSymbol symbols, CodegenClassScope classScope) {
    SAIFFInitializeBuilder builder = new SAIFFInitializeBuilder(MyCountToPatternGuardFactory.class, this.getClass(), "guardFactory", parent, symbols, classScope);
    return builder.exprnode("numCountToExpr", numCountToExpr)
        .expression("convertor", convertor.makeAnonymous(builder.getMethod(), classScope))
        .build();
}
A guard factory class is responsible for the following functions:
Implement a makeGuard method that constructs a new guard instance.
Guard factory classes implement the GuardFactory interface:
public class MyCountToPatternGuardFactory implements GuardFactory { ...
The runtime obtains an instance of the guard factory class at time of deployment.
The makeGuard method is called by the runtime to create a new guard instance. The example makeGuard method shown below passes the maximum count of events to the guard instance. It also passes a Quitable implementation to the guard instance. The guard uses Quitable to indicate that the sub-expression contained within must stop (quit) listening for events.
public Guard makeGuard(PatternAgentInstanceContext context, MatchedEventMap beginState, Quitable quitable, Object guardState) {
    EventBean[] events = convertor == null ? null : convertor.convert(beginState);
    Object parameter = PatternExpressionUtil.evaluateChecked("Count-to guard", numCountToExpr, events, context.getAgentInstanceContext());
    if (parameter == null) {
        throw new EPException("Count-to guard parameter evaluated to a null value");
    }
    Integer numCountTo = (Integer) parameter;
    return new MyCountToPatternGuard(numCountTo, quitable);
}
A guard class has the following responsibilities:
Provides a startGuard method that initializes the guard.
Provides a stopGuard method that stops the guard, called by the runtime when the whole pattern is stopped, or the sub-expression containing the guard is stopped.
Provides an inspect method that the pattern runtime invokes to determine if the guard lets matching events pass for further evaluation by the containing expression.
Guard classes implement the Guard interface as shown here:
public class MyCountToPatternGuard implements Guard {
The runtime invokes the guard factory class to construct an instance of the guard class for each new sub-expression instance within a statement.
A guard class must provide an implementation of the startGuard method that the runtime invokes to start a guard instance. In our example, the method resets the guard's counter to zero:
public void startGuard() { counter = 0; }
The runtime invokes the inspect method each time the sub-expression indicates a new event result. Our example guard needs to count the number of events matched, and quit if the maximum number is reached:
public boolean inspect(MatchedEventMap matchEvent) {
    counter++;
    if (counter > numCountTo) {
        quitable.guardQuit();
        return false;
    }
    return true;
}
The inspect method returns true for events that pass the guard, and false for events that should not pass the guard.
The guard factory class name as well as the namespace and name for the new guard must be added to the compiler configuration. The configuration shown below is XML; however, the same options are available through the configuration API:
<esper-configuration xmlns="http://www.espertech.com/schema/esper">
    <compiler>
        <plugin-pattern-guard namespace="myplugin" name="count_to"
            forge-class="com.espertech.esper.regressionlib.support.extend.pattern.MyCountToPatternGuardForge"/>
    </compiler>
</esper-configuration>
The new guard is now ready to use in a statement. The next pattern statement detects the first 10 MyEvent events:
select * from pattern [(every MyEvent) where myplugin:count_to(10)]
Note that the every keyword was placed within parentheses to ensure the guard controls the repeated matching of events.
Pattern observers are pattern objects that are executed as part of a pattern expression and can observe events or test conditions. Examples for built-in observers are timer:at and timer:interval. Some suggested uses of observer objects are:
Implement custom scheduling logic using the runtime's own scheduling and timer services
Test conditions related to prior events matching an expression
The following steps are required to develop and use a custom observer object within pattern statements:
Implement an observer forge class, which is used by the compiler only and is responsible for validating parameters and for initializing an observer factory.
Implement an observer factory class, responsible for creating observer object instances.
Implement an observer class.
Register an observer factory class with the compiler by supplying a namespace and name, via the compiler configuration file or the configuration API.
The code for the example observer object as shown in this chapter can be found in the test source folder in package com.espertech.esper.regression.client by the name MyFileExistsObserver. The sample observer discussed here simply checks whether a file exists, using the filename supplied by the pattern statement and the java.io.File class.
Some of the APIs that you use to implement a pattern observer are internal APIs that are not stable and may change between releases. The client package contains all the stable interface classes.
An observer forge class is responsible for the following functions:
Implement a setObserverParameters method that takes observer parameters, which are themselves expressions.
Implement a collectSchedule method that collects observer schedule objects, if any.
Implement a makeCodegen method that provides the code to construct an observer factory at time of deployment.
Observer forge classes implement the ObserverForge interface:
public class MyFileExistsObserverForge implements ObserverForge { ...
The compiler constructs one instance of the observer forge class for each time the observer is listed in a statement.
The observer forge class implements the setObserverParameters method that is passed the parameters to the observer as supplied by the statement. It verifies the observer parameters, similar to the code snippet shown next. Our example file-exists observer takes a single string parameter:
public void setObserverParameters(List<ExprNode> observerParameters, MatchedEventConvertorForge convertor, ExprValidationContext validationContext) throws ObserverParameterException {
    String message = "File exists observer takes a single string filename parameter";
    if (observerParameters.size() != 1) {
        throw new ObserverParameterException(message);
    }
    if (!(observerParameters.get(0).getForge().getEvaluationType() == String.class)) {
        throw new ObserverParameterException(message);
    }
    this.filenameExpression = observerParameters.get(0);
    this.convertor = convertor;
}
The compiler calls the makeCodegen method to provide code that initializes the observer factory at time of deployment. It uses the SAIFFInitializeBuilder to build the code.
public CodegenExpression makeCodegen(CodegenMethodScope parent, SAIFFInitializeSymbol symbols, CodegenClassScope classScope) {
    SAIFFInitializeBuilder builder = new SAIFFInitializeBuilder(MyFileExistsObserverFactory.class, this.getClass(), "observerFactory", parent, symbols, classScope);
    return builder.exprnode("filenameExpression", filenameExpression)
        .expression("convertor", convertor.makeAnonymous(builder.getMethod(), classScope))
        .build();
}
An observer factory class is responsible for the following functions:
Implement a makeObserver method that returns a new observer instance.
Observer factory classes implement the ObserverFactory interface:
public class MyFileExistsObserverFactory implements ObserverFactory { ...
The runtime obtains an instance of the observer factory class at time of deployment.
The runtime calls the makeObserver method to create a new observer instance. The example makeObserver method shown below passes parameters to the observer instance:
public EventObserver makeObserver(PatternAgentInstanceContext context, MatchedEventMap beginState, ObserverEventEvaluator observerEventEvaluator, Object observerState, boolean isFilterChildNonQuitting) {
    EventBean[] events = convertor == null ? null : convertor.convert(beginState);
    Object filename = PatternExpressionUtil.evaluateChecked("File-exists observer ", filenameExpression, events, context.getAgentInstanceContext());
    if (filename == null) {
        throw new EPException("Filename evaluated to null");
    }
    return new MyFileExistsObserver(beginState, observerEventEvaluator, filename.toString());
}
The ObserverEventEvaluator parameter allows an observer to indicate events, and to indicate a change of truth value to permanently false. Use this interface to indicate when your observer has received or witnessed an event, or changed its truth value to true or permanently false.
The MatchedEventMap parameter provides a Map of all matching events for the expression prior to the observer's start. For example, consider a pattern as below:
a=MyEvent -> myplugin:my_observer(...)
The above pattern tagged the MyEvent instance with the tag "a". The runtime starts an instance of my_observer when it receives the first MyEvent. The observer can query the MatchedEventMap using "a" as a key and obtain the tagged event.
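As a rough illustration of the tag lookup (a plain Map stands in for Esper's MatchedEventMap and a simple POJO stands in for the tagged MyEvent instance; neither is an Esper type):

```java
import java.util.HashMap;
import java.util.Map;

// Rough illustration only: a plain Map stands in for MatchedEventMap and a
// simple POJO stands in for the MyEvent instance tagged "a" by the pattern.
class TagLookupSketch {
    static class MyEvent {
        final String id;
        MyEvent(String id) { this.id = id; }
    }

    static String lookup() {
        Map<String, Object> matched = new HashMap<>();
        matched.put("a", new MyEvent("evt-1"));      // pattern tagged MyEvent as "a"
        MyEvent tagged = (MyEvent) matched.get("a"); // observer reads it back by tag
        return tagged.id;
    }
}
```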
An observer class has the following responsibilities:
Provides a startObserve method that starts the observer.
Provides a stopObserve method that stops the observer, called by the runtime when the whole pattern is stopped, or the sub-expression containing the observer is stopped.
Observer classes implement the EventObserver interface as shown here:
public class MyFileExistsObserver implements EventObserver { ...
The runtime invokes the observer factory class to construct an instance of the observer class for each new sub-expression instance within a statement.
An observer class must provide an implementation of the startObserve method that the runtime invokes to start an observer instance. In our example, the observer checks for the presence of a file and indicates the truth value to the remainder of the expression:
public void startObserve() {
    File file = new File(filename);
    if (file.exists()) {
        observerEventEvaluator.observerEvaluateTrue(beginState);
    } else {
        observerEventEvaluator.observerEvaluateFalse();
    }
}
Note the observer passes the ObserverEventEvaluator an instance of MatchedEventMap. The observer can also create one or more new events and pass these events through the Map to the remaining expressions in the pattern.
The observer factory class name as well as the namespace and name for the new observer must be added to the compiler configuration. The configuration shown below is XML; however, the same options are available through the configuration API:
<esper-configuration xmlns="http://www.espertech.com/schema/esper">
    <compiler>
        <plugin-pattern-observer namespace="myplugin" name="file_exists"
            forge-class="com.espertech.esper.regressionlib.support.extend.pattern.MyFileExistsObserverForge"/>
    </compiler>
</esper-configuration>
The new observer is now ready to use in a statement. The next pattern statement checks every 10 seconds if the given file exists, and indicates to the listener when the file is found.
select * from pattern [every timer:interval(10 sec) -> myplugin:file_exists("myfile.txt")]