
Chapter 15. Compiler Reference

15.1. Introduction
15.2. Concepts
15.2.1. Module
15.2.2. EPL-objects
15.2.3. Dependencies
15.2.4. Dependency Resolution
15.2.5. Access Modifiers
15.2.6. Bus Modifier for Event Types
15.3. Compiling a Module
15.4. Reading and Writing a Compiled Module
15.5. Reading Module Content
15.6. Compiler Arguments
15.6.1. Compiler Configuration
15.6.2. Compiler Path
15.6.3. Compiler Options
15.7. Statement Object Model
15.7.1. Building an Object Model
15.7.2. Building Expressions
15.7.3. Building a Pattern Statement
15.7.4. Building a Select Statement
15.7.5. Building a Create-Variable and On-Set Statement
15.7.6. Building Create-Window, On-Delete and On-Select Statements
15.8. Substitution Parameters
15.9. OSGi, Class Loader, Class-For-Name
15.10. Authoring Tools
15.11. Testing Tools
15.12. Debugging
15.12.1. @Audit Annotation
15.13. Ordering Multiple Modules
15.14. Logging
15.15. Debugging Generated Code
15.16. Compiler Version and Runtime Version
15.17. Compiler Byte Code Optimizations
15.18. Compiler Filter Expression Analysis
15.18.1. Constituent Analysis
15.18.2. Value Composite Expression Analysis
15.18.3. Lookupable Composite Expression Analysis
15.18.4. Boolean Reusable Expression Analysis
15.18.5. Condition Analysis and Planning
15.19. Limitations

The compiler provides the following functions:

The most important function of the compiler is to produce byte code for your module. Deploy the byte code into a runtime for execution.

The compiler interface is EPCompiler in package com.espertech.esper.compiler.client. Your application obtains a compiler instance by calling the getCompiler method of EPCompilerProvider.

For example:

EPCompiler epCompiler = EPCompilerProvider.getCompiler();

Use the compiler as follows:
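For example, a minimal compile call looks like this (a sketch; "SomeEvent" and its column are placeholder names):

```
Map<String, Object> columns = new LinkedHashMap<>();
columns.put("id", String.class);

Configuration configuration = new Configuration();
configuration.getCommon().addEventType("SomeEvent", columns);

CompilerArguments arguments = new CompilerArguments(configuration);
try {
  EPCompiled compiled = EPCompilerProvider.getCompiler().compile("select id from SomeEvent", arguments);
} catch (EPCompileException ex) {
  // the compiler reports invalid EPL here
}
```

The sections below discuss the configuration, path and options that make up the compiler arguments.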

A module contains zero, one or multiple statements. A module is a source code unit, as the compiler turns a module into byte code. A module does not need to be text: a module can also be an object model.

In module text, statements appear separated by the semicolon (;) character. If the module contains only a single statement, the semicolon may be omitted.

The synopsis of a module file is:

[module module_name;]
	[uses module_name; | import import_name;] [uses module_name; | import import_name;] [...]
	[epl_statement;] [epl_statement;] [...]

Use the module keyword followed by a module_name identifier or a package name (identifiers separated by dots) to declare the name of the module. The module name declaration must occur at the beginning of the file, comments and whitespace excluded. The module name serves to check the uses-dependencies of other modules.

If a module requires constructs that may be shared with other modules, such as named windows, tables, variables, event types, variant streams or inserted-into streams, the module file may specify the modules it depends on using the uses keyword. The uses declaration helps avoid name conflicts, and automatic deployment can use this information to determine the deployment order.

If the statements in the module require Java classes such as for underlying events or user-defined functions, use the import keyword followed by the fully-qualified class name or package name in the format package.*. The uses and import keywords are optional and must occur after the module declaration.

Following the optional deployment instructions are any number of epl_statement statements that are separated by semicolon (;).

The following is a sample module file explained in detail thereafter:

// Declare the name for the module (optional).
module org.myorganization.switchmonitor;

// Declare other module(s) that this module depends on (optional).
// This can be used to resolve name conflicts.
uses org.myorganization.common;

// Import any Java/.NET classes of the given package name (optional). 
// Imports only handle classpath and do not import other modules.
import org.myorganization.events.*;

// Declare an event type based on a Java class in the package that was imported as above
create schema MySwitchEvent as MySwitchEventPOJO;

// Sample statement
@Name('Off-On-Detector')
insert into MyOffOnStream
select * from pattern[every-distinct(id) a=MySwitchEvent(status='off') 
  -> b=MySwitchEvent(id=a.id, status='on')];

// Sample statement
@Name('Count-Switched-On')
@Description('Count per switch id of the number of Off-to-On switches in the last 1 hour')
select id, count(*) from MyOffOnStream#time(1 hour) group by id;

The example above declares a module name of org.myorganization.switchmonitor. The example demonstrates the import keyword to make a package name known to the compiler for resolving classpath items, as the example assumes that MySwitchEventPOJO is a POJO event class. In addition the example module contains two statements separated by semicolon characters.

A module usually depends on event types and may also depend on other EPL-objects such as named windows or tables, for example. The compiler resolves all dependencies at compile-time. It produces byte code based on the information associated with the EPL-object. Upon deploying a compiled module's byte code into the runtime the runtime validates that dependencies exist.

For example, consider the following module:

select accountId, amount from Withdrawal

The module above depends on the event type Withdrawal. The compiler resolves the event type name to an EventType instance. It produces code according to the event type. At time of deployment of the compiled module the runtime verifies that the Withdrawal event type exists.

Specifically, the compiler generates code that relies on the compile-time information as-is.
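As an illustration only (the actual generated code differs and is substantially more involved), the effect of compile-time resolution can be thought of like this hypothetical sketch, where the property names and their types are hard-coded into the emitted class:

```java
import java.util.Map;

// Hypothetical sketch only: illustrates that property names and types
// are resolved at compile time and baked into the emitted code;
// the runtime does not re-validate them at deployment time.
class WithdrawalSelectSketch {
    // Stands in for a compiled select-clause processor for
    // "select accountId, amount from Withdrawal"
    static Object[] process(Map<String, Object> event) {
        // "accountId" was resolved to String and "amount" to Double
        // when the module was compiled
        String accountId = (String) event.get("accountId");
        Double amount = (Double) event.get("amount");
        return new Object[] {accountId, amount};
    }
}
```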

In other words, the runtime does not validate that event property names, event property types, table column names and types, variable types, index property names and other compile-time information matches the information that was provided at compile time.

For event types there is a bus modifier that determines whether or not the event type is available for use with the sendEventType methods of the EPEventService runtime event service.

An event type may be declared with the bus modifier, in which case calls to sendEventType process the event.

An event type may be declared with the non-bus modifier (the default), in which case calls to sendEventType cause an exception to be thrown.

To understand this better, here is what sendEventType of EPEventService does: When your application calls any of the sendEventBean, sendEventMap, sendEventObjectArray, sendEventJson, sendEventXMLDOM or sendEventAvro methods of EPEventService, the runtime finds the event type using the event type name that is passed. It associates the event type to the event object for processing the given event. If the event type name is not recognized or the event type does not have the bus modifier it throws an exception.

The bus modifier is not required for pre-configured event types. The bus modifier requires public access.

Your application may set the bus modifier by providing the @buseventtype annotation or by using compiler options.

The following module declares a public event type that allows an application to send in events of that name:

@public @buseventtype create schema AccountQueryEvent (accountId string)

The information herein pertains to the routeEventType methods and to EventSender as well.
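For instance, with the module above deployed, an application might send a matching event like this (a sketch; "runtime" stands for an EPRuntime instance that has the module deployed):

```
// Send an event by type name; this works because AccountQueryEvent
// was declared with the @buseventtype modifier
Map<String, Object> event = new HashMap<>();
event.put("accountId", "0001");
runtime.getEventService().sendEventMap(event, "AccountQueryEvent");
```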

The compile method takes two parameters. The first parameter is the module text or a module object model. The second parameter is the compiler arguments.

The output of the compiler is an EPCompiled instance. You can deploy EPCompiled instances directly into a runtime as described in Section 16.4, “Deploying and Undeploying Using EPDeploymentService”.

The EPCompiledIOUtil class is a utility for writing and reading EPCompiled instances to and from jar-files:
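For example (a sketch that assumes an EPCompiled instance named compiled, and the write and read methods as documented):

```
// Write the compiled module to a jar file, then read it back
File jarFile = new File("mymodule.jar");
EPCompiledIOUtil.write(compiled, jarFile);
EPCompiled fromDisk = EPCompiledIOUtil.read(jarFile);
```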

Read and parse module files using the readModule and parseModule methods, which return a Module instance to represent the module information.

This code snippet demonstrates reading and parsing a module given a file name:

Module module = EPCompilerProvider.getCompiler().readModule(new File("switchmonitor.epl"));

The compiler arguments are the compiler configuration, the compiler path and the compiler options.

Pass a Configuration instance to configure the compiler. By default the compiler uses an empty configuration object.

The compiler only uses the common section and the compiler section of the configuration. The compiler ignores the runtime section of the configuration.

It is not necessary to pass a configuration object or to pre-configure event types. You may create event types by means of create schema.

A pre-configured event type is a convenience since the event type is already defined and ready to use. The common section of the configuration holds the pre-configured event types. The following sample adds a pre-configured WithdrawalEvent map-based event type:

Map<String, Object> columns = new LinkedHashMap<>();
columns.put("accountId", String.class);
columns.put("amount", double.class);

Configuration configuration = new Configuration();
configuration.getCommon().addEventType("WithdrawalEvent", columns);
CompilerArguments args = new CompilerArguments(configuration);

To obtain a configuration object from a runtime call getConfigurationDeepCopy on EPRuntime:

Configuration configuration = epRuntime.getConfigurationDeepCopy();
CompilerArguments args = new CompilerArguments(configuration);

More information on the common and compiler configuration can be found at Chapter 17, Configuration.

The compiler path provides EPL-objects that other modules may declare and that the current module may use.

For example, assume a module M1 that declares a named window WithdrawalWindow:

@public create window WithdrawalWindow#time(10) as Withdrawal

A second module M2 may query the named window like so:

select (select count(*) from WithdrawalWindow) as cnt from Withdrawal

Module M2 depends on the EPL-object WithdrawalWindow (a named window) that module M1 declares.

You can build a path from compiled modules (EPCompiled instances) or from a runtime path that a runtime provides.

Assume that your application compiled module M1 like so:

Map<String, Object> columns = new LinkedHashMap<>();
columns.put("accountId", String.class);
columns.put("amount", double.class);

Configuration configuration = new Configuration();
configuration.getCommon().addEventType("WithdrawalEvent", columns);

CompilerArguments arguments = new CompilerArguments(configuration);
EPCompiled compiledModuleM1 = EPCompilerProvider.getCompiler().compile("@public create window WithdrawalWindow#time(10) as Withdrawal", arguments);

The compiledModuleM1 instance holds the byte code of module M1.
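To compile module M2 against M1, add the compiled module M1 to the compiler path (a sketch continuing the example above):

```
// The path makes WithdrawalWindow, declared by M1, visible to M2
CompilerArguments argumentsM2 = new CompilerArguments(configuration);
argumentsM2.getPath().add(compiledModuleM1);
EPCompiled compiledModuleM2 = EPCompilerProvider.getCompiler().compile(
  "select (select count(*) from WithdrawalWindow) as cnt from Withdrawal", argumentsM2);
```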

The statement object model is a set of classes that provide an object-oriented representation of a statement. The object model classes are found in package com.espertech.esper.common.client.soda. An instance of EPStatementObjectModel represents a statement's object model.

The statement object model classes are a full and complete specification of a statement. All EPL constructs including expressions and sub-queries are available in the statement object model.

The statement object model provides the means to build, change or interrogate statements beyond their string representation. The object graph of the statement object model is fully navigable for easy querying by code, and is also serializable, allowing applications to persist or transport statements in object form when required.

The statement object model supports full round-trip from object model to statement string and back to object model: A statement object model can be rendered into a string representation via the toEPL method on EPStatementObjectModel. Further, the compiler API allows compiling a statement string into an object model representation via the eplToModel method on EPCompiler.
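The round-trip can be sketched as follows:

```
// Parse EPL text into an object model, then render it back to EPL
EPStatementObjectModel model = EPCompilerProvider.getCompiler()
  .eplToModel("select accountId, amount from Withdrawal", new Configuration());
String epl = model.toEPL();
```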

The statement object model is fully mutable. Mutating any list such as returned by getChildren(), for example, is acceptable and supported.

The following limitations apply:

  • Statement object model classes are not safe for sharing between threads other than for read access.

  • Between versions the serialized form of the object model is subject to change. There are no guarantees that the serialized object model of one version will be fully compatible with the serialized object model generated by another version. Please consider this issue when storing object models in persistent store.

An EPStatementObjectModel consists of an object graph representing all possible clauses that can be part of a statement.

Among all clauses, the SelectClause and FromClause objects are required and must be present, defining what to select and where to select from.


Part of the statement object model package are convenient builder classes that make it easy to build a new object model or change an existing object model. The SelectClause and FromClause are such builder classes and provide convenient create methods.

Within the from-clause you have a choice of different streams to select on. The FilterStream class represents a stream that is filled by events of a certain type and that pass an optional filter expression.

We can use the classes introduced above to create a simple statement object model:

EPStatementObjectModel model = new EPStatementObjectModel();
model.setSelectClause(SelectClause.createWildcard());
model.setFromClause(FromClause.create(FilterStream.create("ReadyEvent")));

The model as above is equivalent to:

select * from ReadyEvent

Notes on usage:

  • Variable names can simply be treated as property names.

  • When selecting from named windows or tables, the name of the named window or table is the event type name for use in FilterStream instances or patterns.

  • To compile an arbitrary sub-expression text into an Expression object representation, simply add the expression text to a where clause, compile the EPL string into an object model via the eplToModel method on EPCompiler, and obtain the compiled where from the EPStatementObjectModel via the getWhereClause method.

This sample statement creates a named window:

create window OrdersTimeWindow#time(30 sec) as select symbol as sym, volume as vol, price from OrderEvent

This is the code that builds the create-window statement above:

EPStatementObjectModel model = new EPStatementObjectModel();
model.setCreateWindow(CreateWindowClause.create("OrdersTimeWindow").addView("win", "time", 30));
model.setSelectClause(SelectClause.create()
		.addWithName("symbol", "sym")
		.addWithName("volume", "vol")
		.add("price"));
model.setFromClause(FromClause.create(FilterStream.create("OrderEvent")));

A second statement deletes from the named window:

on NewOrderEvent as myNewOrders
delete from OrdersNamedWindow as myNamedWindow
where myNamedWindow.symbol = myNewOrders.symbol

The object model is built by:

EPStatementObjectModel model = new EPStatementObjectModel();
model.setOnExpr(OnClause.createOnDelete("OrdersNamedWindow", "myNamedWindow"));
model.setFromClause(FromClause.create(FilterStream.create("NewOrderEvent", "myNewOrders")));
model.setWhereClause(Expressions.eqProperty("myNamedWindow.symbol", "myNewOrders.symbol"));

A third statement selects from the named window using the non-continuous on-demand selection via on-select:

on QueryEvent(volume>0) as query
select count(*) from OrdersNamedWindow as win
where win.symbol = query.symbol

The on-select statement is built from scratch via the object model as follows:

EPStatementObjectModel model = new EPStatementObjectModel();
model.setOnExpr(OnClause.createOnSelect("OrdersNamedWindow", "win"));
model.setWhereClause(Expressions.eqProperty("win.symbol", "query.symbol"));
model.setFromClause(FromClause.create(FilterStream.create("QueryEvent", "query", 
  Expressions.gt("volume", 0))));
model.setSelectClause(SelectClause.create().add(Expressions.countStar()));

Substitution parameters have the following syntax:

? [:[name] [:type]]

The name is optional. The absence of a name means the substitution parameter is only addressable by index.

The type is optional. The absence of the type means the type of the substitution parameter is java.lang.Object. Use cast or provide a type name when your expression requires a strongly-typed value. The type can also be a parameterized type such as java.util.List<String>.

Here are a few examples of valid substitution parameters:

  • ? (unnamed and untyped)

  • ?::int (unnamed, of type int)

  • ?:fromAmount (named fromAmount, untyped)

  • ?:fromAmount:double (named fromAmount, of type double)

All substitution parameters must either be unnamed or named. It is not possible to mix the two styles.

If not assigning a name to substitution parameters, the compiler assigns the first substitution parameter an index of 1 and subsequent parameters increment the index by one.

If assigning a name to each substitution parameter, the name can include slash (/) characters and can occur multiple times.

Substitution parameters can be inserted into any EPL construct that takes an expression. They are therefore valid in any clauses such as the select-clause, from-clause filters, where-clause, group-by-clause, having-clause or order-by-clause, including data window parameters and pattern observers and guards, for example. Substitution parameters cannot be used where a numeric constant is required rather than an expression and in SQL statements.

You may use square brackets ([]) to denote array types and [primitive] for arrays of primitives. For example, int[primitive] denotes an array of int primitives and int[] denotes an array of Integer.

All substitution parameters must be replaced by actual values at time of deployment.
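For example, values can be provided through deployment options (a sketch; it assumes a compiled module whose statement contains ?::double as its first substitution parameter, and a "runtime" instance):

```
// Set the value for substitution parameter index 1 at deployment time
DeploymentOptions options = new DeploymentOptions();
options.setStatementSubstitutionParameter(ctx -> ctx.setObject(1, 100d));
runtime.getDeploymentService().deploy(compiled, options);
```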

The configuration object (Configuration), in respect to classes, holds the fully-qualified class name and does not generally hold Class references. This is by design since the configuration object can be populated from XML.

The compiler may need to look up a class by name and may need to obtain a class loader. Your application has full control over class-for-name and classloader use. OSGi environments can provide a specific class-for-name and class loader. Please refer to Section 17.7, “Passing Services or Transient Objects”.

Enterprise Edition includes authoring tools for statements and modules by providing form-based dialogs, templates, an expression builder, simulation tool and other tools. Enterprise Edition also supports hot deployment and packaging options for EPL and related code.

Statements can be organized into modules as described above. Any text editor can edit statements and module text. A text editor or IDE that highlights SQL syntax or keywords works.

For authoring configuration files please consult the XSD schema files as provided with the distribution.

For information on authoring event classes or event definitions in general please see Chapter 3, Event Representations or Section 5.15, “Declaring an Event Type: Create Schema”.

We recommend testing modules using a test framework such as JUnit or TestNG. Please consult the regression test suite for extensive examples, which can be downloaded from the distribution site.

Esper's API provides test framework classes to simplify automated testing of statements. Please see Section 16.18, “Test and Assertion Support” for more information.

We recommend performing latency and throughput tests early in the development lifecycle. Please consider the performance tips in Chapter 24, Performance for optimal performance.

Consider runtime and statement metrics reporting for identifying slow-performing statements, for example. See Section 16.12, “Runtime and Statement Metrics Reporting”.

Enterprise Edition includes a debugger for module execution.

One important tool for debugging without Enterprise Edition is the parameterized @Audit annotation. This annotation allows you to output, on statement level, detailed information about many aspects of statement processing.

Another tool for logging runtime-level detail is execution path debug logging, described in Section 17.6.2.1, “Execution Path Debug Logging”.

Please see Section 17.9, “Logging Configuration” for information on configuring logging in general.

Use the @Audit annotation to have the runtime output detailed information about statement processing. The runtime reports, at INFO level, the information under log name com.espertech.esper.audit. You may define an output format for audit information via configuration.

You may provide a comma-separated list of category names to @Audit to output information related to specific categories only. The table below lists all available categories. If no parameter is provided, the runtime outputs information for all categories. Category names are not case-sensitive.

For the next statement the runtime produces detailed processing information (all categories) for the statement:

@Name('All Order Events') @Audit select * from OrderEvent

For the next statement the runtime provides information about new events and also about event property values (2 categories are listed):

@Name('All Order Events') @Audit('stream,property') select price from OrderEvent

Here is a more complete example that uses the API to create the schema, create above statement and send an event:

try {
  String module =
    "@public @buseventtype create schema OrderEvent(price double);\n" +
    "@name('All-Order-Events') @Audit('stream,property') select price from OrderEvent;\n";
  EPCompiled compiled = EPCompilerProvider.getCompiler().compile(module, null);

  EPRuntime runtime = EPRuntimeProvider.getDefaultRuntime();
  EPDeployment deployment = runtime.getDeploymentService().deploy(compiled);
  deployment.getStatements()[0].addListener(new SupportUpdateListener());
  runtime.getEventService().sendEventMap(Collections.singletonMap("price", 100d), "OrderEvent");
} catch (Throwable t) {
  log.error(t.getMessage(), t);
}

The output is similar to the following:

INFO  [audit] Statement All-Order-Events stream OrderEvent inserted {price=100.0}
INFO  [audit] Statement All-Order-Events property price value 100.0

Note that the runtime only evaluates select-clause expressions if either a listener or subscriber is attached to the statement or if used with insert-into.

Since modules may have inter-dependencies as discussed under the uses declaration, there is a ModuleOrderUtil class that provides the getModuleOrder method to order a collection of modules before deployment.

Assuming your application reads multiple modules into a mymodules module list, this code snippet orders the modules for deployment and validates dependency declarations for each module:

List<Module> mymodules =  ... read modules...;  
ModuleOrder order = ModuleOrderUtil.getModuleOrder(mymodules, new ModuleOrderOptions());
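The ordered modules can then be compiled and deployed one at a time (a sketch; "compiler", "arguments" and "runtime" are assumed to be set up as shown earlier):

```
// Compile and deploy modules in dependency order
for (Module module : order.getOrdered()) {
  EPCompiled compiled = compiler.compile(module, arguments);
  runtime.getDeploymentService().deploy(compiled);
}
```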

You can log generated classes at INFO log level by setting the configuration flag for code logging as described in Section 17.5.3.1, “Byte Code Generation Logging”.

The information herein is for developers and is specific to the Janino compiler at the version provided with the distribution.

Set the system property org.codehaus.janino.source_debugging.enable to true to have Janino compile code with debug symbols.

Set the system property org.codehaus.janino.source_debugging.dir to a file system directory to have Janino generate classes into a given directory.

The IDE can debug into generated classes and show the source code provided that the IDE can access the source code. For example:

-Dorg.codehaus.janino.source_debugging.dir=/path/to/directory
-Dorg.codehaus.janino.source_debugging.enable=true

To include additional EPL-related comments in the generated code you can change the configuration as outlined in Section 17.5.1, “Compiler Settings Related to Byte Code Generation”.

The version format is major.minor.patch-level.

When deploying a compiled module to a runtime, or when executing a compiled fire-and-forget query, the runtime compares the compiler version that produced the compiled module or compiled query with the runtime version. If the major or minor version does not match, the runtime indicates a version mismatch by throwing an exception.

For example, an application may compile an EPL module using the version 8.0.0 compiler, i.e. the compiler major version is eight and the compiler minor version is zero and the compiler patch level is zero. Assume the application attempts to deploy the compiled module to a runtime of version 8.1.0, i.e. the runtime major version is eight and the runtime minor version is one and the runtime patch level is zero. The runtime throws an EPDeployDeploymentVersionException exception to indicate that the minor version mismatches.

The compiler generates byte code that avoids down-casts and branching. It also removes many virtual calls as it transforms expression trees into byte code.

For aggregations the compiler produces a custom aggregation row class that has fields which represent the aggregation state. Therefore each aggregation row does not need additional objects to represent aggregations such as averages or sums; instead the aggregations are fields of the same class, reducing the number of objects that the runtime manages per group-by key.

For any composite keys out of two or more expressions the compiler produces a class that represents the composite key and that implements equals and hashCode. This is applicable to the group-by clause including rollup, data windows with keys (such as #unique, #firstunique, #groupwin, #rank), partition-by for keyed segmented contexts, contexts with distinct, the select-clause distinct keyword, the query planner when planning implicit and explicit indexes, create index, every-distinct, the partition-by clause for match-recognize, table column keyed-access expressions, the for-clause for grouped delivery and the distinctOf enumeration method.
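Conceptually, the generated composite-key class for a two-expression key resembles this hypothetical sketch (the actual generated code differs; the field names are illustrative):

```java
import java.util.Objects;

// Hypothetical sketch of the kind of composite-key class the compiler
// generates for a two-expression key, e.g. "group by accountId, accountType"
class GroupByKeySketch {
    private final String accountId;
    private final String accountType;

    GroupByKeySketch(String accountId, String accountType) {
        this.accountId = accountId;
        this.accountType = accountType;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof GroupByKeySketch)) return false;
        GroupByKeySketch that = (GroupByKeySketch) o;
        return Objects.equals(accountId, that.accountId)
            && Objects.equals(accountType, that.accountType);
    }

    @Override
    public int hashCode() {
        return Objects.hash(accountId, accountType);
    }
}
```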

The compiler analyzes filter expressions so that the runtime can build filter indexes. For more information on filter indexes please see Section 2.18.2, “Filter Indexes”.

By default the compiler uses advanced techniques for planning filter indexes. Your application may disable some or all of the advanced techniques as described in Section 17.5.8.2, “Filter Index Planning”.

To output the filter plans please set a compiler flag as described in Section 17.5.3.2, “Filter Plan Logging”.

Tip

  • Filter index planning, like all index planning, reflects a trade-off between the amount of space needed for the index and the processing time required to build and maintain it, versus the performance gain of matching incoming events to statements and their partitions when there are many (same or different) filters.

  • By default the compiler automatically plans all possible filter indexes. For performance tuning, the ConfigurationCompilerExecution.FilterIndexPlanning.BASIC and ConfigurationCompilerExecution.FilterIndexPlanning.NONE settings as well as hints are available.

This section applies to the default configuration (FilterIndexPlanning.ADVANCED) and the non-default FilterIndexPlanning.BASIC configuration.

The compiler analyzes filter expressions by first determining the constituents. Constituents are the individual expressions, connected by the and and or logical operators, that each return a boolean-type value and that make up the overall expression.

For instance, consider this EPL statement:

select * from Withdrawal(accountId = '123' and accountType = 'Checking')

This statement has two constituent expressions: accountId = '123' and accountType = 'Checking'.

The compiler analyzes each constituent expression separately. From a single constituent expression it attempts to form a triplet of lookupable-expression, filter-index-operator and value-expression. Constituent expressions from which the compiler cannot form a triplet are aggregated into a common boolean expression. The remainder of the discussion focuses on a single constituent expression and forming a triplet from it.

The lookupable-expression is the expression providing the filter index lookup value, for example the accountId expression in accountId = '123'.

The value-expression is the expression providing the indexed value, for instance the '123' expression in accountId = '123'.

The filter-index-operator is the type of index operation, such as equals (=) or relational (<, >, <=, >=).

Therefore, in Withdrawal(accountId = '123'), the filter-index-operator is equals and the lookupable-expression is accountId and the value-expression is '123'.

The expressions left and right of a filter-index-operator are commutative, meaning they give the same result whichever side of the operator each expression occurs on: accountId = '123' and '123' = accountId are equivalent.

The order in which constituent expressions appear does matter. Place the most frequent constituent expressions first. Each filter-index-operator has a precedence level and the compiler orders by precedence level keeping the order as provided within the same precedence level. The precedence levels are, sorted by higher precedence level first, as follows: equal, is, in-list-of-values, advanced-index, range-open, range-half-open, range-half-closed, range-closed, less, less-or-equal, greater-or-equal, greater, not-range-closed, not-range-half-closed, not-range-half-open, not-range-open, not-in-list-of-values, not-equal, is-not.

The lookupable-expression in a triplet must be any of the following:

The following operators are recognized as triplet filter-index-operators; a constituent expression that has one of these operators qualifies for a triplet:

  • equals =

  • not equals !=

  • comparison operators < , > , >=, <=

  • ranges

    • use the between keyword for a closed range where both endpoints are included

    • use the in keyword and round () or square brackets [] to control how endpoints are included

    • for inverted ranges use the not keyword and the between or in keywords

  • list-of-values checks using the in keyword or the not in keywords followed by a comma-separated list of values

  • Other operators; see the hint described in Section 15.18.4, “Boolean Reusable Expression Analysis” for more information.

The value-expression in a triplet must be one of the expressions below:

  • Constant

  • Substitution parameter

  • Event property of a prior-matching event (not the currently arriving event)

  • Context property

  • The typeOf function

  • Plug-in single-row node with filter-optimizable setting enabled

  • Declared expression

  • Other expressions; see the hint described in Section 15.18.2, “Value Composite Expression Analysis” for more information.

The or logical operator plays a special role in compiler analysis. The compiler rewrites the expression accountId = '123' or accountId = '456' to accountId in ('123', '456'). It also plans multiple filter index paths, which is further described in Section 17.5.8.1, “Filter Service Max Filter Width”.

The order of the triplets, which follows the ordering of the constituents and the ordering according to filter-index-operator precedence level (see above), defines the default nesting of filter indexes. The runtime, upon adding to the filter, may however encounter existing filter indexes; it always prefers to add to existing filter indexes, following an existing path where possible, or it creates new branches and leaves from an existing path.

The compiler unwinds logical and and or and may also rewind such expressions for planning as below. Therefore the expression a and (b and c) is the same as a and b and c and the expression a or (b or c) is the same as a or b or c.

This section applies to the default configuration (FilterIndexPlanning.ADVANCED) and also when the configuration is FilterIndexPlanning.BASIC and the EPL contains the @Hint('filterindex(valuecomposite)') hint.

The compiler considers an expression that meets certain criteria to be a value-expression, provided that one of the filter-index-operators described above is present.

The compiler considers any expression to be a value expression provided that it does not contain any of the following:

For example, consider the following EPL statements:

create constant variable string PREFIX = "CHK";
select * from Withdrawal(accountId = PREFIX || '123')

The expression PREFIX || '123' is a filter index value-expression and its return value is subject to entry into a filter index as a key.

The following example demonstrates the concept further:

create context TransactionContext initiated by Transaction as txn terminated after 10 minutes;
@Hint('filterindex(valuecomposite)')
context TransactionContext select * from Withdrawal(accountId = context.txn.transactionTarget || '-A')

Assume the following events arrive:

The below table is a sample filter index:


When a Withdrawal event arrives, the runtime evaluates accountId just once and performs one keyed lookup into the filter index to determine which statements and their partitions should process the event.

The following example pattern results in equivalent filter index planning:

select * from pattern[txn=Transaction -> Withdrawal(accountId = txn.transactionTarget || '-A')]
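The effect of value-composite planning can be sketched as follows (illustrative names only, not Esper API): the value expression, here modeled as context.txn.transactionTarget || '-A', is evaluated once when each context partition initiates, and its result becomes a key in the filter index; arriving Withdrawal events then need only one evaluation of accountId and one keyed lookup.

```java
import java.util.*;

// Illustrative sketch of value-composite filter index planning.
public class ValueCompositeSketch {
    // filter index: computed key -> context partitions interested in that key
    static final Map<String, Set<String>> INDEX = new HashMap<>();

    // Called when a partition initiates; transactionTarget comes from the
    // initiating Transaction event.
    static void onPartitionStart(String partitionId, String transactionTarget) {
        String key = transactionTarget + "-A"; // the value expression, evaluated once
        INDEX.computeIfAbsent(key, k -> new HashSet<>()).add(partitionId);
    }

    // Called per arriving Withdrawal event: one keyed lookup,
    // no per-partition re-evaluation of the value expression.
    static Set<String> onWithdrawal(String accountId) {
        return INDEX.getOrDefault(accountId, Collections.emptySet());
    }
}
```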

This section applies to the default configuration (FilterIndexPlanning.ADVANCED) and also when the configuration is FilterIndexPlanning.BASIC and the EPL contains the @Hint('filterindex(lkupcomposite)') hint.

The compiler considers an expression that meets certain criteria to be a lookupable-expression, provided that it is used with one of the filter-index-operators described above.

The lookupable-expression, in any subexpression, must reference at least one event property or the event itself and cannot have any of the following:

This hint is usually used when:

Consider the following EPL statement:

select * from Withdrawal(accountId || accountType = '123Checking')

The expression accountId || accountType can serve as a lookupable-expression. The compiler plans a filter index that indexes values such as '123Checking' and that uses the return value of accountId || accountType to perform a keyed lookup into the indexed values.

Extending the example above, consider the following EPL:

create context TransactionContext initiated by Transaction as txn terminated after 10 minutes;
@Hint('filterindex(lkupcomposite)')
context TransactionContext select * from Withdrawal(accountId || accountType = context.txn.transactionTarget)

Assume the following events arrive:

The below table is a sample filter index:


When a Withdrawal event arrives, the runtime evaluates accountId || accountType just once and performs one keyed lookup into the filter index to determine which statements and their partitions should process the event.
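Lookupable-composite planning can be sketched as follows (illustrative names, not Esper internals): the filter index stores the right-hand values such as '123Checking'; per arriving event, the runtime computes the lookupable expression accountId || accountType once and performs one keyed lookup against those stored values.

```java
import java.util.*;

// Illustrative sketch of a lookupable-composite filter index.
public class LookupableSketch {
    // indexed value (e.g. "123Checking") -> interested statements/partitions
    static final Map<String, Set<String>> INDEX = new HashMap<>();

    static void register(String statement, String indexedValue) {
        INDEX.computeIfAbsent(indexedValue, k -> new HashSet<>()).add(statement);
    }

    // Per arriving Withdrawal event: evaluate the lookupable expression
    // once, then do a single keyed lookup.
    static Set<String> onWithdrawal(String accountId, String accountType) {
        String key = accountId + accountType; // accountId || accountType
        return INDEX.getOrDefault(key, Collections.emptySet());
    }
}
```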

This section applies to the default configuration (FilterIndexPlanning.ADVANCED) and also when the configuration is FilterIndexPlanning.BASIC and the EPL contains the @Hint('filterindex(boolcomposite)') hint.

The compiler considers any expression that meets certain criteria to be a reusable boolean expression (this does not require any of the filter-index-operators).

The expression must not have any of the following:

The expression must, in any subexpression, reference at least one event property or the event itself. All other expressions must be value-expressions that meet the same requirements as outlined for valuecomposite. The expression may not, in all subexpressions, have multiple value expressions that reference context properties or, in a pattern, event properties of prior events or prior events themselves.

Review the following EPL statement:

select * from Withdrawal(accountId regexp '.*123.*')

The compiler plans a filter index that holds the expression itself and all actual values (determined at runtime, such as '.*123.*') and their related statements and partitions.

Consider the following wherein Transaction events have a regexpFilter field that provides the regexp-right-hand-side value:

create context TransactionContext initiated by Transaction as txn terminated after 10 minutes;
@name('Stmt-1') context TransactionContext select * from Withdrawal(accountId regexp context.txn.regexpFilter)

Assume the following events arrive:

The below table is a sample filter index:


When a Withdrawal event arrives, the runtime evaluates accountId regexp '.*123.*' and accountId regexp '.*456.*' each just once.
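A reusable-boolean-expression index can be sketched as follows (illustrative, not Esper internals): the index maps each distinct runtime parameter value (here, a regexp string) to the partitions that registered it, so each distinct value is evaluated once per arriving event regardless of how many partitions share it.

```java
import java.util.*;
import java.util.regex.Pattern;

// Illustrative sketch of a reusable-boolean-expression filter index.
public class BoolCompositeSketch {
    // distinct regexp value -> partitions that registered that value
    static final Map<String, Set<String>> INDEX = new HashMap<>();

    static void register(String partitionId, String regexp) {
        INDEX.computeIfAbsent(regexp, k -> new HashSet<>()).add(partitionId);
    }

    // Per arriving Withdrawal event: evaluate each distinct regexp once,
    // not once per registered partition.
    static Set<String> match(String accountId) {
        Set<String> result = new HashSet<>();
        for (Map.Entry<String, Set<String>> e : INDEX.entrySet()) {
            if (Pattern.matches(e.getKey(), accountId)) {
                result.addAll(e.getValue());
            }
        }
        return result;
    }
}
```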

This section applies to the default configuration (FilterIndexPlanning.ADVANCED) and also when the configuration is FilterIndexPlanning.BASIC and the EPL contains the @Hint('filterindex(condition)') hint.

The compiler identifies:

The JVM platform limits the constant pool of classes and the size of methods. The EPL compiler minimizes the chance of stepping over these JVM limits; see Section 17.5.1.1, “Byte Code General Settings”. It is, however, currently still possible, though unlikely, to exceed these limits. Please contact support for progress on this issue.