
How to run fraud tests in the bulk mode

Updated: Oct 31, 2023

In this tutorial, you will learn how to create your first rule, composed of one subrule (rules policy), and how to execute it directly from your ETL process.

Before you start, you will need the following:

  • You need a template; for this tutorial, let's name it AML_test

  • You need to create API keys in the console (later referred to as apiKey:apiSecret)

  • You need a test file.

How to create a template

Click Create template, select the one-time type, and give it a name.

The template will be created using only native sensors (which are executed within the rules engine).

You can download the template JSON file from here:

Download JSON • 13KB

You can upload this file into your environment; no further sensors are needed, as these come bundled with the engine.

The template also comes with a set of default variables, so you can test it directly in the debugger:

We can test the template by running it in debugger mode, say once every second (using the periodic setting). Even though the template is configured to run once, we can change that setting while debugging a task:

This template will later be part of a subflow (policy rule), which can then be combined with other subflows.

So, let's create a parent template that will call this one:

Note that the Subflow node consumes a stream of data (the variables setting points to stream data), and that the template this sensor calls is AML_test:

The conditional evaluator node lets you test whether the called subflow (policy rule) evaluated the data as RISK, like this:

${nodes.AML_FundTransfer.rawData.Risk} == "True"  

This can later be enhanced by sending the results to a Kinesis stream, calling other functions, or storing the input data and results in another database.
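As a rough sketch of the Kinesis enhancement mentioned above (the stream name `aml-results` and the exact result shape are assumptions for illustration, not part of the product), you could package a run result into a `put_record` payload like this:

```python
import json


def build_kinesis_record(run_result: dict, partition_key: str) -> dict:
    """Package a rule-run result as a Kinesis put_record payload.

    The result path (nodes -> AML_FundTransfer -> rawData -> Risk) mirrors
    the conditional-evaluator expression used in this tutorial.
    """
    risk = run_result["nodes"]["AML_FundTransfer"]["rawData"]["Risk"]
    return {
        "StreamName": "aml-results",  # assumed stream name
        "Data": json.dumps({"risk": risk, "result": run_result}).encode("utf-8"),
        "PartitionKey": partition_key,
    }


# With boto3 installed, you would then forward the record like so:
#   boto3.client("kinesis").put_record(**build_kinesis_record(result, "AML_Parent"))
```

The partition key controls which shard receives the record; using the template name keeps all results for one rule set on the same shard.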

Here is the file you can upload into your environment:

Download JSON • 1KB

If you have a bulk file that you would like to use for tests or in production, it first needs to be transformed into JSON format.

For instance, this is a bulk file with only one data entry:

{
  "data": [
    [
      {
        "resource": "AML_Resource",
        "Txns": 35,
        "Field": 100,
        "Transaction_Value": 0.1,
        "Instrument": "Wire",
        "Transaction_Type": "Transfer",
        "Monthly_Revenue": 0.04,
        "Transaction_Date": "2022-11-10 00:00:00",
        "AvgMonthlyTransactionValue": 17142.8571428571,
        "Avg12MonthTransactionValue": 271428.5714285714,
        "ClientGroup": "Platinum",
        "PeerGroup": 17142.8571428571,
        "TransactionFlow": "Dr",
        "TransactionCounterparty": "Beneficiary",
        "CounterpartyLocation": "HK",
        "TaxHaven": "Yes",
        "BeneficialOwnerResidence": "HK",
        "CustomerStatus": "A",
        "Transaction_Value_Monthly": 1,
        "Transaction_Volume": 1,
        "Avg_Monthly_Transaction_volume": 0.75,
        "Last_two_days_trn_value": 922000.0,
        "DrDate": "2022-11-10",
        "CrDate": "2022-11-10"
      }
    ]
  ]
}

Imagine that this file is stored on your laptop under the name dataInputOne.json.

Download JSON • 960B
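If your bulk data starts life as a CSV export, a minimal sketch of the transformation into the format above could look like this (the file names and the assumption that CSV columns map one-to-one onto JSON fields are illustrative):

```python
import csv
import json


def csv_to_bulk_json(csv_path: str, json_path: str) -> None:
    """Wrap each CSV row as an object inside the nested "data" array
    expected by the rules endpoint."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    payload = {"data": [rows]}  # one batch containing all rows
    with open(json_path, "w") as f:
        json.dump(payload, f, indent=2)
```

Note that `csv.DictReader` yields every value as a string, so numeric fields such as Txns or Transaction_Value would need explicit type conversion in a real pipeline.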

After that, you can test the template with this curl command:

curl --user apiKey:apiSecret -H "Content-Type: application/json" -X POST -d @./dataInputOne.json https://<yourdomain>/rules/v1/templates/AML_Parent/run   
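The same call can be made from a Python ETL step. This sketch builds the authenticated POST with the standard library only (replace the domain and credentials with your own):

```python
import base64
import urllib.request


def build_run_request(domain: str, template: str, body: bytes,
                      api_key: str, api_secret: str) -> urllib.request.Request:
    """Build the same POST the curl command sends: HTTP basic auth
    with apiKey:apiSecret and a JSON body."""
    token = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
    return urllib.request.Request(
        url=f"https://{domain}/rules/v1/templates/{template}/run",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
    )


# To actually send it:
#   with open("dataInputOne.json", "rb") as f:
#       req = build_run_request("example.com", "AML_Parent", f.read(),
#                               "apiKey", "apiSecret")
#   urllib.request.urlopen(req)
```

This is the Basic auth that curl's `--user apiKey:apiSecret` flag performs for you: the credentials are base64-encoded into the Authorization header.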

You can see the outcome of the test run in the task logs (note that the last line says we are at RISK).

So, that's it. If you have an ETL process running, you can simply extend it with this call and let the rest happen in our system. In other blog posts, you will see how we can first send data to a pub/sub service (for instance, Kinesis), consume messages from there, and forward them to our platform.

In some of the following posts, we shall see how to extend this use case with case management and analytics reports (see the figure below), so stay tuned and have fun!

A full video with an end-to-end test is below:
