MuleSoft 4: Using Java in DataWeave 2.0

Mule 4 introduces DataWeave 2.0 as the default expression language replacing Mule Expression Language (MEL). DataWeave 2.0 is tightly integrated with the Mule 4 runtime engine, which runs the scripts and expressions in your Mule application.

Since DataWeave 2.0 is the default expression language for Mule 4, DataWeave can be used almost anywhere within your Mule application. In some use cases, DataWeave needs to call a Java method or instantiate a Java class to execute complex Java business logic.

In my previous blog I explained the usage of Java within a Mule flow. In this blog I am explaining the usage of Java within DataWeave 2.0.

There are two ways we can use Java within DataWeave code:

  1. Calling a Java method
  2. Instantiating a Java class

1. Calling a Java method – There is a restriction in DataWeave when calling Java: you can only call static methods via DataWeave (methods that belong to a Java class, not methods that belong to a specific instance of a class). Before calling a method from a Java class, you must import the class.

Here is the DataWeave code.

In DataWeave, this method can be called in multiple ways.

Import only the method instead of the whole class:

%dw 2.0
import java!com::vanrish::AppUtils::encode
output application/json
---
{
encode:encode("mystring" as String)
}

Or import and call it in a single line:

%dw 2.0
output application/json
---

{
    encode: java!com::vanrish::AppUtils::encode("mystring" as String)
}
2. Instantiating a Java class – DataWeave allows you to instantiate a new object of any Java class, but you can't call its instance methods through DataWeave. You can refer to the object's fields as variables.

%dw 2.0
import java!com::vanrish::AppUtils
output application/json
---
{
     value: AppUtils::new().data
}

AppUtils.java

package com.vanrish;

import java.util.Base64;

/**
 * @author rajnish
 */
public class AppUtils {

	private String data;

	/**
	 * @param plainString plain text to encode
	 * @return Base64-encoded string
	 */
	public static String encode(String plainString) {
		return Base64.getEncoder().encodeToString(plainString.getBytes());
	}

	/**
	 * @param dataStr input string
	 * @return input string with a test suffix
	 */
	public String getData(String dataStr) {
		data = dataStr + " : test";
		return data;
	}
}

MuleSoft 4: Using Java in Mule Flow

MuleSoft is a lightweight integration and API platform that allows you to connect anything anywhere and enable your data through APIs. Mule evolved from Java and the Spring framework. MuleSoft supports multiple languages, although all Mule modules are developed in Java.

Since Mule evolved from Java, it has the capability to use Java classes and methods directly in a Mule flow. This capability gives Mule developers the flexibility to use Java for complex business logic.

There are several ways you can use Java within Mule. Here are some of the Java modules available to use within a MuleSoft application.

There are four Java modules available in a Mule flow:

  1. New
  2. Invoke
  3. Invoke static
  4. Validate type

To explain all these components and their use in a Mule flow, I created the Utils.java and AppUtils.java classes.

1. New – Instantiation of the AppUtils.java class can be achieved by calling a constructor of the class through the MuleSoft New component within a Mule flow.

The AppUtils Java class defines two constructors, so the constructor property of the New component shows two options.

New module without parameter

<java:new doc:name="Instantiate appUtils" doc:id="22ddcb7e-82ed-40f8-bc11-b779ceedd1a1"
constructor="AppUtils()" class="com.vanrish.AppUtils" target="appInst">
</java:new>

New module with parameter

<java:new doc:name="Instantiate appUtils" doc:id="22ddcb7e-82ed-40f8-bc11-b779ceedd1a1"
constructor="AppUtils(String)" class="com.vanrish.AppUtils" target="appInst">
<java:args ><![CDATA[#[{paramVal: "Hello world"}]]]></java:args>
</java:new>

In the above code, an instance of the AppUtils class is created and placed into the "appInst" target variable so the same instance can be reused in the Mule flow.

New module
2. Invoke – In the New Java module we instantiated the AppUtils.java class and placed it into the "appInst" variable. Now, to use this variable, set up the Invoke module and call one of the methods defined in the AppUtils.java class. In AppUtils.java there is one non-static method, "generateRandomNumber", defined with a String parameter. In this example we call this method through the Invoke module.
<java:invoke doc:name="Invoke" doc:id="9348e2cf-87fe-4ff7-958c-f430d0421702"
instance="#[vars.appInst]" class="com.vanrish.AppUtils" method="generateRandomNumber(String)">
<java:args ><![CDATA[#[{numVal: "100"}]]]></java:args>
</java:invoke>
Invoke module
3. Invoke static – The Invoke static Java module enables a Mule flow to call a Java static method. This is one of the easiest ways to call any Java method in a Mule flow.

Here the Mule code calls the Java static method:

<java:invoke-static doc:name="Invoke static" doc:id="bc3e110c-d970-47ef-891e-93fb3ffb61bd" 
class="com.vanrish.AppUtils" method="encode(String)">
<java:args ><![CDATA[#[{plainString:"mystringval"}]]]></java:args>
</java:invoke-static>
Invoke-static module
4. Validate type – The Validate type Java module uses Java's instanceof check. This module accepts an "Accept subtypes" parameter, which indicates whether the operation should accept all subclasses of a class. By default acceptSubtypes="true", which means it will accept all subclasses of the main class; but if it is set to acceptSubtypes="false", then during execution the operation throws an error (JAVA:WRONG_INSTANCE_CLASS) when the instance is a subtype rather than the exact class. A sketch of handling this error follows below.
<java:validate-type doc:name="Validate type" doc:id="288c791c-50eb-4be0-b924-56481dfdc023"
class="com.vanrish.Utils" instance="#[vars.appInst]" acceptSubtypes="false"/>
Validate-type module
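Here is a minimal sketch, not part of the original flow, of how such a failure could be handled: the Validate type operation is wrapped in a Try scope with an on-error-continue handler for the JAVA:WRONG_INSTANCE_CLASS error type. The variable and class names reuse the example above.

<try doc:name="Try">
	<java:validate-type doc:name="Validate type" class="com.vanrish.Utils"
		instance="#[vars.appInst]" acceptSubtypes="false"/>
	<error-handler>
		<!-- Continue the flow with a warning if appInst is a subtype rather than exactly com.vanrish.Utils -->
		<on-error-continue type="JAVA:WRONG_INSTANCE_CLASS">
			<logger level="WARN" doc:name="Logger" message="vars.appInst is not an exact instance of com.vanrish.Utils"/>
		</on-error-continue>
	</error-handler>
</try>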

Java in Mule flow diagram

java in Mule flow

Utils.java

package com.vanrish;

public class Utils{
	
}

AppUtils.java

package com.vanrish;

import java.util.Base64;
import java.util.Random;

/**
 * @author rajnish
 */
public class AppUtils extends Utils {

	// Constructor without parameter
	public AppUtils() {
		System.out.println("Constructor with no parameter");
	}

	// Constructor with parameter
	public AppUtils(String paramVal) {
		System.out.println("Constructor with parameter value=" + paramVal);
	}

	/**
	 * @param numVal upper bound for the random number (may be null)
	 * @return the random number as a String
	 */
	public String generateRandomNumber(String numVal) {
		Random rand = new Random();
		Integer numNoRange;
		if (numVal != null) {
			numNoRange = rand.nextInt(Integer.parseInt(numVal));
		} else {
			numNoRange = rand.nextInt();
		}
		return numNoRange.toString();
	}

	/**
	 * @param plainString plain text to encode
	 * @return Base64-encoded string
	 */
	public static String encode(String plainString) {
		return Base64.getEncoder().encodeToString(plainString.getBytes());
	}
}

Mulesoft Code

<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:java="http://www.mulesoft.org/schema/mule/java" xmlns:db="http://www.mulesoft.org/schema/mule/db"
xmlns:ee="http://www.mulesoft.org/schema/mule/ee/core" xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns="http://www.mulesoft.org/schema/mule/core"
xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/ee/core http://www.mulesoft.org/schema/mule/ee/core/current/mule-ee.xsd
http://www.mulesoft.org/schema/mule/hl7 http://www.mulesoft.org/schema/mule/hl7/current/mule-hl7.xsd
http://www.mulesoft.org/schema/mule/db
http://www.mulesoft.org/schema/mule/db/current/mule-db.xsd
http://www.mulesoft.org/schema/mule/java http://www.mulesoft.org/schema/mule/java/current/mule-java.xsd">
	<http:listener-config name="HTTP_Listener_config" doc:name="HTTP Listener config" doc:id="af3ce281-bf68-4c7b-83fb-52b2d2506677" >
		<http:listener-connection host="0.0.0.0" port="8081" />
	</http:listener-config>
	<flow name="helloworldFlow" doc:id="a13826dc-67a1-4cda-8133-bc16a59ddba2" >
		<http:listener doc:name="Listener" doc:id="82522c77-5c33-4003-820a-7a04b51c3001" config-ref="HTTP_Listener_config" path="helloworld"/>
		<logger level="INFO" doc:name="Logger" doc:id="b40bc107-1ab5-4a3f-8f20-d7dfb63e5acb" message="entering flow"/>
		<java:new doc:name="Instantiate appUtils" doc:id="c1854580-f4d4-4e5c-a34d-7ca185152d02" constructor="AppUtils(String)" class="com.vanrish.AppUtils" target="appInst" >
			<java:args ><![CDATA[#[{ paramVal: "Hello world" }]]]></java:args>
		</java:new>
		<java:invoke doc:name="Invoke" doc:id="9348e2cf-87fe-4ff7-958c-f430d0421702" instance="#[vars.appInst]" class="com.vanrish.AppUtils" method="generateRandomNumber(String)">
			<java:args ><![CDATA[#[{ numVal: null }]]]></java:args>
		</java:invoke>
		<java:invoke-static doc:name="Invoke static" doc:id="bc3e110c-d970-47ef-891e-93fb3ffb61bd" class="com.vanrish.AppUtils" method="encode(String)">
			<java:args ><![CDATA[#[{ plainString: "mystringval" }]]]></java:args>
		</java:invoke-static>
		<java:validate-type doc:name="Validate type" doc:id="288c791c-50eb-4be0-b924-56481dfdc023" class="com.vanrish.Utils" instance="#[vars.appInst]"/>
		<set-payload value="Success" doc:name="Set Payload" doc:id="8b6c9c0b-07c8-4e17-b649-2c24d4da8bea" />
	</flow>
</mule>

MuleSoft: CloudHub vCores Usage Optimization

MuleSoft CONNECT 2019 wrapped up last month in North America. CONNECT is one of the premier conferences for API-led connectivity and digital transformation. It brought content for developers, architects, and business executives across different business domains. At MuleSoft CONNECT, a plethora of market experts and business executives, including industry CEOs and CTOs, discussed their MuleSoft experience and the democratization of innovation.

During the conference, I got an opportunity to talk to some business executives about their MuleSoft experiences and challenges.

One of their biggest challenges is optimizing MuleSoft vCore usage in CloudHub to keep their projects within budget.

Here are a few steps you can take in a MuleSoft application to keep vCore usage low and the project within budget.

1. API Optimization — As per MuleSoft best practices, MuleSoft suggests API-led connectivity to expose data to applications within or outside of your organization through reusable and purposeful APIs.

The APIs used in an API-led approach to connectivity fall into three categories:

  • Experience APIs
  • Process APIs 
  • System APIs

When you are working on API-led connectivity, do you really need all three layers of APIs every time?

No, it is not necessary to implement all three layers of APIs every time.

API Layers

Here are some API-layer use cases that reduce vCore usage and optimize API-led connectivity.

  • Experience APIs — Experience APIs are similar to Process APIs, but unlike Process APIs they are tied to a specific business context, projecting data formats, interaction timings, or protocols into a specific channel and context. These APIs simplify your front-end data based on different GUIs. For example, if you are working on a PC website or a mobile website, you display data based on the user experience, so you need different APIs to serve that data. But if your application needs only the data, irrespective of user experience, you can skip the Experience APIs and work only with Process APIs or System APIs. This will save some vCores and keep the project within budget.
  • Process APIs — If you are working on complex business logic spanning different organizational departments, you can incorporate all of that business-specific data in the process layer and expose it through Process APIs. But if the APIs do not incorporate any complex business logic and most of the data is processed through System APIs, then you can skip the Process APIs and expose your data through System APIs. In this way you can save some vCores and keep your project within budget.

2. Salesforce Platform Events Integration – Salesforce integration with MuleSoft is one of the most common integration use cases. In the old days, Salesforce synced data through polling: a poll would run a couple of times a day and sync data between different Salesforce orgs. Since this is a polling process, it is not easy to predict the volume of data flowing through the MuleSoft application during a certain period of time, so in this case we go with a higher MuleSoft vCore size to avoid running out of memory.

Salesforce introduced "Platform Events", the Salesforce enterprise messaging platform, in June 2017. After the introduction of Platform Events, integration between MuleSoft and Salesforce has become very easy. The Platform Events enterprise messaging service is event based, so any create or update of an object within Salesforce generates an event and sends a payload to the Salesforce messaging queue. The MuleSoft Salesforce connector reads these payloads from the Salesforce messaging queue in FIFO order to sync the data. Since this integration is event based, as soon as MuleSoft receives an event from Platform Events it processes the event message, so there is never a large set of data to process at once. With this integration you can go with a lower vCore size and execute the project within budget.

3. Batch Process Optimization — MuleSoft allows you to process messages in batches. The Mule batch process provides a construct for asynchronously processing larger-than-memory data sets that can be split into individual records. MuleSoft batch jobs are typically used for extracting, transforming and loading (ETL) information into a target system like Hadoop.

MuleSoft needs a lot of memory/vCores to run large data sets through a batch process. These batch processes run at most once or twice a day, which leaves a large number of vCores idle for the rest of the day without any active usage. You can optimize vCore usage and reduce your batch processing cost by following these two steps.

  • Reuse vCores by deploying multiple batch applications — As you know, a batch runs at certain times of day, once or twice. Suppose one batch application runs every midnight and another runs every morning, and each takes 1 vCore; together the two applications consume a total of 2 vCores.

If you configure a CI/CD process like Jenkins or CodeBuild to deploy your batch applications into the cloud, it is very easy to reuse your vCores. You can configure your CI/CD process to build and deploy an application into the cloud when you want to run its batch. Once the batch is done, you undeploy that application and deploy the next batch application on the same memory. In this way you keep reusing your vCore memory and keep your project within budget.

  • Deploy batch applications on an on-premises MuleSoft server — Batch applications are simple and easy to maintain in most use cases, so it is often practical to maintain an on-premises Mule server and deploy your batch applications there without worrying much about vCore usage.

Link:

Salesforce Platform Event Mulesoft Integration : https://www.vanrish.com/blog/2018/10/01/mulesoft-salesforce-platform-events-integration/

Mulesoft: FedRamp Compliance Cloud Integration for Government

For fiscal year 2019, the US government estimated $45.8 billion in IT investments at major civilian agencies, which will be used to acquire, develop, and implement modern technologies. 78% of this budget goes to maintaining existing IT systems. In a constantly changing IT landscape, the migration of federal on-premises technologies to the cloud is increasing every year. Federal agencies have the opportunity to save money and time by adopting innovative cloud services to meet their critical mission needs and keep up to date with current technology. Federal agencies are required by law to protect any federal information that is collected, maintained, processed, disseminated, or disposed of by cloud service offerings, in accordance with FedRAMP requirements.

What is the Federal Risk and Authorization Management Program (FedRAMP)?

FedRAMP is a US government-wide program that delivers a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. The stakeholders of FedRAMP are:

  1. Federal Agencies
  2. FedRAMP PMO & JAB (Joint Authorization Board)
  3. Third Party Assessment Organizations (3PAOs)

FedRAMP Process — There are three ways a cloud service can be proposed for FedRAMP authorization:

  1. Cloud BPA — Cloud services through FCCI BPAs.
  2. Government Cloud Systems — Services must be intended for use by multiple government or government-approved agencies.
  3. Agency Sponsorship — This is the most popular route for cloud service providers (CSPs) to take when working toward a FedRAMP authorization. The CSP establishes a partnership with an agency, and they agree to work together toward an Authority to Operate (ATO).

MuleSoft FedRAMP-Authorized Integration Platform

MuleSoft recently announced FedRAMP authorization for Anypoint Platform. MuleSoft is one of the first integration platform companies with FedRAMP authorization, enabling both on-premises and cloud integration across federal and state government. With a FedRAMP-authorized Anypoint Platform, government IT teams can leverage the same core Anypoint Platform benefits in the cloud to accelerate their project delivery via reusable APIs. Anypoint Platform allows all government integration assets to be managed and monitored from a single, secure, cloud-based management console, simplifying operations and increasing IT agility.

MuleSoft Anypoint Platform provides a FedRAMP-compliant iPaaS for government organizations. Government IT integration projects deployed on Anypoint Platform within the MuleSoft Government Cloud can:

  1. Accelerate government IT project deliveries by deploying sophisticated cross-cloud integration applications and creating new APIs on top of existing data sources.
  2. Improve efficiency at lower cost by allowing IT integration teams to focus on designing, deploying, and managing integrations in the cloud, while agencies pay only for what they use.
  3. Reduce IT integration risk and increase application reliability by using self-healing mechanisms to recover from problems and load balancing.

What is MuleSoft Government Cloud?

MuleSoft Government Cloud is a FedRAMP-compliant, cloud-based deployment environment for Anypoint Platform.

  1. It is built on AWS GovCloud with FedRAMP controls.
  2. Mule runtimes are configured in secure mode to support the highest encryption standards and FIPS (Federal Information Processing Standard) 140-2 hardware and software encryption compliance.
  3. It is FedRAMP compliant at the moderate impact level.
  4. It has continuous third-party (3PAO) auditing and monitoring of security controls.

MuleSoft Government Cloud can be accessed through this link: https://gov.anypoint.mulesoft.com/login/. MuleSoft Government Cloud resources are available through Anypoint Exchange at https://gov.anypoint.mulesoft.com/exchange/.

If you are accessing the FedRAMP-compliant Anypoint Platform, after logging in you are presented with an end-user agreement to consent to. This is typical of FedRAMP-compliant government applications.

Conclusion — For executing any federal or state project involving different integrations as well as API enablement, the FedRAMP-compliant Anypoint Platform is one of the best options. It accelerates IT project deliveries, improves efficiency and reduces IT risk.

Mule 4: Consuming APIs through Mule 4 application

MuleSoft is all about API strategy and the digital transformation of your organization through APIs, whether in CloudHub or on premises. MuleSoft also provides a platform for APIs to monitor and analyze usage, control access and protect sensitive data with security policies. APIs are at the heart of digital transformation, enabling greater speed, flexibility and agility for any organization.

Exposing your APIs is one aspect of your digital transformation strategy, but consuming APIs is just as important. Consuming an API means an application either getting data from the API or creating/updating data through it. Most APIs are based on the HTTP/HTTPS protocol, so in Mule 4 consuming an API also starts with configuring HTTP/HTTPS.

Configuration of HTTP/HTTPS — HTTP/HTTPS configuration starts with selecting the protocol. If the API is available over HTTP, select the HTTP protocol with default port 80, or change the port based on the exposed API's documentation. If the API is available over a secured connection, select the HTTPS protocol with default port 443. Fill in the Host with the exposed API's endpoint, without any protocol, and fill the other fields with default values.

API authentication is available with five different selections:

  1. None – No authentication; the API is available to everyone.
  2. Expression – Custom or expression-based authentication.
  3. Basic authentication – Username/password authentication.
  4. Digest authentication — The web server negotiates credentials, such as a username and password, with the client.
  5. NTLM authentication — NT (New Technology) LAN Manager (NTLM), a suite of Microsoft security protocols intended to provide authentication, integrity, and confidentiality to users.


If you are working with a POST/PATCH/PUT method API to send data to the exposed API, set some important parameters based on the streaming mode. If the API is exposed in streaming mode, then you need to specify the content size of the stream; otherwise set the request streaming mode to "NEVER", in which case you do not need to set the content size, as sketched below.
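As a minimal sketch (the host and configuration names are placeholders that mirror the full example later in this post), the streaming mode is set on the HTTP request configuration:

<http:request-config name="HTTPS_Request_configuration" requestStreamingMode="NEVER">
	<http:request-connection host="api.vanrish.com" port="443" protocol="HTTPS"/>
</http:request-config>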

API GET Call – An API GET call implements the GET method of an API. Implementing a GET call requires parameters; based on these parameters the application gets a set of data. MuleSoft provides four ways to pass these parameters or values:

  • Body
  • Headers
  • Query Parameters
  • URI Parameters

Flow for GET Method

API POST/PUT/PATCH Call –

POST – Create data

PUT/PATCH – Update data

Similar to the GET method call, for the POST/PUT/PATCH methods the application sends API parameters based on the API's requirements. Since the application is creating/updating data through the POST/PUT/PATCH call, it sends this data through the body parameter with a content type, as in the sketch below.
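Here is a minimal sketch of a POST call; the /api/users path and the JSON fields are hypothetical, and it reuses the HTTPS request configuration from the full GET example below:

<http:request method="POST" doc:name="Request" config-ref="HTTPS_Request_configuration" path="/api/users">
	<!-- The body carries the data to create; output application/json also sets the content type -->
	<http:body ><![CDATA[#[output application/json
---
{
	name: "Test User",
	role: "admin"
}]]]></http:body>
</http:request>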

Flow for POST/PUT/PATCH Method

Here is the flow of the API GET call:

Implemented code

<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:db="http://www.mulesoft.org/schema/mule/db" xmlns:ee="http://www.mulesoft.org/schema/mule/ee/core"
xmlns:http="http://www.mulesoft.org/schema/mule/http"
xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/ee/core http://www.mulesoft.org/schema/mule/ee/core/current/mule-ee.xsd
http://www.mulesoft.org/schema/mule/db http://www.mulesoft.org/schema/mule/db/current/mule-db.xsd">
 <configuration-properties doc:name="Configuration properties" doc:id="4bfab9b8-5f36-4b72-b335-35f6f7c3627e" file="mule-app.properties" />
 
<http:listener-config name="HTTP_Listener_config" doc:name="HTTP Listener config" doc:id="49f690e3-8636-45a2-a3f3-244fa170f2f0" basePath="/media" >
<http:listener-connection host="0.0.0.0" port="8081" />
</http:listener-config>
<http:request-config name="HTTPS_Request_configuration" doc:name="HTTP Request configuration" doc:id="4e8927df-e355-4dcd-9bdc-bf154ab1146a" requestStreamingMode="NEVER">
<http:request-connection host="api.vanrish.com" port="443" protocol="HTTPS" connectionIdleTimeout="50000" streamResponse="true">
<http:authentication >
<http:basic-authentication username="2xxxxxxx-xxx0-xxbx-bxxf-xxxxxxxc5" password="xxxxxxxx3dxxxxxb153dbxxxxx29cxxx"/>
</http:authentication>
</http:request-connection>
</http:request-config>
<flow name="demo-mediaFlow" doc:id="a55b1d37-b63d-4336-8d8e-5b70bc354078">
<scheduler doc:name="Scheduler" doc:id="9dc041e5-3140-47a1-a13c-39ee3ba59389" >
<scheduling-strategy >
<fixed-frequency timeUnit="SECONDS"/>
</scheduling-strategy>
</scheduler>
<logger level="INFO" doc:name="Logger" doc:id="9bc54079-8718-4dc1-a433-91081dfdabff" message="Get flow Entering ..... "/>
<http:request method="GET" doc:name="Request" doc:id="872f3240-e954-469a-b330-83aa7e40c3db" config-ref="HTTPS_Request_configuration" path="/api/users">
         <http:headers ><![CDATA[#[output application/java
---
{
"client_id" : "xxxx2-d960-4xxx-b5df-4d704ce2xxxx",
"client_secret" : "xxxd5e8663xxxf1b153db732529cxxx",
"Range" : "items=0-2000"
}]]]></http:headers>
<http:uri-params ><![CDATA[#[output application/java
---
{
test : "123"
}]]]></http:uri-params>
     
</http:request>
<logger level="INFO" doc:name="Logger" doc:id="9b733315-038b-4f4c-9c2e-15fc026f0524" message="data coming from GET API ...... total payload size --- #[sizeOf(payload)]"/>
<ee:transform doc:name="Transform Message" doc:id="383e857d-efe7-45df-ab33-b75a073080b7" >
<ee:message >
<ee:set-payload ><![CDATA[%dw 2.0
output application/json
---
payload]]></ee:set-payload>
</ee:message>
</ee:transform>
<logger level="INFO" doc:name="Logger" doc:id="769a6306-ee9a-4f51-94ad-562ad1042c5e" message="Getting data from API #[payload]"/>
</flow>
</mule>

Mule 4: APIKit for SOAP Webservice

Mule 4 introduced APIkit for SOAP web services. It is very similar to APIkit for REST; SOAP APIkit accepts a WSDL file instead of a RAML file. APIkit for SOAP generates a working flow from a remote WSDL file or a WSDL file downloaded on your system.

To create a SOAP APIkit project, first create a Mule project with these steps in Anypoint Studio:

Under File Menu -> select New -> Mule Project

Mule 4 Project Settings

In the above picture, the WSDL file is selected from a local folder to create the Mule project.

Once you click Finish, it generates the default APIkit flows based on the WSDL file.

In this MuleSoft SOAP APIkit example project, the application consumes a SOAP web service, and it also exposes the WSDL and enables its own SOAP web service.

Mule 4 API Kit for Soap Router

In the APIkit SOAP router, the APIkit SOAP configuration defines the WSDL location, service and port from the WSDL file.

API Kit SOAP configuration

In the above configuration, the "soapkit-config" SOAP router looks up the requested method. Based on the requested method, it reroutes the request from the api-main flow to the method flow. In this example, the requested method is "ExecuteTransaction" from the existing WSDL, so the method flow name is:

<flow name="ExecuteTransaction:\soapkit-config">

In this example we are consuming the same WSDL, but the endpoint is different.

To call the same WSDL, we have to format our request based on the WSDL file. In DataWeave, we create the request based on the WSDL and send it through the HTTP connector.

Here is the DataWeave transformation to generate the request for the existing WSDL file:

%dw 2.0
output application/xml
ns soap http://schemas.xmlsoap.org/soap/envelope/
ns xsi http://www.w3.org/2001/XMLSchema-instance
ns ns0 http://localhost/Intellect/ExternalWebService
ns xsd http://www.w3.org/2001/XMLSchema
ns ns1 xsd:string
---
 
{
  	soap#Envelope @('xmlns:xsi': 'http://www.w3.org/2001/XMLSchema-instance'): {
  	 	
  	soap#Body: {
     	ExecuteTransaction @('xmlns': 'http://localhost/Intellect/ExternalWebService'): {
     	  Request @(xsi#'type': 'xsd:string'): payload.soap#Body.ns0#ExecuteTransaction.Request 
     	 
     	  }
     	
    	}
    	
  	}
}

Here is the main flow:

Main flow for API SOAP Kit

Here is the full code:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
xmlns:apikit-soap="http://www.mulesoft.org/schema/mule/apikit-soap"
xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
xmlns:ee="http://www.mulesoft.org/schema/mule/ee/core"
xmlns:http="http://www.mulesoft.org/schema/mule/http"
xmlns:wsc="http://www.mulesoft.org/schema/mule/wsc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core
http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http
http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/apikit-soap
http://www.mulesoft.org/schema/mule/apikit-soap/current/mule-apikit-soap.xsd
http://www.mulesoft.org/schema/mule/wsc
http://www.mulesoft.org/schema/mule/wsc/current/mule-wsc.xsd
http://www.mulesoft.org/schema/mule/ee/core
http://www.mulesoft.org/schema/mule/ee/core/current/mule-ee.xsd"
>
	<http:listener-config basePath="/fda" name="api-httpListenerConfig">
		<http:listener-connection host="0.0.0.0" port="8081"/>
	</http:listener-config>
	<apikit-soap:config httpStatusVarName="httpStatus" name="soapkit-config" port="ISTCS2SubmitOrderSoap" service="ISTCS2SubmitOrder" wsdlLocation="ISTCOrder.wsdl"/>
	<wsc:config doc:id="b2979182-c4e9-489b-9420-b9320cfe9311" doc:name="Web Service Consumer Config" name="Web_Service_Consumer_Config">
		<wsc:connection address="https://enterprisetest.vanrish.com/pub/xchange/request/atlas" port="ISTCS2SubmitOrderSoap" service="ISTCS2SubmitOrder" wsdlLocation="api/ISTCOrder.wsdl"/>
	</wsc:config>
	<http:request-config doc:id="408de2f8-c21a-42af-bfe7-2d7e25d153b0" doc:name="HTTP Request configuration" name="HTTP_Request_configuration">
		<http:request-connection host="enterprisetest.fadv.com" port="443" protocol="HTTPS"/>
	</http:request-config>
	<flow name="api-main">
		<http:listener config-ref="api-httpListenerConfig" path="/ISTCS2SubmitOrder/ISTCS2SubmitOrderSoap">
			<http:response statusCode="#[attributes.protocolHeaders.httpStatus default 200]"/>
			<http:error-response statusCode="#[attributes.protocolHeaders.httpStatus default 500]">
				<http:body><![CDATA[#[payload]]]></http:body>
			</http:error-response>
		</http:listener>
		<apikit-soap:router config-ref="soapkit-config">
			<apikit-soap:attributes><![CDATA[#[%dw 2.0
output application/java
---
{
	headers: attributes.headers,
	method: attributes.method,
	queryString: attributes.queryString
}]]]></apikit-soap:attributes>
		</apikit-soap:router>
	</flow>
	<flow name="ExecuteTransaction:\soapkit-config">
		<logger doc:id="62a3748e-b81c-4a95-9af0-99c5a282b237" doc:name="Logger" level="INFO" message="Entering into flow"/>
		<ee:transform doc:id="c130d7ff-bd70-4af0-b7d4-9a6caa0d771f">
			<ee:message>
				<ee:set-payload><![CDATA[%dw 2.0
output application/xml
ns soap http://schemas.xmlsoap.org/soap/envelope/
ns xsi http://www.w3.org/2001/XMLSchema-instance
ns ns0 http://localhost/Intellect/ExternalWebService
ns xsd http://www.w3.org/2001/XMLSchema
ns ns1 xsd:string
---
{
	soap#Envelope @('xmlns:xsi': 'http://www.w3.org/2001/XMLSchema-instance'): {
		soap#Body: {
			ExecuteTransaction @('xmlns': 'http://localhost/Intellect/ExternalWebService'): {
				Request @(xsi#'type': 'xsd:string'): payload.soap#Body.ns0#ExecuteTransaction.Request
			}
		}
	}
}]]></ee:set-payload>
			</ee:message>
		</ee:transform>
		<http:request config-ref="HTTP_Request_configuration" doc:id="6d7001f3-b90a-4ed8-96d2-d577329d21d5" doc:name="Request" method="POST" path="/pub/xchange/request/atlas"/>
		<logger doc:id="a05e704f-e539-48f3-9556-fe66641e3f64" doc:name="Logger" level="INFO" message="#[payload]"/>
	</flow>
</mule>


Digital Transformation Journey

When I started my career, the Y2K issue was going on. Every company was trying to convert their data to be compatible with the upcoming Y2K rollover. In that era, companies allocated budgets to these projects without a second thought. They wanted to make their systems Y2K compatible as soon as possible.

Currently, companies are going through a similar situation; this time it is digital transformation. All industries want to enable digital transformation to expand their business. Digital transformation is touching every company in every industry, but CEOs and CTOs face a big challenge in enabling data for the business and driving their company toward digital transformation.

Suppose you are in the supply chain industry and your data is sitting in some legacy system. In this case the data is of no use: the business will not be able to run analytics on it to identify customer behavior or new business opportunities, and it will not be able to add any business value with this data. A company can go out of business due to this lack of vision and data transformation. In this fast-paced world, staying relevant to your customers makes it essential for your business to move data fast and enable new business opportunities.

Here are a few challenges CTOs and architects face in enabling data transformation and delivering innovation.

  • Systems are not delivering a seamless experience within the organization; every department works independently.
  • There is a lack of support for a 360-degree view of a customer or an agent across the various touchpoints of the business.
  • There is duplicate data between systems and a lack of data transparency.
  • Growing security and compliance needs are not being implemented as the business grows.

Here are some steps to achieve your company's digital transformation vision.

1. Establish a digital vision – If you are leading your company toward digital transformation, it is essential that you have a clear vision and strategy around your business requirements. Train all stakeholders to embrace the changes that digital transformation brings. Engage business leadership in developing a business capability roadmap.

2. Seamless experiences – Establish a seamless experience across user experience design, mobile, agent, customer and service center. Interfaces need to be fast and provide self-service capability. Enable a single source of information available system-wide through an API-based integration layer.

3. 360-degree customer view – Set up a process to get a complete view of customers by aggregating data from the various touchpoints that a customer may use to contact a company to purchase products and receive service and support. Data assets are buried in the data center; APIs bring this data in front of the people who need it to drive new products and new digital services for customers, providing a 360-degree view of that data.

4. Embrace the cloud – The cloud provides a platform to accelerate a company's digital transformation journey. Companies are innovating very fast in the cloud, taking advantage of its lower cost and faster delivery without worrying about IT infrastructure. They are moving data and enabling it for artificial intelligence and analytics thanks to the cloud's shorter rollout times.

Conclusion – A better digital transformation strategy brings a better workplace and increased stakeholder involvement. It increases productivity and brings more innovation to your business.

Mule 4: Ease Your Integration Challenges

The much-awaited Mule 4 was officially announced at MuleSoft CONNECT 2018 in San Jose. When MuleSoft was born, it was really to create software that helps systems or sources of information interact quickly, within or outside a company. So speed has been an incredibly important thing over the years for developing and connecting systems. The need for speed in application development hasn't changed drastically over the years, but the needs and requirements of customers' applications have changed. The integration landscape has also magnified: there are hundreds of new systems and sources of information to connect to, with more and more integration requirements. This integration landscape gets very messy very quickly.

Mule 4 provides a simplified language and a simplified runtime engine, and it ultimately reduces management complexity. It helps customers and developers deliver applications faster. Mule 4 radically simplifies development: it provides new tooling to simplify the development, deployment and management of your integrations/APIs, and a platform to reuse Mule components without affecting existing applications, for faster development. Mule 4 is an evolution of Mule 3; you will not feel lost in Mule 4 if you are coming from Mule 3, but Mule 4 implements fewer concepts and steps to simplify the whole development/integration process. In Mule 4, Java skills are now optional. In this release MuleSoft is improving the tooling and making error reporting more robust and platform independent.

Now let's go one by one through these new Mule 4 features.

1. Simplified Event Processing and Messaging — A Mule event is immutable, so every change to an instance of a Mule event results in the creation of a new instance. It contains the core information processed by the runtime and travels through the components of your Mule app following the configured application logic. A Mule event is generated when a trigger (such as an HTTP request or a change to a database or file) reaches the event source of a flow. This trigger could be an external event triggered by a resource that might be external to the Mule app.

Mule 4 Event flow

2. New Event and Message structure — Mule 4 includes a simplified Mule message model in which each Mule event has a message and variables associated with it. A Mule message is composed of a payload and its attributes (metadata, such as file size). Variables hold arbitrary user information such as operation results, auxiliary values, and so on.

Mule 4 message

Mule 4 does not have inbound, outbound and attachment properties like Mule 3. In Mule 4, all information is saved in variables and attributes. Attributes in Mule 4 replace inbound properties, and they can be easily accessed through expressions.

These are the advantages of using attributes in Mule 4:

  • They are strongly typed, so you can easily see what data is available.
  • They can easily be stored in variables that you can access throughout your flow.
Example:
#[attributes.uriParams.jobnumber]

Outbound properties — Mule 4 has no concept of outbound properties like Mule 3. Instead, you can set the response status code or header information in Mule 4 through DataWeave expressions without introducing any side effects in the main flow.

Example:

 
<ee:transform xsi:schemaLocation="http://www.mulesoft.org/schema/mule/ee/core
 http://www.mulesoft.org/schema/mule/ee/core/current/mule-ee.xsd">
       <ee:message>
         <ee:set-payload>
           <![CDATA[
                %dw 2.0
                output application/json
                 ---
                 {message: "Bad request"}]]>
           </ee:set-payload>
         </ee:message>
    <ee:variables>
       <ee:set-variable variableName="httpStatus">400</ee:set-variable>
    </ee:variables>
  </ee:transform>

Session properties – In Mule 4, session properties no longer exist. Data stored in variables is passed along to different flows, as in the sketch below.
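Here is a minimal sketch (the flow and variable names are made up for illustration) showing a variable set in one flow and read in another flow reached through a flow-ref, which covers the typical Mule 3 session-property use case:

<flow name="mainFlow">
	<set-variable variableName="customerId" value="#[payload.id]" doc:name="Set customerId"/>
	<flow-ref name="childFlow" doc:name="Call childFlow"/>
</flow>

<flow name="childFlow">
	<!-- vars.customerId set in mainFlow is still available here -->
	<logger level="INFO" doc:name="Logger" message="#['Processing customer ' ++ (vars.customerId as String)]"/>
</flow>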

3. Seamless data access & streaming – Mule 4 has fewer concepts and steps. Java knowledge is now optional for every step and task. Mule 4 leverages DataWeave not only as a transformation language but as the expression language as well. For example, in Mule 3, XML/CSV data had to be converted into Java objects to parse or reroute it. Mule 4 gives you the ability to parse or reroute through DataWeave expressions without converting to Java. This simplifies your implementation without using Java.

Mule 4 Data Access

4. DataWeave 2.0 — Mule 4 introduces DataWeave as the default expression language, replacing Mule Expression Language (MEL), with a scripting and transformation engine. Combined with the built-in streaming capabilities, this change simplifies many common tasks. Mule 4 simplifies data iteration: DataWeave knows how to iterate over a JSON array, and you don't even need to specify that it is JSON. There is no need to use <json:json-to-object-transformer /> to convert the data into Java objects, as in the example below.
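As a minimal sketch (the field names are made up), a Transform Message component can map over a JSON array payload directly, with no conversion to Java objects:

<ee:transform doc:name="Transform Message">
	<ee:message>
		<ee:set-payload><![CDATA[%dw 2.0
output application/json
---
// payload is assumed to be a JSON array such as [{"id": 1, "name": "laptop"}, ...]
payload map (item, index) -> {
	id: item.id,
	name: upper(item.name)
}]]></ee:set-payload>
	</ee:message>
</ee:transform>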

Mule 4 vs Mule 3 flow comparison

Here are a few points about DataWeave 2.0:

  • Simpler syntax to learn
  • Human readable descriptions of all data types
  • Applies complex routing/filter rules.
  • Easy access to payload data without the need for transformation.
  • Performs any kind of data transformation, normalization, grouping, joins, pivoting and filtering.

5. Repeatable Streaming – Mule 4 introduces repeatable streams as its default framework for handling streams. To understand the changes introduced in Mule 4, it is necessary to understand how Mule 3 data streams are consumed.

Mule 3 data streaming examples

In the above three Mule 3 flows, once the stream data is consumed by one node, the stream is empty for the next node. So in the first example, in order to log the stream payload, the logger has to consume the entire stream of data from the HTTP connector. This means the full content is loaded into memory, so if the content is too big, there is a good chance the application will run out of memory.

Mule 4 repeatable streams enable you to:

  • Read a stream more than once
  • Have concurrent access to the stream.
  • Random Access
  • Streams of bytes or streams of objects

As a component consumes the stream, Mule saves its content into a temporary buffer. The runtime then feeds the component from the temporary buffer, ensuring that each component receives the full stream, regardless of how much of the stream was already consumed by any prior component.

Here are a few points on how repeatable streams work in Mule 4:

  • The payload is read into memory as it is consumed.
  • If the payload exceeds the stream buffer size (512 KB by default), it is persisted to disk.
  • The stream buffer size can be increased or decreased through configuration to optimize performance (see the sketch below).
  • Any stream can be read at any random position, by any thread, concurrently.
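For example, here is a minimal sketch of tuning that buffer on an operation that returns a stream, by choosing a file-store repeatable stream with a larger in-memory buffer (it assumes the File connector and a File_Config global element are in the project, and the path is hypothetical):

<file:read config-ref="File_Config" path="largeInput.json" doc:name="Read">
	<!-- Keep up to 1 MB in memory; anything beyond that is buffered to disk -->
	<repeatable-file-store-stream inMemorySize="1" bufferUnit="MB"/>
</file:read>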

6. Error Handling — In Mule 4, error handling has changed significantly. You can now discover errors at design time with the visual interface. You no longer need to deal with Java exceptions directly, and it is easy to discover errors while you are building a flow: every flow lists all the possible errors that can potentially arise during execution.

Mule 4 Error Handling

Errors that occur in Mule fall into two categories:

  • Messaging errors
  • System errors

Messaging errors — Mule throws a messaging error (a Mule error) whenever a problem occurs within a flow. To handle Mule errors, you can set up On Error components inside the scope-like Error Handler component. By default, any unhandled errors are logged and propagated.

System errors — Mule throws a system error when an exception occurs at the system level. If no Mule event is involved, the errors are handled by a system error handler.

Try scope — Mule 4 introduces a new Try scope that you can use within a flow to handle errors for just its inner components/connectors. The Try scope also supports transactions, and in this way it replaces the old Mule 3 transactional scope. A minimal sketch follows.
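Here is a minimal sketch of the Try scope (the request path and configuration name are placeholders); an error in the inner HTTP request is handled locally with on-error-continue so the rest of the flow keeps running:

<flow name="orderFlow">
	<try doc:name="Try">
		<http:request method="GET" config-ref="HTTP_Request_configuration" path="/api/orders" doc:name="Request"/>
		<error-handler>
			<!-- Handle only connectivity failures of the inner request and continue the flow -->
			<on-error-continue type="HTTP:CONNECTIVITY">
				<logger level="WARN" doc:name="Logger" message="Order API unreachable, continuing with empty result"/>
				<set-payload value="#[[]]" doc:name="Set Payload"/>
			</on-error-continue>
		</error-handler>
	</try>
	<logger level="INFO" doc:name="Logger" message="#[payload]"/>
</flow>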

Mule 4 A new try catch block

7. Class Loader Isolation — The class loader completely separates the application from the Mule runtime and the connector runtime, so library changes (JAR versions) do not affect your application. This also gives your application the flexibility to run any Spring version without worrying about the MuleSoft Spring version. Connectors are distributed outside the runtime as well, making it possible to get connector enhancements and fixes without having to upgrade the runtime, or vice versa.

The picture above shows that every component in an application has its own class loader and runs independently on it.

8. Runtime Engine — The Mule 4 engine is a new reactive, non-blocking engine. In Mule 4, non-blocking flows are always on, so there is no processing strategy setting on a flow. One of the best features of the Mule 4 engine is that it is a self-tuning runtime engine. What does this mean? The Mule 4 engine processes your applications on three different thread pools, and the runtime knows which operation should be executed by each thread pool: each operation is placed in the corresponding pool based on whether it is CPU intensive, CPU light, or an I/O operation. The three pools then resize dynamically and automatically to execute the applications through self-tuning.


Mule 4 : Self tuning run time engine

So self-tuning creates custom thread pools based on specific tasks. The Mule 4 engine makes it possible to achieve optimal performance without manual tuning steps.

Conclusion

Overall, Mule 4 is trying to make application development easy, fast and robust. There are more features in Mule 4 which I will try to cover in my next blog, along with more in-depth information on the topics above. Please stay tuned for my next blog.

MuleSoft: Salesforce Platform Events Integration

In Summer 2017, Salesforce released the new event-driven architecture feature "Platform Events". Salesforce is known for its custom metadata platform, and now it is delivering a custom messaging platform so that Salesforce customers can build and publish their own events. Platform Events enables customers to increase business productivity and efficiency through event-based integration. This feature reduces point-to-point integration and expands the existing capabilities alongside integration options like Outbound Messaging, Apex Callouts, and the Streaming API. With Platform Events, there are two parties to the communication: a sender and a receiver. They are two of the components of an event-driven architecture.

Before going any further, let's define some Platform Events terminology.

Event — A change in state that is meaningful in a business process. For example, if opportunities are created or updated in Salesforce, this action generates an event within Salesforce.

Event message – An event message is the payload of an event. For example, events are generated after creating or updating opportunities, so the event carries all the updated data, or the delta of the data, as its payload.

Event producer – The event producer publishes an event with an event message. For example, it publishes the opportunity payload after generating the event for another system.

Event channel — A stream of events on which an event producer sends event messages and event consumers read those messages.

Event consumer — A subscriber/event consumer subscribes to an event channel and receives messages from the event bus. For example, an application subscribing to an event channel for further processing is an event consumer.

Event-based software architecture

Set Up Platform Events in Salesforce

  1. On the Salesforce page, click the Setup icon in the top-right navigation menu and select Setup.

    Salesforce setup link
  2. Enter Platform Events into the Quick Find box and then select Data > Platform Events.

    Salesforce platform event link
  3. Click New Platform Event.
  4. In the New Platform Event form, fill in all the fields:
    1. Field Label: EnterpriseTestSync
    2. Plural Label: EnterpriseTestSyncs
    3. Object Name: EnterpriseTestSync

      Salesforce platform event configuration
  5. Click Save
  6. You will be redirected to the EnterpriseTestSync Platform Event page. By default, it creates some standard fields.

    Salesforce platform event standard fields
  7. Now you need to create Custom Platform Event fields that correspond to your EnterpriseTestSync. In the Custom Fields & Relationships section, click New to create a field for EnterpriseTestSync.
  8. Make sure that the Enterprise Test Sync API Name is EnterpriseTestSync__e and that Custom Fields & Relationships looks like this.

    Salesforce platform event API name
  9. If you have any triggers for the platform event, you can create them in the trigger section.
  10. Click Save.

The Save action creates the platform event in Salesforce. In the next section, we create the MuleSoft integration flow.

Integrating MuleSoft and Platform Events

To integrate with Salesforce Platform Events, download the MuleSoft Salesforce connector v8.4.0 or later from Anypoint Exchange.

In my example, I am creating an application which syncs Salesforce opportunities between two Salesforce instances. Any create or update of an opportunity creates a platform event in the first Salesforce instance. This platform event is subscribed to by the MuleSoft Salesforce platform event connector: MuleSoft receives the platform event and its message from the first Salesforce instance, transforms the message into another format, and publishes it to the other Salesforce instance's platform event. A platform event can be tracked by its replay ID, a unique field assigned when Salesforce generates any platform event. Platform event messages persist for only 24 hours in the platform event bus, and a message can be replayed within those 24 hours.

Here are the steps for MuleSoft integration with Salesforce Platform Events and the flow to communicate between the two Salesforce platform events.

  1. Configure Salesforce Basic Authentication from the global elements in Anypoint Studio.

    Mulesoft – Salesforce Basic Authentication configuration
    1. Configure the Salesforce connector to listen for Salesforce platform events from the event channel:
      • Select the operation "Replay streaming channel".
      • Streaming Channel: Add "/event/EnterpriseTestSync__e". "EnterpriseTestSync__e" is the API name of the Salesforce platform event; the connector listens for this event under /event/.
      • Replay option: There are three options:
        1. ALL – This option replays all messages from the event channel.
        2. FROM_REPLAY_ID – This option replays only from a specific event message replay ID.
        3. ONLY_NEW – This option replays only new event messages from the channel.
      • Replay Id: For the ALL option we pass -1. For the FROM_REPLAY_ID option we pass the specific event message replay ID, and for ONLY_NEW we pass -1.
      • The "Resume from the Last Replay Id" checkbox resumes from the last replay ID and ignores the rest.

      Mulesoft – Salesforce platform event subscribe configuration
  2. Once this is configured, the application is ready to accept event messages from the platform event channel. Add transformation logic to publish the platform event into the other Salesforce instance.
  3. Configure the Salesforce platform event connector to publish the event message:
    • Operation: Publish platform event message
    • Platform Event Name: Opportunity_Event__e
    • Platform Event Message: Default

      Mulesoft – Salesforce platform event publish configuration
    Once you configure these endpoints, the application is ready to listen for events from the first Salesforce instance's platform event and publish a platform event into the other Salesforce instance.

    Mulesoft – Salesforce platform event flow

    Here is the flow of this application.

  4. Here is the code of this application:
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:tracking="http://www.mulesoft.org/schema/mule/ee/tracking"
xmlns:dw="http://www.mulesoft.org/schema/mule/ee/dw"
xmlns:sfdc="http://www.mulesoft.org/schema/mule/sfdc"
xmlns="http://www.mulesoft.org/schema/mule/core"
xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
xmlns:spring="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.mulesoft.org/schema/mule/ee/dw http://www.mulesoft.org/schema/mule/ee/dw/current/dw.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/sfdc http://www.mulesoft.org/schema/mule/sfdc/current/mule-sfdc.xsd
http://www.mulesoft.org/schema/mule/ee/tracking http://www.mulesoft.org/schema/mule/ee/tracking/current/mule-tracking-ee.xsd">

<flow name="bu-vanrish-eventFlow">

<sfdc:replay-streaming-channel config-ref="Salesforce__Basic_Authentication" streamingChannel="/event/EnterpriseTestSync__e" replayOption="ALL" replayId="-1" doc:name="Salesforce (Streaming)"/>

<logger message="Before   Transformation -- #[payload]" level="INFO" doc:name="Logger"/>
<dw:transform-message doc:name="Transform Message">
<dw:set-payload><![CDATA[%dw 1.0
%output application/java
---
{
Name__c: payload.payload.Name__c,
StageName__c:payload.payload.StageName__c,
CreatedById:payload.payload.CreatedById,
Amount__c:payload.payload.Amount__c,
Opp_ID__c:payload.payload.Opp_ID__c
}]]></dw:set-payload>
</dw:transform-message>

<logger message="After transformation -- #[payload]" level="INFO" doc:name="Logger"/>

<sfdc:publish-platform-event-message config-ref="Salesforce__Basic_Authentication_van" platformEventName="Opportunity_Event__e" doc:name="Salesforce"/>

<logger message="BU Vanrish complete flow-- #[payload]" level="INFO" doc:name="Logger"/>
</flow>
</mule>

APIs for IOT and FOG computing

IOT (Internet of Things) is transforming business and bringing a new revolution to all kinds of industries. IOT devices generate terabytes of data. To handle this unprecedented volume, variety and velocity of data, IOT needs a new kind of infrastructure to support the whole IOT ecosystem. FOG computing is the part of the IOT ecosystem that supports large volumes of data with quick response. I explained in my previous blog how FOG computing now plays a major role for IOT devices. FOG is an intermediate platform that collaborates between cloud computing and edge computing (IOT) to transfer data. FOG can hold a small amount of data and has less computing power; large data sets are stored in the cloud and heavy computation is done in the cloud.

APIs (Application Programming Interfaces) play a major role in transferring data from the edge device (IOT) to the FOG node and from the FOG node to the cloud (Internet). APIs help the edge device collaborate with the FOG node and the FOG node with the cloud, and they play a major role in maintaining the volume, variety and velocity of data in the IOT infrastructure.

APIs work over the HTTP/HTTPS protocol. APIs are lightweight and simple, and enabling them takes a very small amount of resources, so an API can be enabled on a small system and consumed without using too many resources. This property helps to transfer data from the edge device (IOT) to the FOG node and from the FOG node to the cloud. The API does not play a mechanical role; it is responsible for optimizing data transfer. Properly enabling APIs between these nodes increases the efficiency and computational power of all IOT devices. The FOG node is the intermediate node between the IOT device and the cloud, so it is responsible for receiving data from the edge (IOT) device and transferring that data to the cloud. Communication between the edge (IOT) device and the FOG node is very frequent, and the data provided by the API drives all the intermediate, quick computation on the FOG node.

The cloud is still the big stakeholder for holding all data and performing large computations for IOT devices. APIs provide data to the cloud from the FOG node at certain intervals for heavy computation. As edge (IOT) systems get more complex, FOG computation responsibilities will increase, and APIs will come into the picture to provide more data to the FOG node and from the FOG node to the cloud.

API Integration of IOT with Fog and Cloud computing.

These are a few benefits of enabling APIs for IOT devices and FOG nodes:

  • APIs provide the flexibility to connect any IOT device to a FOG node and a FOG node to the cloud network.
  • APIs provide seamless connectivity between these systems.
  • APIs bring the whole IOT system into one seamless environment, so it is very easy to debug these systems.
  • APIs are very easy to develop and deploy, so it's easy to maintain these systems.
  • Provisioning of IOT devices also becomes very easy by enabling APIs.
  • According to a Gartner study, IOT security is one of the big concerns. APIs provide one seamless system and network to mitigate this risk.