IoT (Internet of Things) is revolutionizing our lives. According to a Gartner report, the IoT market will grow into a 58-billion-dollar opportunity by 2025. It touches every part of our lives, and in the pandemic era we found even more uses for IoT devices to maintain social distancing.
IoT is also one of the main disruptive technologies in our businesses, affecting every business domain, including healthcare, retail, automotive, and security.
There is a wide range of IoT benefits in business:
Enhanced productivity
Better customer experience
Cost-effectiveness
A CRM system keeps all of your customer relationship information, such as data, notes, metrics and more, in one place. CRM helps small businesses take the burden off the IT management team by automating business processes. It also helps employees keep their focus on critical business areas.
APIs integrate these two unrelated systems. APIs enable the systems to optimize and streamline the whole business process. The API is the main communication channel for building robust processes and keeping both systems updated in real time. APIs also make it possible to build context-based applications that use IoT and CRM to interact with the physical world.
Here are a few areas where IoT, with the help of APIs, helps a CRM system optimize the business process.
Optimized customer service – Before your customer finds an error in your service or product, you proactively act on the error and fix it. This helps build your relationship with the customer.
Increased sales – With the help of IoT and CRM systems, you find untapped opportunities and use them to increase your sales.
Personalized customer experience – You analyze the data provided by the IoT and CRM systems and build user-based predictive models to enable a personalized experience for each user.
Customer retention – CRM provides customer data and relationship history, while IoT data captures customer behavior. Together they help any business personalize and target marketing for its customers.
Omnichannel in-store experience – IoT and CRM help a business enable a 360-degree omnichannel customer experience. This process helps suggest the products a customer is likely to purchase.
API integration with IoT and CRM helps a business achieve a higher degree of personalization, targeted marketing, optimized pricing models, higher revenue, and enhanced customer satisfaction.
Mule 4 introduces DataWeave 2.0 as the default expression language replacing Mule Expression Language (MEL). DataWeave 2.0 is tightly integrated with the Mule 4 runtime engine, which runs the scripts and expressions in your Mule application.
Since DataWeave 2.0 is the default expression language for Mule 4, DataWeave can be used almost anywhere within your Mule application. In some use cases, DataWeave needs to call a Java method or instantiate a Java class to execute complex Java business logic.
In my previous blog I explained the usage of Java within a MuleSoft flow. In this blog I explain the usage of Java within DataWeave 2.0.
There are two ways we can use Java within DataWeave code:
Calling a Java method
Instantiating a Java class
1. Calling a Java method – There is a restriction when calling Java from DataWeave: you can only call static methods (methods that belong to a Java class, not methods that belong to a specific instance of a class). Before making a method call, you must import its Java class.
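For example, here is a minimal sketch that imports java.util.UUID and calls its static randomUUID() method from a DataWeave script:

%dw 2.0
import java!java.util::UUID
output application/json
---
{
  // Call a static Java method directly from DataWeave
  transactionId: UUID::randomUUID()
}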
2. Instantiating a Java class – DataWeave allows you to instantiate a new object of any Java class, but you cannot call its instance methods through DataWeave. You can only refer to the instance as a variable.
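As a sketch, the ::new constructor syntax instantiates an imported class; here the resulting java.util.ArrayList instance is only referenced as a value, not invoked:

%dw 2.0
import java!java.util::ArrayList
output application/json
---
{
  // Instantiate a Java class; DataWeave can hold the instance
  // but cannot call instance methods on it
  emptyList: ArrayList::new()
}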
MuleSoft is a lightweight integration and API platform that allows you to connect anything anywhere and expose your data through APIs. Mule evolved from Java and the Spring framework. MuleSoft supports multiple languages, although all Mule modules are developed in Java.
Since Mule evolved from Java, it can use Java classes and methods directly in a Mule flow. This capability gives Mule developers the flexibility to use Java for complex business logic.
There are several ways to use Java within Mule. Four Java modules are available in a MuleSoft flow:
New
Invoke
Invoke static
Validate type
To explain all these components and their uses in a Mule flow, I created the Utils.java and AppUtils.java classes.
1. New – An instance of the AppUtils.java class can be created by calling a constructor of this class through the MuleSoft New component within a Mule flow.
The AppUtils Java class defines two constructors, so the constructor properties for the New component show two options.
In the configuration below, an instance of the AppUtils class is created and placed into the "appInst" target variable so that the same instance can be reused later in the Mule flow.
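Here is a minimal sketch of the New component configuration (the com.vanrish package name and constructor argument are assumed for illustration):

<java:new doc:name="New" class="com.vanrish.AppUtils"
    constructor="AppUtils(String)" target="appInst">
  <java:args>#[{ arg0: 'invoice' }]</java:args>
</java:new>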
2. Invoke – With the New Java module we instantiated the AppUtils.java class and placed the instance into the "appInst" variable. Now, to use this variable, set up the Invoke module and call one of the methods defined in the AppUtils.java class. AppUtils.java defines one non-static method, "generateRandomNumber", which takes a String parameter. In the example below we call this method through the Invoke module.
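A sketch of the Invoke configuration, reusing the instance stored in the "appInst" variable (the com.vanrish package is again assumed):

<java:invoke doc:name="Invoke" instance="#[vars.appInst]"
    class="com.vanrish.AppUtils" method="generateRandomNumber(String)">
  <java:args>#[{ arg0: '1000' }]</java:args>
</java:invoke>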
3. Invoke static – The Invoke static Java module enables a Mule flow to call a static Java method. This is one of the easiest ways to call a Java method in a Mule flow.
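A sketch of the Invoke static configuration, assuming Utils.java exposes a hypothetical static formatDate(String) method:

<java:invoke-static doc:name="Invoke static"
    class="com.vanrish.Utils" method="formatDate(String)">
  <java:args>#[{ arg0: payload.date }]</java:args>
</java:invoke-static>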
4. Validate type – The Validate type Java module applies Java's instanceof check to a value. This module accepts an "Accept subtypes" parameter, which indicates whether the operation should accept all subclasses of the given class. By default acceptSubtypes="true", meaning the operation accepts all subclasses of the main class; if it is set to false (acceptSubtypes="false"), the operation throws an error (JAVA:WRONG_INSTANCE_CLASS) during execution when a subtype is received.
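A sketch of the Validate type configuration, checking that the stored instance is exactly an AppUtils object:

<java:validate-type doc:name="Validate type"
    class="com.vanrish.AppUtils" value="#[vars.appInst]"
    acceptSubtypes="false"/>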
MuleSoft CONNECT 2019 wrapped up last month in North America. CONNECT is one of the premier conferences for API-led connectivity and digital transformation, and it brought plenty of content for developers, architects, and business executives across different business domains. At MuleSoft CONNECT, a plethora of market experts and business executives, including industry CEOs and CTOs, discussed their MuleSoft experience and the democratization of innovation.
During these conferences, I got an opportunity to talk to some business executives about their MuleSoft experiences and challenges. One of the biggest challenges is optimizing MuleSoft vCore usage in CloudHub to keep projects within budget.
Here are a few steps to keep a MuleSoft application's vCore usage low and the project within budget.
1. API Optimization – As a best practice, MuleSoft suggests API-led connectivity to expose data to applications within or outside of your organization through reusable and purposeful APIs.
The APIs used in an API-led approach to connectivity fall into three categories:
Experience APIs
Process APIs
System APIs
When you are working on API-led connectivity, do you really need all three layers of APIs every time? No, it is not necessary to implement all three layers every time.
Here are some API-layer use cases that save vCore usage and optimize API-led connectivity.
Experience APIs – Experience APIs are similar to Process APIs, but unlike Process APIs, they are tied to a specific business context and project data formats, interaction timings, or protocols into a specific channel and context. These APIs simplify your front-end data for different GUIs. For example, if you serve both a desktop website and a mobile website, you display data based on the user experience, so you need different APIs to serve that data. But if your application needs only the data, irrespective of user experience, you can skip the Experience API and let the application work directly with Process APIs or System APIs. This saves some vCores and keeps the project within budget.
Process APIs – If you are working on complex business logic that draws on different organizational departments, you can incorporate all of that business-specific data in the process layer and expose it through Process APIs. But if your APIs do not incorporate any complex business logic and most of the data passes straight through from System APIs, you can skip the Process APIs and expose your data directly through System APIs. In this way you save some vCores and keep your project within budget.
2. Salesforce Platform Events Integration – Salesforce integration with MuleSoft is one of the most common integration use cases. In the old days, Salesforce synced its data through polling: polls ran a couple of times a day and synced data between different Salesforce orgs. Because polling is periodic, it is not easy to predict the volume of data flowing through the MuleSoft application during a given period, so projects chose a higher MuleSoft vCore size to avoid running out of memory.
Salesforce introduced "Platform Events", its enterprise messaging platform, in June 2017, and since then integrating MuleSoft with Salesforce has become very easy. The "Platform Events" enterprise messaging service is event-based: any create or update of an object within Salesforce generates an event and sends a payload to the Salesforce messaging queue. The MuleSoft Salesforce connector reads these payloads from the Salesforce messaging queue in FIFO order to sync the data. Since this integration is event-based, MuleSoft processes each Platform Event message as soon as it receives the event, so there is never a large set of data to process at any one time. With this integration you can choose a lower vCore size and execute the project within budget.
3. Batch Process Optimization – MuleSoft allows you to process messages in batches. The Mule batch process provides a construct for asynchronous processing of larger-than-memory data sets that can be split into individual records, typically for extracting, transforming, and loading (ETL) information into a target system such as Hadoop.
MuleSoft needs a large amount of memory/vCores to run large data sets in a batch process, yet these batch processes run at most once or twice a day, holding a large number of vCores idle for the rest of the day without any active usage. You can optimize vCore usage and reduce your batch processing cost with the following two steps.
Reuse vCores by deploying multiple batch-process applications – As you know, a batch runs at certain times of day, once or twice. Suppose one batch application runs every midnight and another runs every morning, and each takes 1 vCore; together the two applications consume 2 vCores.
If you configure a CI/CD process such as Jenkins or CodeBuild to deploy your batch applications into the cloud, it is very easy to reuse your vCores. Configure your CI/CD process to build and deploy an application into the cloud when you want the batch to run. Once the batch is done, undeploy that application and deploy the next batch application on the same memory. In this way you keep reusing your vCore memory and keep your project within budget.
Deploy batch applications on an on-premises Mule server – Batch process applications are simple and easy to maintain in most use cases, so it is also easy to maintain an on-premises Mule server and deploy your batch applications there without worrying much about vCore usage.
MuleSoft is all about API strategy and the digital transformation of your organization through APIs, whether in CloudHub or on premises. MuleSoft also provides a platform for APIs to monitor and analyze usage, control access, and protect sensitive data with security policies. APIs are at the heart of digital transformation, enabling greater speed, flexibility, and agility for any organization.
Exposing your APIs is one aspect of your digital transformation strategy, but consuming APIs is just as important as exposing them. Consuming an API means an application either gets data from the API or creates/updates data through it. Most APIs are based on the HTTP/HTTPS protocol, so in Mule 4 consuming an API also starts with configuring HTTP/HTTPS.
Configuration of HTTP/HTTPS – HTTP/HTTPS configuration starts with selecting the protocol. If the API is available over HTTP, select the HTTP protocol with the default port 80, or change the port based on the exposed API's documentation. If the API is available over a secured connection, select the HTTPS protocol with the default port 443. Fill in the Host field with the exposed API's endpoint, without any protocol prefix. Leave the other fields at their default values.
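A minimal sketch of a requester configuration for a secured API (api.example.com is a placeholder host):

<http:request-config name="HTTP_Request_configuration">
  <http:request-connection protocol="HTTPS" host="api.example.com" port="443"/>
</http:request-config>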
API authentication is available with five different selections:
None – No authentication; the API is available to everyone.
Expression – Custom or expression-based authentication.
Basic authentication – Standard username/password credentials sent with each request.
Digest authentication – A scheme a web server can use to negotiate credentials, such as username and password, with a user's web browser.
Ntlm authentication – NT (New Technology) LAN Manager (NTLM), a suite of Microsoft security protocols intended to provide authentication, integrity, and confidentiality to users.
If you are calling a POST/PATCH/PUT API method to send data to the exposed API, set a few important parameters based on the streaming mode. If the API is exposed in streaming mode, you need to specify the content size of the stream; otherwise set the streaming mode value to "NEVER", in which case you do not need to set the content size.
API GET Call – An API GET call implements the GET method of an API. A GET call needs parameters, and based on these parameters the application gets a set of data. MuleSoft provides four ways to pass these parameters or values:
Body
Headers
Query Parameters
URI Parameters
Flow for GET Method
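Below is a sketch of a GET request that passes a URI parameter, a query parameter, and a header (the /orders/{orderId} path and values are illustrative):

<http:request method="GET" config-ref="HTTP_Request_configuration" path="/orders/{orderId}">
  <http:uri-params>#[{ orderId: vars.orderId }]</http:uri-params>
  <http:query-params>#[{ status: 'OPEN' }]</http:query-params>
  <http:headers>#[{ 'Accept': 'application/json' }]</http:headers>
</http:request>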
API POST/PUT/PATCH Call –
POST – Create data
PUT/PATCH – Update data
Similar to the GET method call, for the POST/PUT/PATCH methods the application sends API parameters based on the API's requirements. Since the application is creating or updating data through a POST/PUT/PATCH API call, it sends that data in the body parameter along with a content type.
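A sketch of a POST request that builds a JSON body with DataWeave (the /orders path and fields are illustrative):

<http:request method="POST" config-ref="HTTP_Request_configuration" path="/orders">
  <http:body>#[output application/json --- { id: payload.id, quantity: payload.quantity }]</http:body>
  <http:headers>#[{ 'Content-Type': 'application/json' }]</http:headers>
</http:request>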
Mule 4 introduced APIkit for SOAP web services. It is very similar to APIkit for REST, but APIkit for SOAP accepts a WSDL file instead of a RAML file. APIkit for SOAP generates flows from a remote WSDL file or from a WSDL file downloaded to your system.
To create a SOAP APIkit project, first create a Mule project with these steps in Anypoint Studio:
Under File Menu -> select New -> Mule Project
In the picture above, a WSDL file is selected from a local folder to create the Mule project.
Once you click Finish, Studio generates the default APIkit flows based on the WSDL file.
In this MuleSoft SOAP APIkit example project, the application consumes a SOAP web service, exposes the WSDL, and enables the SOAP web service.
In the APIkit SOAP Router, the APIkit SOAP configuration defines the WSDL location, the service, and the port from the WSDL file.
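A sketch of that configuration (the service and port names are illustrative and come from the WSDL):

<apikit-soap:config name="soapkit-config" wsdlLocation="transaction.wsdl"
    service="TransactionService" port="TransactionPort"/>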
In the configuration above, the "soapkit-config" SOAP Router looks up the requested method. Based on the requested method, it reroutes the request from the api-main flow to the method flow. In this example, the requested method is "ExecuteTransaction" from the existing WSDL, so the method flow name is
<flow name="ExecuteTransaction:\soapkit-config">
In this example we consume the same WSDL, but the endpoint is different. To call the same WSDL we have to format our request based on the WSDL file. In DataWeave, we create the request based on the WSDL and send it through the HTTP connector.
Here is a DataWeave transformation that generates a request for the existing WSDL file.
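A sketch of such a transformation; the tns namespace and the Amount field are illustrative and must match your WSDL:

%dw 2.0
output application/xml
ns soap http://schemas.xmlsoap.org/soap/envelope/
ns tns http://example.com/transaction
---
{
  soap#Envelope: {
    soap#Body: {
      // Operation and fields must match the WSDL definition
      tns#ExecuteTransaction: {
        tns#Amount: payload.amount
      }
    }
  }
}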
The much-awaited Mule 4 was officially announced at MuleSoft CONNECT 2018 in San Jose. When MuleSoft was born, its goal was to create software that helps systems and sources of information interact quickly, within or outside a company, so speed has been an incredibly important theme over the years of developing and connecting systems. The need for speed in application development hasn't changed drastically over the years, but the needs and requirements of customers' applications have. The integration landscape has also magnified: there are hundreds of new systems and sources of information to connect to, with more and more integration requirements. This integration landscape gets very messy very quickly.
Mule 4 provides a simplified language and a simplified runtime engine, and it ultimately reduces management complexity. It helps customers and developers deliver applications faster. Mule 4 radically simplifies development: it provides new tools to simplify the development, deployment, and management of your integrations/APIs, and it provides a platform to reuse Mule components without affecting existing applications, enabling faster development. Mule 4 is an evolution of Mule 3; you will not feel lost in Mule 4 if you are coming from Mule 3, but Mule 4 implements fewer concepts and steps to simplify the whole development/integration process. In Mule 4, Java skills are now optional. In this release MuleSoft has also improved the tooling and made error reporting more robust and platform-independent.
Now let's go through these new Mule 4 features one by one.
1. Simplified event processing and messaging – A Mule event is immutable, so every change to an instance of a Mule event results in the creation of a new instance. It contains the core information processed by the runtime, and it travels through components inside your Mule app following the configured application logic. A Mule event is generated when a trigger (such as an HTTP request or a change to a database or file) reaches the event source of a flow. This trigger can be an external event raised by a resource outside the Mule app.
2. New event and message structure – Mule 4 includes a simplified Mule message model in which each Mule event has a message and variables associated with it. A Mule message is composed of a payload and its attributes (metadata, such as file size). Variables hold arbitrary user information such as operation results, auxiliary values, and so on.
Mule 4 does not have the inbound, outbound, and attachment properties of Mule 3. In Mule 4 all this information is kept in variables and attributes. Attributes in Mule 4 replace inbound properties and can be easily accessed through expressions.
These are the advantages of using attributes in Mule 4:
They are strongly typed, so you can easily see what data is available.
They can easily be stored in variables that you can access throughout your flow.
Example:
#[attributes.uriParams.jobnumber]
Outbound properties – Mule 4 has no concept of outbound properties as in Mule 3. Instead, you set the response status code or header information in Mule 4 through DataWeave expressions, without introducing any side effects in the main flow.
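For example, a sketch of an HTTP listener that sets the response status code and a header directly (the HTTP_Listener_config name and the vars.httpStatus flow variable are assumptions):

<http:listener config-ref="HTTP_Listener_config" path="/orders">
  <http:response statusCode="#[vars.httpStatus default 200]">
    <http:headers>#[{ 'X-Correlation-Id': correlationId }]</http:headers>
  </http:response>
</http:listener>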
Session properties – In Mule 4, session properties no longer exist. Data stored in variables is passed along to different flows.
3. Seamless data access & streaming – Mule 4 has fewer concepts and steps, and Java knowledge is now optional for every step and task. Mule 4 leverages DataWeave not only as a transformation language but as its expression language as well. For example, in Mule 3, XML/CSV data had to be converted into Java objects to parse or reroute it. Mule 4 gives you the ability to parse or reroute data through a DataWeave expression without converting it to Java. These steps simplify your implementation without using Java.
4. DataWeave 2.0 – Mule 4 introduces DataWeave as the default expression language, replacing Mule Expression Language (MEL), with a scripting and transformation engine. Combined with the built-in streaming capabilities, this change simplifies many common tasks. Mule 4 simplifies data iteration: DataWeave knows how to iterate over a JSON array, you don't even need to specify that it is JSON, and there is no need for <json:json-to-object-transformer /> to convert the data into Java objects.
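As a minimal sketch, assuming the payload is a JSON array of objects with a name field, DataWeave iterates over it directly:

%dw 2.0
output application/json
---
// map iterates the array without any prior Java conversion
payload map (item, index) -> {
  id: index,
  name: upper(item.name)
}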
Here are a few points about DataWeave 2.0:
Simpler syntax to learn
Human-readable descriptions of all data types
Applies complex routing/filter rules
Easy access to payload data without the need for transformation
Performs any kind of data transformation: normalization, grouping, joins, pivoting, and filtering
5. Repeatable streaming – Mule 4 introduces repeatable streams as its default framework for handling streams. To understand the changes introduced in Mule 4, it is necessary to understand how Mule 3 data streams are consumed.
In the three different Mule 3 flows above, once the stream data is consumed by one node, the stream is empty for the second node. So in the first example, in order to log the stream payload, the logger has to consume the entire stream of data from the HTTP connector. This means the full content is loaded into memory, so if the content is too big, there is a good chance the application will run out of memory.
Mule 4 repeatable streams enable you to:
Read a stream more than once
Have concurrent access to the stream
Access the stream at random positions
Handle streams of bytes or streams of objects
As a component consumes the stream, Mule saves its content into a temporary buffer. The runtime then feeds the component from the temporary buffer, ensuring that each component receives the full stream, regardless of how much of the stream was already consumed by any prior component.
Here are a few points about how repeatable streams work in Mule 4:
The payload is read into memory as it is consumed.
If the payload stream buffer size exceeds 512 KB (the default), the stream is persisted to disk.
The payload stream buffer size can be increased or decreased by configuration to optimize performance.
Any stream can be read at any random position, by any random thread, concurrently.
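As a sketch, the buffering strategy can be tuned on a streaming operation; here a File connector read keeps up to 1 MB in memory before spilling to disk (the path and size are illustrative):

<file:read path="bigFile.json">
  <repeatable-file-store-stream inMemorySize="1" bufferUnit="MB"/>
</file:read>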
6. Error handling – In Mule 4 error handling has changed significantly. You can now discover errors at design time with a visual interface. You no longer need to deal with Java exceptions directly, and it is easy to discover errors while you are building a flow: every flow lists all the possible errors that can potentially arise during execution.
Errors that occur in Mule fall into two categories:
Messaging errors
System errors
Messaging errors – Mule throws a messaging error (a Mule error) whenever a problem occurs within a flow. To handle Mule errors, you can set up On Error components inside the scope-like Error Handler component. By default, any unhandled errors are logged and propagated.
System errors – Mule throws a system error when an exception occurs at the system level. If no Mule event is involved, the errors are handled by a system error handler.
Try scope – Mule 4 introduces a new Try scope that you can use within a flow to handle errors of just its inner components/connectors. The Try scope also supports transactions, and in this way it replaces the old Mule 3 Transactional scope.
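A sketch of a Try scope handling just one connector's failure (the HTTP:TIMEOUT handler and the /orders path are illustrative):

<try>
  <http:request method="GET" config-ref="HTTP_Request_configuration" path="/orders"/>
  <error-handler>
    <on-error-continue type="HTTP:TIMEOUT">
      <!-- Recover locally: log and continue the flow with an empty result -->
      <logger level="WARN" message="Order service timed out"/>
      <set-payload value="#[[]]"/>
    </on-error-continue>
  </error-handler>
</try>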
7. Class loader isolation – The class loader completely separates the application from the Mule runtime and the connector runtime, so library changes (jar versions) do not affect your application. This also gives your application the flexibility to run any Spring version without worrying about MuleSoft's Spring version. Connectors are distributed outside the runtime as well, making it possible to get connector enhancements and fixes without having to upgrade the runtime, or vice versa.
8. Runtime engine – The Mule 4 engine is a new reactive and non-blocking engine. In Mule 4, non-blocking flows are always on, so there is no processing strategy to configure on a flow. One of the best features of the Mule 4 engine is that it is a self-tuning runtime engine. What does this mean? The Mule 4 engine processes your applications on three different thread pools, and the runtime knows which operations should be executed by each pool: each operation is placed into the corresponding thread pool based on whether it is CPU-intensive, CPU-light, or an I/O operation. The three pools are then resized dynamically and automatically to execute applications through self-tuning.
So self-tuning creates custom thread pools for specific kinds of tasks, and the Mule 4 engine makes it possible to achieve optimal performance without manual tuning steps.
Conclusion
Overall, Mule 4 tries to make application development easy, fast, and robust. There are more features in Mule 4, which I will try to cover in my next blog, along with more in-depth information on the topics above. Please stay tuned for my next blog.
IoT (Internet of Things) is transforming whole businesses and bringing a new revolution to all kinds of business. IoT devices generate terabytes of data, and to handle this unprecedented volume, variety, and velocity of data, IoT needs a new kind of infrastructure to support the whole IoT ecosystem. Fog computing is the part of the IoT ecosystem that supports large volumes of data with quick responses. As I explained in my previous blog, fog computing now plays a major role in IoT devices. Fog is an intermediate platform through which cloud computing and edge computing (IoT) collaborate to transfer data. A fog node can hold a small amount of data and has less computing power; large data sets are stored in the cloud, and heavy computing is done in the cloud.
APIs (Application Programming Interfaces) play a major role in transferring data from the edge device (IoT) to the fog node, and from the fog node to the cloud (Internet). APIs enable the collaboration between edge devices and fog nodes and between fog nodes and the cloud, and they play a major role in sustaining the volume, variety, and velocity of data in an IoT infrastructure.
APIs work on the HTTP/HTTPS protocol. APIs are lightweight and simple, and enabling an API takes a very small amount of resources, so an API can be enabled on a small system and consumed without losing too many resources. This property helps transfer data from the edge device (IoT) to the fog node, and from the fog node to the cloud. The API's role is not merely mechanical; the API is responsible for optimizing the data transfer. Properly enabling APIs between these nodes increases the efficiency and computational power of all the IoT devices. The fog node is the intermediate node between the IoT device and the cloud, so the fog node is responsible for receiving data from the edge (IoT) device and transferring that data to the cloud. Communication between the edge (IoT) device and the fog node is very frequent, and the data provided by the API drives all the intermediate, quick computation on the fog node.
The cloud is still the big stakeholder, holding all the data and performing the large computations for IoT devices. APIs deliver data from the fog node to the cloud at certain intervals for heavy computation. As edge (IoT) systems get more complex, the fog's computation responsibility will increase, and APIs will come into the picture to provide more data to the fog, and from the fog node to the cloud.
Here are a few benefits of enabling APIs for IoT devices and fog nodes:
APIs provide the flexibility to connect any IoT device to a fog node and any fog node to the cloud network.
APIs provide seamless connectivity between these systems.
APIs bring the whole IoT system into one seamless environment, so it is very easy to debug these systems.
APIs are very easy to develop and deploy, so it is easy to maintain these systems.
Provisioning of IoT devices has also become very easy by enabling APIs.
According to a Gartner study, IoT security is one of the big concerns; APIs provide one seamless system and network to mitigate this risk.
IoT, connected cars, and automated cars are getting a lot of traction in the current world, and all the big companies want to be part of this movement. All kinds of sensors are installed in these vehicles; they generate terabytes of data, and computing on that data keeps the vehicle running smoothly. Connected cars and IoT-based devices depend completely on computing power and quick responses.
Sending this data to the cloud and computing it there could be catastrophic: any network latency or processing delay might end in a bad result. For example, suppose your automated car is traversing a busy street and a person suddenly steps in front of it. In this scenario, any network latency or slowness of computation and analysis affects the decision and the subsequent action (applying the car's brakes).
With IoT-based devices, any computing near the IoT device reduces this risk. So how can we make this happen if all your computing power and data are in the cloud?
Fog computing – In the context of IoT, pushing intelligence down to the local area network (LAN) and computing the data in an IoT gateway or fog node reduces the network latency risk. Fogging, or a fog network, is decentralized computing that stores data in the most logical and efficient place between the IoT device and the cloud.
In fog computing, data transported from IoT to the cloud passes through several steps:
Signals from the IoT device travel over wires to the I/O points of a programmable automation controller (PAC), which executes a control system program to automate the system.
The control system program sends the data to a protocol gateway, which converts it into a protocol that internet systems understand, such as MQTT or HTTP.
At the end, the data is sent to a fog node or IoT gateway on the LAN, which collects it and performs analysis and computation on it. The fog node may also store the data for further transfer to the cloud network for later processing and intelligence.
Edge computing – Edge computing refers to any computing infrastructure near the source of the data (i.e., the IoT device), making the IoT device smart and intelligent enough to make decisions near the data gateway. The role of edge computing is to process data, store it on the local device, and transfer it to the fog or cloud network. All of the above processes are automated through a PAC (programmable automation controller) executing an on-board control system program. In edge computing, intelligence is literally pushed to the edge of the network, where our IoT devices and the outside network first connect to each other.
Twilio is a cloud-based communication company that enables users to use standard web languages to build voice, VoIP, and SMS apps via a web API. Twilio provides a simple hosted API and markup language for businesses to quickly build scalable, reliable, and advanced voice and SMS communications applications. Twilio's telephony infrastructure enables web programmers to integrate real-time phone calls, SMS, or VoIP into their applications.
MuleSoft provides a cloud connector to integrate the Twilio API within MuleSoft. The cloud connector offers a simple and easy way to integrate with these Twilio APIs and then use them as services within MuleSoft. The MuleSoft Twilio connector gives developers a platform to develop and integrate their applications easily and quickly with Twilio.
Before starting the integration of MuleSoft with Twilio, create your Twilio account and get your "ACCOUNT SID" and "AUTH TOKEN".
Now download and install the Twilio connector into Anypoint Studio:
Anypoint Studio -> Help -> Install New Software
Configure pom.xml to pull the Twilio jar dependency into your Maven-based project. Add the plugin to the plugin section and the dependency to the pom.xml file. These sections are also added to pom.xml automatically when you drag the Twilio connector onto the Anypoint Studio canvas and use it in a flow.
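A sketch of the resulting connector configuration, with the credentials externalized as properties (the exact attribute names follow the connector's documented pattern and should be checked against your connector version):

<twilio:config name="Twilio" accountSid="${TwilioSID}" authToken="${TwilioAuthToken}" doc:name="Twilio"/>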
In the configuration above, TwilioSID and TwilioAuthToken come from your Twilio account.
The MuleSoft Twilio connector provides a number of methods to integrate with your application. The image below shows some of the methods exposed by the MuleSoft Twilio connector.
I am using the "send SMS message" method from the MuleSoft Twilio connector in my example. Now you can integrate Twilio to send SMS from your application. Here is example code.
<logger message="#[payload.recipientPhoneNumber]" level="INFO" doc:name="Logger"/>
<twilio:send-sms-message config-ref="Twilio" accountSid="${TwilioSID}" body="Hello World Sending SMS from Twilio" from="+15555555555" to="#[payload.recipientPhoneNumber]" doc:name="Twilio"/>
The Twilio API does not support bulk SMS to multiple recipients. So, to send messages to a list of recipients, you must make a request for each number to which you would like to send a message. The best way to do this is to build an array of the recipients and iterate through each phone number, as in the sketch below.
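A sketch of that loop, assuming the payload carries a recipients array of phone numbers; inside the For Each scope the payload is the current number:

<foreach collection="#[payload.recipients]">
  <twilio:send-sms-message config-ref="Twilio" accountSid="${TwilioSID}"
      body="Hello World Sending SMS from Twilio" from="+15555555555"
      to="#[payload]" doc:name="Twilio"/>
</foreach>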
Rajnish Kumar is the CTO of Vanrish Technology, with over 25 years of experience across different industries and technologies. He is very passionate about innovation and the latest technologies, such as APIs, IoT (Internet of Things), the Artificial Intelligence (AI) ecosystem, and cybersecurity. He presents his ideas on different platforms and helps customers in their digital transformation journey.