A Journey from Embedded Gen AI Apps to Autonomous Agents
The evolution of AI applications, particularly in generative AI, is shaping an intriguing path for Salesforce. This progression can be divided into three key stages:
Stage 1: Embedded Generative AI Apps
Incorporating generative AI models into current applications marks the beginning of an exciting phase. These models, tailored for specific tasks, elevate the fundamental capabilities of the applications. Initially, the focus was on integrating the first wave of GPT apps, which primarily consisted of embedded Generative AI functionalities like Email Generation, Service Replies, and Work Summaries.
Key characteristics:
Task-specific Apps
Tight integration with existing applications
Limited autonomy
User interface-driven interactions
Stage 2: Conversational Apps with Einstein Copilot
As AI models advance, the focus shifts towards developing conversational applications leveraging Copilot technology. These intelligent assistants are adept at interpreting and addressing user inquiries in natural language, and they are capable of executing various functions, from offering information to handling intricate tasks.
Stage 3: Agentforce Platform & Autonomous Agents
In the third stage, the Salesforce platform transitioned from a single agent to multiple agents through the Agentforce Platform. The Agentforce Platform now includes the Agent RAG feature as a standard offering for all agents, along with enhancements like topic filtering, agent headless APIs, and other improvements that streamline operations and boost efficiency.
A framework outlines how individual clouds can build their own autonomous agents using the Agentforce Platform, featuring a flexible UI and comprehensive testing capabilities.
As part of this stage, these AI agents can also operate independently without a user interface, proactively identifying and executing tasks based on predefined goals or real-time data. They seamlessly integrate with diverse systems and applications to streamline operations and attain specific outcomes.
Last month, I attended Dreamforce 2024, the world’s largest software conference, in San Francisco. This massive annual event is always a great learning experience. Dreamforce 2024’s key announcement was a new AI era with Agentforce.
Agentforce is essentially synonymous with the AI agent. Building on my previous blog about AI agents, here I will explain Agentforce in the context of Salesforce/MuleSoft.
One study found that 90% of businesses say their industry has become more competitive in the last three years, and 48% say it has become much more competitive. This pressure has squeezed margins, forced companies to chase more productivity, and required businesses in every industry to transform to remain relevant.
So the question is: how do we close this gap and stay relevant to the market, in any industry?
We started the AI journey with predictive analytics as the first wave of AI. Next, we moved into the generative AI wave. Now we are at the next inflection point: Agentforce, or the AI agent. AI agents are poised to finally close this gap, and the way we will do it is by getting time back, gaining productivity, and driving business growth with AI agents.
So here are a few questions I will try to answer:
What is Agentforce?
Agentforce is the newest Salesforce tool that allows customers to build and customize autonomous agents to scale their workforce. It is a UX that customers can use with their data sources to deliver more human-like interactions.
How does Agentforce help customers achieve business goals?
Agentforce gives companies a 24/7 agent to engage on their behalf to resolve sales, service, and marketing-related topics, including customer service cases and prospect engagement.
With Agentforce, companies can drive productivity to deliver higher profitability, while building stronger customer relationships.
How does MuleSoft enhance Agentforce?
Salesforce primarily focuses on the front-end “human assistant” type of agents with the Agentforce UX, while MuleSoft primarily focuses on back-end domain expert agents that manage domain complexity (inventory, payroll, etc.) and power other prompts or agents.
MuleSoft expands the actionability of the Agentforce agent by providing API actions and other domain assets for broader context to the role, knowledge, actions, guardrails, and channel.
How are customers accessing data for Agentforce?
The Agentforce messaging encourages customers to use Data Cloud to bring in their data and ground Agentforce. To add MuleSoft to this conversation, leverage the MuleSoft + Data Cloud value proposition, where MuleSoft accelerates value across four use cases (on-premises, transactional, unstructured, activation):
● On-premises data: MuleSoft can run locally and stream data to Data Cloud, giving Agentforce additional context for improved grounding and better decision making.
● Transactional data: Transactional systems will want queuing, error handling, and delivery controls for ingestion, functionality MuleSoft can easily deliver so that Agentforce agents aren’t slowed down.
● Unstructured data: MuleSoft offers pre-built accelerators for unstructured data ingestion from Google Drive, Confluence, and SharePoint, as well as OCR for images, so Agentforce agents can have immediate access to data from scanned images like government identification.
● Activation: Use MuleSoft to respond to data events in Data Cloud and drive action in real time to any downstream system for full circle updates.
What are the agent use cases that MuleSoft supports?
● Service Agents: Agentforce needs contextual data from external systems in order to deflect cases faster
● Sales Agents: MuleSoft can upload and share leads from and with partners securely, without compromising data integrity and in line with your governance rules. Near real-time synchronization with external systems ensures that Agentforce can engage with prospects from the moment leads come in.
● Commerce Agents: Setting up and managing storefronts requires data from external systems including product information, inventory levels, and pending vendor shipments. MuleSoft connects to external systems for near real-time updates so Agentforce can respond with accurate information.
● Employee Service Agents (Workday): Automating onboarding and provisioning for new hires requires data from external systems, and in some cases that data is unstructured, found in PDF, JPG, and PNG files like scanned government IDs and manually filled-out forms. MuleSoft’s Intelligent Document Processing makes it easier to upload unstructured data so that you can share it faster with Agentforce.
How is Agentforce different from the MuleSoft AI Chain (MAC) Project?
The MAC Project mainly targets technical users, i.e. MuleSoft users and developers. With the MAC Project, customers can create powerful agents fully composed in the MuleSoft Anypoint Platform and benefit from its end-to-end lifecycle governance and management capabilities. With API Management, you can layer LLM-specific policies on top to further address security when interacting with LLMs. The MAC Project is an open-source project that is currently being productized. Agentforce is more for non-technical users who want to build powerful agents directly in Salesforce. It is fully integrated into every Salesforce Cloud and provides out-of-the-box integration with the Salesforce ecosystem.
Global uncertainties continue to dominate headlines. Inflation is expected to reach highs of ~3.5% in the US and Europe by the end of 2023. To ease inflation, central banks need to dampen demand by making it more expensive for financial institutions, businesses, and households to borrow, which means raising the Federal Reserve’s interest rates. We are expecting the federal funds rate to reach 4.75%–5.0% by the end of 2023. All of this data suggests we are heading toward a recession. The US labor market was robust last quarter, but this quarter it is not looking promising, and every day we hear layoff news from different sectors.
This inflation and layoff news is impacting the tech market. Many companies have a growth challenge: they expect to get as much as 50 percent of their revenue from new businesses and products by 2026 but are not on a path that will take them there. Current economic conditions are forcing high-growth yet unprofitable tech startups to tighten their financial belts.
There are a few realities software companies are facing in their pursuit of growth.
Funding of US-based, venture-backed software startups slowed down – VCs are wary of high valuations and are demanding that companies spend less, improve profit margins, and deliver higher output. Unicorn creation also slowed in Q4 2022, one of the lowest quarterly counts since the first quarter of 2020.
Depressed company valuations – Private company valuations are cooling down. Over the last four quarters, we have seen public valuations compressing.
Software companies have three critical revenue streams.
License / Subscription Revenue – The customer pays for the right to own and use a copy of the software/hardware product, or to subscribe to and access a software platform.
Support Revenue – The customer pays for ongoing or premium support of the software or hardware product.
Services Revenue – The customer pays the software provider for specific deliverables such as software implementation or technical training.
In the current environment, all three of these revenue streams are shrinking. Companies are using only essential services to run their business. This directly impacts software revenue, which pushes these companies toward lower valuations.
Infrastructure Maintenance – SaaS companies provide software as a service, which means the customer does not have to purchase hardware to run the software; that cost is transferred to the SaaS provider. This implies a continuous cost to keep the software running, and that cost is not going anywhere. Due to inflation, this SaaS operating cost has increased tremendously.
APIs are a key component of digital transformation. An API is the interface to your legacy and SaaS data. The goal of APIs is to facilitate the transfer and enablement of data between your systems and external users. APIs are typically available over public networks like the internet to communicate with external users, exposing your data to the public domain.
Because your data is exposed to the public domain through APIs, it can lead to a data breach. APIs can be broken and expose sensitive personal as well as company data. An insecure API is an easy target for hackers to gain access to your systems and network. With the rise of IoT devices and their heavy use of APIs, APIs are now even more vulnerable.
According to OWASP, these are the 10 main API vulnerabilities:
Broken Object Level Authorization – APIs expose endpoints that handle object identifiers, creating a wide attack surface for object-level access control issues.
Broken User Authentication – Authentication mechanisms are implemented incorrectly.
Excessive Data Exposure – Developers expose all object properties without considering their individual sensitivity.
Lack of Resources & Rate Limiting – APIs do not impose any restrictions on the size or number of resources that can be requested by the client/user, which can lead to Denial of Service (DoS) attacks on the API.
Broken Function Level Authorization – Complex access control policies with different hierarchies lead to authorization flaws.
Mass Assignment – Binding client-provided data to data models without proper property filtering based on an allowlist usually leads to mass assignment.
Security Misconfiguration – Misconfiguration or a lack of security configuration commonly results in insecure APIs.
SQL Injection – SQL injection occurs when untrusted data is sent to an interpreter as part of a command or query.
Improper Assets Management – APIs tend to expose more endpoints than traditional web applications, which leads to improperly exposed and undocumented APIs.
Insufficient Logging & Monitoring – Insufficient logging and monitoring make it hard to detect vulnerabilities and broken integrations.
How to mitigate API security risk?
Use Secure Sockets Layer (SSL), Transport Layer Security (TLS), and Hypertext Transfer Protocol Secure (HTTPS), which provide security by encrypting data during transfer.
Apply Basic Auth to your API at a minimum; if you want to secure your API further, enable token-based authentication and authorization through the OAuth framework.
Apply authorization on each API resource for more control over API security, through an external identity and access management (IAM) provider.
Use encryption and signatures for all sensitive personal and organizational data exposed by your APIs.
Apply API throttling through your API manager to control the number of accesses per user per API (rate limiting).
Implement exception-handling best practices on your APIs to hide internal server and database information and mitigate SQL injection risk (see the sketch after this list).
Use a service mesh to manage the different layers of API management and control.
Audit your APIs and remove all unused APIs from your API catalog.
Add proper logging, monitoring, and alerting on your APIs to keep track of API activity.
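To make the injection point concrete, here is a minimal Java sketch (the table and column names are hypothetical) contrasting string concatenation of untrusted input with a parameterized PreparedStatement, the standard mitigation that complements the exception-handling practice above.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CustomerDao {

    // UNSAFE: untrusted input is concatenated into the SQL string,
    // so input like "' OR '1'='1" changes the meaning of the query.
    public ResultSet findByEmailUnsafe(Connection conn, String email) throws SQLException {
        String sql = "SELECT id, name FROM customers WHERE email = '" + email + "'";
        return conn.createStatement().executeQuery(sql);
    }

    // SAFE: the value is bound as a parameter and never interpreted as SQL,
    // which mitigates SQL injection in the API's data access layer.
    public ResultSet findByEmail(Connection conn, String email) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT id, name FROM customers WHERE email = ?");
        ps.setString(1, email);
        return ps.executeQuery();
    }
}
```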
Conclusion: APIs are a critical part of modern AI, mobile, SaaS, IoT, and web applications. API security should be a main focus of strategies and solutions to mitigate their unique vulnerabilities and security risks.
Keep the application synchronous if possible. Synchronous flows avoid serialization/deserialization of messages sent through VM queues, do not cause context switches, and do not cause contention when messages move across thread pools.
Store as little as possible in variables. Vars are serialized and deserialized every time a message crosses an endpoint, even a VM endpoint. The performance overhead is directly proportional to the size of the variables and the number of endpoints.
Use Java payloads with DataWeave whenever possible. The usage of a canonical data model is recommended for projects that deal with data (mapping, transformation, etc.). It is also recommended to represent the model as Java objects whenever possible, as this provides the fastest format for accessing fields, changing information, and converting to other formats.
Prefer DataWeave. For better performance, use DataWeave for simple data extraction from messages, and Java components together with DataWeave for everything else.
Use flow references instead of VM endpoints. To communicate between flows internally within an application, use flow references instead of VM endpoints. The VM connector, even though it is an in-memory protocol, emulates transport semantics that serialize and deserialize parts of your messages, most notably the vars. This makes it slower than a flow reference, which just injects messages into the referenced flow with no intermediate steps. Please note that in some cases the usage of VM endpoints is preferred (see the chapter on reliability patterns). For example, a Mule cluster can load balance applications that use VM endpoints by deferring execution to another, available node in the cluster.
Cache aggressively. Take advantage of Mule’s caching scope when making requests to external resources like Web services or databases. Also consider caching reusable assets such as security tokens or ephemeral API keys and cookies. Mule’s Notification subsystem can additionally be used to “warm up” a cache when Mule starts. For example, consider doing this for situations where an initial cache miss is not acceptable.
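Outside of Mule’s caching scope, the same idea of caching a reusable asset such as a security token can be sketched in plain Java; the token fetch below is a hypothetical placeholder for the expensive call the cache is meant to avoid.

```java
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class TokenCache {

    private record CachedToken(String value, Instant expiresAt) {}

    private final ConcurrentMap<String, CachedToken> cache = new ConcurrentHashMap<>();

    // Returns a cached token for the given API while it is still valid,
    // otherwise fetches a fresh one and stores it for later calls.
    public String getToken(String apiName) {
        CachedToken token = cache.compute(apiName, (key, existing) -> {
            if (existing != null && existing.expiresAt().isAfter(Instant.now())) {
                return existing;             // cache hit: reuse the token
            }
            return fetchFreshToken(key);     // cache miss or expired: refresh
        });
        return token.value();
    }

    // Hypothetical call to an identity provider; in a real application this
    // would be an HTTP request, which is exactly the cost the cache avoids.
    private CachedToken fetchFreshToken(String apiName) {
        return new CachedToken("token-for-" + apiName, Instant.now().plusSeconds(300));
    }
}
```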
Configure message processors and endpoints at the global level. Some connectors allow you to configure some parameters at both the global and the endpoint/message processor level. We recommend placing the configuration at a global level to avoid repeated initialization of resources.
Avoid creating a large volume of business events. Business events incur performance overhead in Mule and in the platform when the platform’s internal event buffer overflows. Thus, avoid using either default flow-level business events or a large volume of custom business events in a high-message-volume project.
Consider using message compression. For communicating between Mule applications over the network consider using Mule’s compression processors to compress/decompress the message payloads before they hit the wire if their sizes are large.
Consider using VM queues instead of an external message broker. VM queues are fast and have some guaranteed delivery semantics in a cluster. Consider using these instead of going out to an external messaging broker for inter-application Mule communication.
Use the async scope when appropriate. If a flow is performing processing on a message that is neither modifying the message nor changing how it is routed, then it could be wrapped in an async block. This will cause the processing to occur in a different thread and will avoid adding unnecessary overhead to processing the message.
Use connection pooling for connectors because the performance cost of establishing a connection to another data source, such as a database, is relatively high.
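As a plain-Java illustration of the same principle, here is a hedged sketch using HikariCP as an assumed pooling library (the JDBC URL and credentials are placeholders): borrowing from the pool avoids paying the connection-setup cost on every request.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.SQLException;

public class PooledDatabaseClient {

    private final HikariDataSource dataSource;

    public PooledDatabaseClient() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://db.example.com:5432/orders"); // hypothetical
        config.setUsername("app_user");
        config.setPassword("change_me");
        config.setMaximumPoolSize(10); // cap concurrent connections
        this.dataSource = new HikariDataSource(config);
    }

    // Borrowing a connection from the pool is cheap; the expensive TCP and
    // authentication handshake only happens when the pool first creates it.
    public int countOrders() throws SQLException {
        try (Connection conn = dataSource.getConnection();
             var rs = conn.createStatement().executeQuery("SELECT count(*) FROM orders")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}
```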
Optimize the logging within your Mule flows. Too much logging will slow down processing, while too little logging makes it hard to debug.
Encryption and decryption of data are very costly. Apply encryption/decryption to your data only where your Mule application really needs it.
IoT (Internet of Things) is revolutionizing our lives. As per a Gartner report, by 2025 the IoT market will expand into a 58-billion-dollar opportunity. It is affecting all parts of our lives. In the pandemic era we found even more uses for IoT devices to maintain social distancing.
IoT is also one of the main disruptive technologies in business, affecting every domain including healthcare, retail, automotive, and security.
There is a wide range of IoT benefits for business:
Enhanced productivity
Better customer experience
Cost-effectiveness
A CRM system keeps all your customer relationship information, such as data, notes, and metrics, in one place. CRM helps small businesses take the burden off the IT management team by automating business processes. It also helps employees keep their focus on critical business areas.
APIs help integrate these two unrelated systems. APIs enable them to optimize and streamline the whole business process. The API is the main communication channel for building robust processes and keeping these systems updated in real time. APIs also make it possible to build context-based applications with IoT and CRM that interact with the physical world.
Here are a few areas where IoT, with the help of APIs, helps CRM systems optimize business processes.
Optimize customer service – Before your customer finds any error in your service or product, you proactively act on and fix those errors. This helps build the customer relationship.
Increase sales – With the help of IoT and CRM data, you can find untouched opportunities and use them to increase sales.
Personalize customer experience – By analyzing data provided by IoT and CRM systems, you can build user-based predictive models that enable a personalized experience.
Customer retention – CRM provides customer data and relationships; IoT data provides customer behavior. Together they help any business personalize and target marketing for their customers.
Omnichannel in-store experience – IoT and CRM help businesses enable a 360-degree omnichannel customer experience, suggesting products the customer might purchase.
API integration of IoT and CRM helps businesses achieve a higher degree of personalization, targeted marketing, optimized pricing models, higher revenue, and enhanced customer satisfaction.
Anypoint Platform acts as a client provider by default, but you can also configure external client providers to authorize client applications. As an API owner, you can apply an OAuth 2.0 policy to authorize client applications that try to access your API. You need an OAuth 2.0 provider to use an OAuth 2.0 policy. You can configure more than one client provider and associate the client providers with different environments. If you configure multiple client providers after you have already created environments, you can associate the new client providers with the environment.
MuleSoft supports client management by identity providers that implement the OpenID Connect Dynamic Client Registration open standard. MuleSoft explicitly verifies support in Anypoint Platform for Salesforce, Okta, and OpenAM v14 Dynamic Client Registration. The following table contains examples of the URLs you need to supply, depending on your provider, during registration.
URL Name | Okta Example URL | OpenAM Example URL | Salesforce Example URL
Base | https://example.okta.com/oauth2/v1 | https://example.com/openam/oauth2 | https://example.salesforce.com/services/oauth2
Client Registration | {BASE URL}/clients | {BASE URL}/connect/register | {BASE URL}/register
Authorize | {BASE URL}/authorize | {BASE URL}/authorize | {BASE URL}/authorize
Token | {BASE URL}/token | {BASE URL}/access_token | {BASE URL}/token
Token Introspection | {BASE URL}/introspect | {BASE URL}/introspect | {BASE URL}/introspect
Steps to Create External Client Provider
Log in to Anypoint Platform using an account that has the organization administrator role.
In Anypoint Platform, click Access Management.
In the menu on the left, click Client Providers.
Click Add Client Provider, and then select OpenID Connect Dynamic Client Registration. The Add OIDC client provider page appears.
After obtaining values from your identity provider’s configuration, complete the following required fields in each section:
Dynamic Client Registration
Issuer: URL that the OpenID provider asserts is its trusted issuer.
Client Registration URL: The URL used to dynamically register client applications with your identity provider (a sketch of such a registration call appears at the end of this section).
Authorization Header
For Okta, this value is SSWS ${api_token}, where api_token is an API token created through Okta.
For ForgeRock, this value is Bearer ${api_token}, where api_token is an API token created through ForgeRock.
For Salesforce, this value is Bearer ${api_token}, where api_token is an API token created through Salesforce.
In Advanced Settings you can also select:
Disable server certificate validation: Disables server certificate validation if your OpenID client management instance presents a self-signed certificate, or one signed by an internal certificate authority.
Enable client deletion in Anypoint Platform: Enables deletion of clients created with this integration.
Enable client deletion and updates in IdP: To use this option, you must also select the Enable client deletion in Anypoint Platform option.
Token Introspection Client
Client ID: The client ID for an existing client in your IdP capable of introspection of all tokens from all clients.
For Okta, this value should be a “Confidential” client.
For ForgeRock, this value should be a “Confidential” client.
For Salesforce, this value should be a “Confidential” client.
Client Secret: The client secret that corresponds to the client ID.
OpenID Connect Authorization URLs
Authorize URL: The URL where the user authenticates and grants OpenID Connect client applications access to the user’s identity.
Token URL: The URL that provides the user’s identity, encoded in a secure JSON Web Token.
Token Introspection URL: The endpoint that returns metadata about the access token, including its expiration and active state.
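To make the dynamic client registration step concrete, here is a hedged Java sketch of the kind of HTTP call that registration involves; the base URL, token, and request body are placeholders that follow the example URLs in the table above, not values from a real provider.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DynamicClientRegistration {

    public static void main(String[] args) throws Exception {
        String baseUrl = "https://example.okta.com/oauth2/v1";   // hypothetical Okta base URL
        String apiToken = System.getenv("IDP_API_TOKEN");        // API token created in the IdP

        // Minimal OIDC Dynamic Client Registration payload.
        String body = """
                {
                  "client_name": "anypoint-demo-client",
                  "redirect_uris": ["https://example.com/callback"],
                  "grant_types": ["authorization_code"]
                }
                """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/clients"))           // Okta registration path from the table
                .header("Authorization", "SSWS " + apiToken)     // Okta uses SSWS; others use Bearer
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
        System.out.println(response.body()); // contains client_id / client_secret on success
    }
}
```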
Mule 4 introduces DataWeave 2.0 as the default expression language, replacing Mule Expression Language (MEL). DataWeave 2.0 is tightly integrated with the Mule 4 runtime engine, which runs the scripts and expressions in your Mule application.
Since DataWeave 2.0 is the default expression language for Mule 4, DataWeave can be used almost anywhere within your Mule application. So in some use cases DataWeave needs to call a Java method or instantiate a Java class to execute complex Java business logic.
In my previous blog I explained the usage of Java within a MuleSoft flow. In this blog I explain the usage of Java within DataWeave 2.0.
There are two ways we can use Java within DataWeave code:
Calling java method
Instantiate Java class
1. Calling a Java method – There is a restriction in DataWeave when calling Java: you can only call static methods via DataWeave (methods that belong to a Java class, not methods that belong to a specific instance of a class). Before calling a method of a Java class, you must import the class.
2. Instantiating a Java class – DataWeave allows you to instantiate a new object of any Java class, but you cannot call its instance methods through DataWeave; you can only reference its public fields as variables.
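As a sketch (the class, package, and method names here are hypothetical), a Java helper intended to be used from DataWeave would expose its logic as static methods, since DataWeave can only invoke static methods; the expected DataWeave expressions are shown in comments.

```java
package com.example.util;

// A helper designed for DataWeave interop: the logic lives in static methods.
//
// In a DataWeave script you would first import the class:
//   import java!com::example::util::PricingUtils
// and then call the static method directly:
//   PricingUtils::applyDiscount(200.0, 0.15)
//
// DataWeave can also instantiate the class (PricingUtils::new()), but it
// cannot call instance methods such as describe() below; only static methods
// and public fields are reachable from DataWeave.
public class PricingUtils {

    // Static method: callable from DataWeave.
    public static double applyDiscount(double price, double discountRate) {
        return price - (price * discountRate);
    }

    // Instance method: callable from Java, but NOT from DataWeave.
    public String describe() {
        return "Pricing helper";
    }
}
```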
MuleSoft is a lightweight integration and API platform that allows you to connect anything anywhere and enable your data through APIs. Mule evolved from Java and the Spring framework. MuleSoft supports multiple languages, although all Mule modules are developed in Java.
Since Mule evolved from Java, it has the capability to use Java classes and methods directly in a Mule flow. This capability gives Mule developers the flexibility to use Java for complex business logic.
There are several ways you can use Java within Mule. The Java module provides four operations that are available in a MuleSoft flow:
New
Invoke
Invoke static
Validate type
To explain all these components and their use in a Mule flow, I created Utils.java and AppUtils.java classes (a sketch of these classes appears at the end of this section).
1. New – AppUtils.java class instantiation can be achieved by calling a constructor of the class through the MuleSoft New component within a Mule flow. The AppUtils Java class defines 2 constructors, so the constructor property of the New component shows 2 options. In this example, an instance of the AppUtils class is created and placed into the “appInst” target variable so the same instance can be reused in the Mule flow.
2. Invoke – In the New Java operation we instantiated the AppUtils.java class and placed it into the “appInst” variable. To use this variable, configure the Invoke operation and call one of the methods defined in the AppUtils.java class. In AppUtils.java there is a non-static method “generateRandomNumber” defined with a String parameter; in this example we call that method through the Invoke operation.
3. Invoke static – The Invoke static Java operation enables a Mule flow to call a Java static method. This is one of the easiest ways to call a Java method in a Mule flow.
4. Validate type – The Validate type Java operation uses Java’s instanceof check. This operation accepts an “Accept subtypes” parameter, which indicates whether the operation should accept all subclasses of a class. By default acceptSubtypes="true", which means it will accept all subclasses of the main class; if it is set to false (acceptSubtypes="false"), then during execution the operation throws an error (JAVA:WRONG_INSTANCE_CLASS) when the value is only a subclass rather than the exact class.
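The original class listings are not reproduced here, so below is a minimal sketch of what AppUtils.java and Utils.java might look like, consistent with what is described above: two constructors for the New operation, a non-static generateRandomNumber(String) method for Invoke, and a static helper for Invoke static. The method bodies are illustrative assumptions.

```java
// AppUtils.java - instantiated via the Java module's New operation.
public class AppUtils {

    private final String prefix;

    // Two constructors, so the New operation's constructor dropdown shows 2 options.
    public AppUtils() {
        this("APP");
    }

    public AppUtils(String prefix) {
        this.prefix = prefix;
    }

    // Non-static method with a String parameter, called via the Invoke
    // operation on the "appInst" instance stored by the New operation.
    public String generateRandomNumber(String seedLabel) {
        return prefix + "-" + seedLabel + "-" + (int) (Math.random() * 100000);
    }
}

// Utils.java (shown in the same snippet for brevity) - holds static helpers,
// callable via the Invoke static operation without creating any instance.
class Utils {

    public static String toUpper(String value) {
        return value == null ? null : value.toUpperCase();
    }
}
```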
MuleSoft CONNECT 2019 wrapped up last month in North America. CONNECT is one of the premier conferences for API-led connectivity and digital transformation, bringing content for developers, architects, and business executives across different business domains. At MuleSoft CONNECT, a plethora of market experts and business executives, including industry CEOs and CTOs, discussed their MuleSoft experience and the democratization of innovation.
During these conferences, I got an opportunity to talk to some business executives about their MuleSoft experiences and challenges.
One of the biggest challenges is optimizing MuleSoft vCore usage in CloudHub to keep projects within budget.
Here are a few steps to keep vCore usage low and your MuleSoft project within budget.
1. API Optimization – As per MuleSoft best practices, MuleSoft suggests API-led connectivity to expose data to applications within or outside of your organization through reusable and purposeful APIs.
The APIs used in an API-led approach to connectivity fall into three categories:
Experience APIs
Process APIs
System APIs
When you are working with API-led connectivity, do you really need all three layers of APIs every time?
No, it is not necessary to implement all three layers of APIs every time.
Here are some API-layer use cases that save vCore usage and optimize API-led connectivity.
Experience APIs – An Experience API is similar to a Process API, but unlike Process APIs, Experience APIs are more specifically tied to a unique business context, and project data formats, interaction timings, or protocols into a specific channel and context. These APIs simplify your front-end data for different GUIs. For example, a desktop website and a mobile website display data based on different user experiences, so they need different APIs to serve that data; but if your application needs only the data, irrespective of user experience, you can skip the Experience API layer and work directly with Process APIs or System APIs. This will save some vCores and keep the project within budget.
Process APIs – If you are working with complex business logic spanning different organizational departments, you can incorporate that business-specific data in the process layer and expose it through Process APIs. But if the APIs do not incorporate any complex business logic and most of the data is handled by System APIs, then you can skip the Process API layer and expose your data through System APIs. In this way you save some vCores and keep your project within budget.
2. Salesforce Platform Events Integration – Salesforce integration with MuleSoft is one of the most common integration use cases. In the old days, Salesforce synced its data through polling: polls ran a couple of times a day and synced data between different Salesforce orgs. Because this was a polling process, it was not easy to predict the volume of data flowing through the MuleSoft application during a certain period of time, so in this case we go with higher MuleSoft vCores to avoid running out of memory.
Salesforce introduced “Platform Events,” its enterprise messaging platform, in June 2017. After the introduction of Platform Events, integration of MuleSoft and Salesforce has become very easy. The Platform Events messaging service is event-based: any create or update of an object within Salesforce generates an event and sends a payload to the Salesforce messaging queue. The MuleSoft Salesforce connector reads these payloads from the Salesforce messaging queue in FIFO order to sync data. Since this integration is event-based, MuleSoft processes each Platform Event message as soon as it is received, so there is never a large set of data to process at once. With this integration we can go with lower vCores and execute the project within budget.
3. Batch Process Optimization – MuleSoft allows you to process messages in batches. The Mule batch process provides a construct for asynchronously processing larger-than-memory data sets that can be split into individual records, extracting, transforming, and loading (ETL) information into a target system like Hadoop.
MuleSoft needs a large amount of memory/vCores to run large data sets in a batch process. These batch processes run at most once or twice a day, holding a large number of vCores idle for the rest of the day without any active usage. You can optimize vCore usage and reduce your batch processing cost by following these two steps.
Reuse vCores by deploying batch applications in turn – As you know, batches run at certain times of day, once or twice. Suppose one batch application runs every midnight and another runs every morning, and each takes 1 vCore; deployed side by side, the two applications consume a total of 2 vCores.
If you configure a CI/CD process such as Jenkins or CodeBuild to deploy your batch applications to the cloud, it is easy to reuse your vCores. Configure the CI/CD process to build and deploy an application to the cloud when you want to run its batch; once the batch is done, undeploy that application and deploy the next batch application on the same capacity. In this way you keep reusing your vCore allocation and keep your project within budget.
Deploy batch applications on an on-premises Mule server – Batch processes are simple and easy to maintain in most use cases. It is often just as easy to maintain an on-premises Mule server and deploy your batch applications there, without much worry about vCore usage.
Rajnish Kumar is CTO of Vanrish Technology, with over 25 years of experience across different industries and technologies. He is very passionate about innovation and the latest technologies, such as APIs, IoT (Internet of Things), the Artificial Intelligence (AI) ecosystem, and cybersecurity. He presents his ideas on different platforms and helps customers with their digital transformation journeys.