In this 3-part series, the first part covered 7 points an architect should consider when designing solutions on Salesforce. In this part I am covering another 4 points that are essential to be aware of.

Salesforce triggers, visual flows, workflows and process builders:

There are several ways in Salesforce to trigger event-based actions, and those actions can in turn trigger further actions. If you have a medium-to-large Salesforce implementation, you likely have more than one team working on more than one project. When there is no established architectural guideline, every project makes its own choices about implementing triggers, workflow rules and so on. After a few years you will find it difficult to add new business functionality: design and testing take longer because you need to consider how logic already built into the system fires various event-based actions.

For example, suppose a new business rule is implemented in an after trigger: when a candidate's expected salary is greater than the maximum salary for the job position, the candidate status should change to 'On Hold'. There is also an existing workflow rule that changes the status to 'Shortlisted' if the candidate doesn't require travel. Now, when a new candidate is added asking for more than the maximum salary, the trigger sets the status to 'On Hold', the workflow rules then execute, and the candidate ends up 'Shortlisted' because they don't require travel. This happened purely because of the way the logic was implemented: the status-handling logic was scattered across different places.
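A minimal sketch of how the two pieces of logic collide. Candidate__c, Expected_Salary__c, Max_Salary__c, Travel_Required__c and Status__c are hypothetical names used only for illustration:

```
// Sketch only: the trigger issues an update to set the status, but a
// workflow field update fires on that same update and wins.
trigger CandidateStatus on Candidate__c (after insert) {
    List<Candidate__c> toHold = new List<Candidate__c>();
    for (Candidate__c c : Trigger.new) {
        if (c.Expected_Salary__c > c.Max_Salary__c) {
            toHold.add(new Candidate__c(Id = c.Id, Status__c = 'On Hold'));
        }
    }
    update toHold;  // sets 'On Hold'...
    // ...but the separate workflow rule ("when Travel_Required__c = false,
    // set Status__c to 'Shortlisted'") also evaluates on this update,
    // runs after the trigger, and overwrites the status.
}
```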

As an architect it's overwhelming to dive into each and every small piece, but it's important that you visualize how the platform should grow over the next few years and establish the right practices for event-based actions through review sessions with your teams.

Your review sessions should focus on how the various features have been utilized and whether each is the right fit. Do they follow sound naming conventions, so that without looking into the detail you can tell what a specific rule, process or trigger does? You don't have much choice if you are designing solutions where things had already piled up before you joined, but you can begin establishing those practices from that point onward. Event-based actions are a blessing, but they can create nightmares if not implemented correctly and make your business processes tricky to understand.

I recommend using Process Builder as the first option before you consider a trigger or workflow rules. Since Process Builder supersedes workflow rules, it's the go-to option for all new functionality. Check out my post on where you can handle logic in Process Builder instead of triggers or workflow rules.

While developing a solution, it's important to understand the order of execution: how triggers, validation rules, workflow rules and so on run within a single DML operation. Adding multiple before and after triggers to an object can also cause unexpected results, because the execution order of multiple triggers of the same type is not guaranteed. The best practice is to have at most two triggers on an object: one before trigger and one after trigger.
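One common way to enforce that practice is to keep the two triggers thin and route all logic through a handler class, so the order of the individual rules is explicit in code rather than left to trigger-load order. A sketch, with CandidateTriggerHandler as a hypothetical handler class:

```
// One before trigger and one after trigger per object, each delegating
// to a single handler class that runs the rules in a defined order.
trigger CandidateBefore on Candidate__c (before insert, before update) {
    CandidateTriggerHandler.handleBefore(Trigger.new, Trigger.oldMap);
}

trigger CandidateAfter on Candidate__c (after insert, after update) {
    CandidateTriggerHandler.handleAfter(Trigger.new, Trigger.oldMap);
}
```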

Visual flows are more suitable when you need to build a complex, wizard-like experience that directs users through various screens. Choose this option when your requirement can't be met by workflow rules, triggers and the like. I consider it a last option because of the limits of the out-of-the-box flow screens, which can display fields only in a single column and require a Visualforce page to be built for a better look and feel.

Considering the complexity and maintenance issues that event-based actions can bring, it's essential to put appropriate measures in place.

Custom code vs out-of-the-box features:

There are times when you will come across a situation where you must decide whether to write code or use out-of-the-box features. With custom code you can design pages with your own look and feel, write complex business logic, perform event-based actions and much more. There are situations where there is no way to build the solution via configuration alone and you have to code. Even then, it needs to be a thoughtful decision: how much should be done in code, and can part of it be achieved via configuration?

For example, say you are using a standard page layout for viewing each job candidate's details. A new requirement asks the candidate page to show the candidate's interview progression as a row of stage images alongside the rest of the candidate details: stages the candidate has cleared display in green, failed stages in red. Here you will definitely have to write a Visualforce page, since the images need to be displayed on the layout, but the only function that page should have is displaying the candidate's progression. You can create the Visualforce page and add it to the object's standard page layout, which still shows the rest of the candidate details. There are a few limitations to embedding Visualforce pages in page layouts, but the purpose of the example is to show that you should evaluate all available options before developing a completely custom solution.
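A sketch of what that embedded page could look like; Candidate__c, the stage field and the image resources are hypothetical names. The key detail is that a page using the object's standardController can be added to that object's page layout:

```
<!-- Hypothetical Visualforce page; embeddable in the Candidate__c layout -->
<apex:page standardController="Candidate__c" showHeader="false">
    <!-- One image per interview stage: green if cleared, red if failed -->
    <apex:image url="{!IF(Candidate__c.Phone_Screen_Passed__c,
                          $Resource.StageGreen,
                          $Resource.StageRed)}"/>
</apex:page>
```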

Not everything can be achieved via configuration, and there are also times when you may have to explain to the business the downsides of developing a custom solution, including cost and maintenance. Combining configuration and custom code is a great way to achieve a better result.

Salesforce synchronous and asynchronous processing:

There are two main ways to execute an asynchronous request in Salesforce: using batch Apex, or using the @future method annotation. Synchronous processing (workflow rules, triggers, assignment rules, web service calls) executes immediately on invocation. When deciding between synchronous and asynchronous processing, the important factor to consider is governor limits: the limits for asynchronous processing are higher than for synchronous.

Batch Apex can process up to 50 million records, which is quite significant. It's debatable whether, if you have an ETL tool available, you should use it or write batch Apex. If something like a mass data update or delete with simple transformation rules can be done in the ETL tool, that would be my first choice. Keep in mind that an ETL tool using the Bulk or SOAP API counts against your API limits, so it still requires careful analysis of your daily API usage.
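For reference, here is a minimal batch Apex skeleton; the object, field and status value are hypothetical:

```
// Minimal sketch of batch Apex; each execute() chunk (200 records by
// default) runs in its own transaction with fresh governor limits.
global class CandidateCleanupBatch implements Database.Batchable<sObject> {

    global Database.QueryLocator start(Database.BatchableContext bc) {
        // A QueryLocator lets the job iterate up to 50 million records.
        return Database.getQueryLocator(
            'SELECT Id FROM Candidate__c WHERE Status__c = \'Stale\'');
    }

    global void execute(Database.BatchableContext bc, List<sObject> scope) {
        delete scope;
    }

    global void finish(Database.BatchableContext bc) {
        // e.g. notify an admin or chain the next job
    }
}
// Start it with: Database.executeBatch(new CandidateCleanupBatch());
```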

While designing a synchronous solution on Salesforce, it's important to understand how your organization's data volume may grow in the coming years and whether your solution can scale to that demand. For example, say you have a daily feed from an external system for adding or updating accounts, and each request submits about 200 records via the SOAP API. When accounts are added or modified, related objects such as contacts and opportunities are updated based on business rules. The solution works in the first year after launch because not every account has many related records. A year later, as volume grows, you start to receive DML exceptions because the transaction exceeds the limit of 10,000 records updated or inserted in a single transaction. The only fix is to change the external system to feed fewer than 200 accounts per request; as volume keeps growing you have to reduce the feed further, and business revenue suffers because accounts are not syncing on time. A situation like this can be handled asynchronously with batch or future methods, so the solution can grow as business needs grow.
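One way to restructure such a feed, sketched with hypothetical names: let the inbound SOAP call upsert only the accounts, then defer the related-object updates to a future method that runs in its own transaction with its own limits:

```
public class AccountFeedProcessor {
    // Sketch only: heavy related-record updates are deferred so the
    // inbound transaction stays well under the 10,000-row DML limit.
    @future
    public static void syncRelatedRecords(Set<Id> accountIds) {
        List<Contact> contacts = [SELECT Id, AccountId FROM Contact
                                  WHERE AccountId IN :accountIds];
        // ...apply the business rules to contacts, opportunities etc....
        update contacts;
    }
}
```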

As this example shows, it's essential for an architect to engage early, visualize how the platform can grow, and decide whether to perform an operation synchronously or asynchronously.

Salesforce Integrations:

You can have incoming and outgoing connections between Salesforce and any system that supports a SOAP or REST API. Integrations can be built using any of the following:

  1. The Enterprise WSDL, which exposes all objects in your org and supports select and DML statements
  2. The Bulk API
  3. Workflow rule outbound messages
  4. The Streaming API
  5. Your own SOAP or REST service built with Apex annotations, which can be invoked by an external system
  6. Callouts to an external SOAP or REST service
  7. The Tooling and Metadata APIs

Most of the time the Salesforce Enterprise WSDL meets the requirement if your needs are limited to select operations and DML, but if you need to perform complex logic that returns an object with a bunch of values to the caller, you'll need to write a custom service to handle that logic.
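A minimal sketch of such a custom service using Apex REST annotations; the URL mapping, object and fields are hypothetical:

```
// Exposed at /services/apexrest/candidates/<recordId>
@RestResource(urlMapping='/candidates/*')
global with sharing class CandidateRestService {

    @HttpGet
    global static Candidate__c getCandidate() {
        // Pull the record Id from the request URI and return the record
        // (a real service would add validation and error handling).
        Id candidateId = RestContext.request.requestURI.substringAfterLast('/');
        return [SELECT Id, Name, Status__c FROM Candidate__c
                WHERE Id = :candidateId LIMIT 1];
    }
}
```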

Certain situations require an external system to poll Salesforce objects frequently to retrieve updates, and that frequent polling consumes your API limits. Salesforce provides built-in features, workflow outbound messages and the Streaming API, that can notify an external system when specific criteria are met.

The Streaming API can publish a message to multiple subscribers when a specific event occurs, but there is no retry when subscribers are unavailable. It's best suited when your notification messages are generic and don't require retries or durability while a client is offline. For example, recruiters who subscribe to a Streaming API topic get notified whenever a new position is created with status 'Approved'; all subscribers connected at that moment receive the notification.
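Setting that up amounts to creating a PushTopic; a sketch, assuming a hypothetical Position__c object:

```
// Subscribers on /topic/ApprovedPositions are notified when a position
// record matching the query is created or updated.
PushTopic topic = new PushTopic();
topic.Name = 'ApprovedPositions';
topic.Query = 'SELECT Id, Name, Status__c FROM Position__c ' +
              'WHERE Status__c = \'Approved\'';
topic.ApiVersion = 37.0;
topic.NotifyForOperationCreate = true;
topic.NotifyForOperationUpdate = true;
insert topic;
```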

Outbound messages provide retry capability and are triggered by workflow rules. If the external endpoint is unavailable, Salesforce retries again and again for up to 24 hours before the message is dropped from the queue. Workflow rules execute only when a record is created or updated, so outbound messages are suitable only when you don't need to trigger a call on record deletion. This option is also unsuitable when you need to send data from multiple objects to the listener.

When you need to send data from multiple objects when an event occurs, triggers are the best option. If the data volume is not high, you can query data from the different objects and call the external service, but you can't make a synchronous call to an external service from a trigger. The only option left is to implement a future method and let it make the call to the external service. Future methods have their own limitations, which need to be considered as well.
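A sketch of that pattern, with a hypothetical endpoint and object; the trigger would collect record Ids and hand them to the future method:

```
public class CandidateNotifier {
    // Triggers can't make synchronous callouts, so the call is deferred.
    // Future methods accept only primitives and collections of primitives.
    @future(callout=true)
    public static void notifyExternalSystem(Set<Id> candidateIds) {
        List<Candidate__c> candidates =
            [SELECT Id, Name, Status__c FROM Candidate__c
             WHERE Id IN :candidateIds];

        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example.com/api/candidates'); // hypothetical
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(candidates));

        HttpResponse res = new Http().send(req);
        // Check res.getStatusCode() and log or queue a retry on failure.
    }
}
```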

The Bulk API is typically used by ETL tools and the Salesforce-provided Data Loader. There is usually no need to write your own programs against the Bulk API unless you are building your own tool. Any mass DML or extract activity can be performed with the Bulk API.

For an architect, it's essential to be aware of the integration options and the shortcomings of each, and to choose the right option based on the need.

On certain points it may feel like an architect doesn't need to deep-dive into some topics, and that doing so risks losing focus on other important design drivers. But designing for the cloud has evolved quite differently from designing traditional systems, and it's essential to consider the various options by being first a developer and then an architect.

That's it for this post. Let me know your comments and thoughts.