Thursday, May 25, 2017

Cross transactional cache invalidation across WCS and Search applications

Introduction
Cache invalidation is one of the key and most tedious parts of software development. It is really important that we purge invalid data out of the cache at the right time so that customers always see updated content. From FEP7 onwards, IBM has segregated the WebSphere Commerce application into two: WCS and Search. We use the DynaCacheInvalidation command to invalidate the cache.

Caching
It is now possible to cache data in the Search dynacache as well as the WCS dynacache, as the two applications run on different servers. We can make changes to the cachespec.xml in these applications and define the caching strategy. As both caches are used, we should make sure that whenever data is updated the cache entries are cleared in WCS and Search in the right order.

Cache invalidation
In WCS, OOB uses DynaCacheInvalidationCmd for clearing cache entries. As this is a scheduler command, it runs inside the WCS JVM and clears cache entries from the WCS dynacache. For better performance we can cache data in Search as well.

For Search, OOB uses a filter. Every search request goes through this filter, and it invalidates the relevant cache entries based on the dependency id and insert time in the CACHEIVL table. The name of the filter is RestInvalidationFilter. The filter looks at the value of "CrossTransactionCache/invalidationJobInterval" in wc-component.xml under the com.ibm.commerce.foundation folder. If the value is 30, the filter triggers cache invalidation every 30 seconds.

Why are cross-transactional invalidations required between WCS and Search?
For example, JSPs can be cached in the WCS dynacache and search responses can be cached in the Search dynacache. In this scenario we need both caches to be cleared in the right order. Let us take a product page (say product.jsp) for example. To show the product page we make a /detailsByProductId search call to get the product data. The page is cached in commerce and the search response is cached in the Search JVM. If any of the product data changes, we need to invalidate both caches. The search cache has to be invalidated before the WCS cache: if the commerce cache entry is cleared while the search cache is not yet cleared, the next call to search for the details will return old data.

How to trigger invalidation between apps
There are many approaches to achieve this. One of them is to customise DynaCacheInvalidationCmd and make a Search REST call from it.

1. Create a new REST service in the Search server and implement it the same way the filter currently works. The parts where it looks up the component configuration can be overridden. The REST service can call the OOB LocalJVMDynaCacheInvalidationHelper (as the OOB REST filter does). It will automatically remember the last invalidation time and will check for CACHEIVL records created after that.
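
A rough sketch of such a resource in the Search application is below. The class name and path are illustrative, and the call into LocalJVMDynaCacheInvalidationHelper is left as a comment because its exact API should be taken from the OOB RestInvalidationFilter; how the resource is registered depends on the REST framework used by your Search version.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("invalidate")
public class SearchCacheInvalidationResource {

    @GET
    public Response invalidate() {
        // Delegate to the same helper the OOB RestInvalidationFilter uses
        // (LocalJVMDynaCacheInvalidationHelper). It remembers the last invalidation
        // time and processes CACHEIVL rows inserted after it. Copy the exact method
        // calls from the OOB filter here.
        return Response.ok().build();
    }
}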

2. Change the DynaCacheInvalidationCmd implementation. Extend the OOB command and make sure we call the above REST service before we trigger the WCS invalidations. That makes sure the search cache is invalidated prior to the WCS cache.
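
A minimal sketch of the extended command is below, assuming the OOB implementation class is DynaCacheInvalidationCmdImpl and that callSearchInvalidationRest() is a hypothetical helper that invokes the Search REST service from step 1.

public class ExtDynaCacheInvalidationCmdImpl extends DynaCacheInvalidationCmdImpl {

    @Override
    public void performExecute() throws ECException {
        // Clear the Search dynacache first by calling the REST service from step 1.
        callSearchInvalidationRest();

        // Then run the OOB logic, which clears the WCS dynacache entries based on CACHEIVL.
        super.performExecute();
    }

    private void callSearchInvalidationRest() {
        // Hypothetical helper: issue an HTTP call to the Search server's invalidation
        // resource. HTTP client, URL and error handling are left to the project.
    }
}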

3. Extend wc-component.xml and override the CrossTransactionCache/invalidationJobInterval value to -1 so that the filter does not invalidate on its own. Alternatively, the filter can be removed from web.xml as well.
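
The override might look roughly like the snippet below inside the extended wc-component.xml (com.ibm.commerce.foundation-ext); the element names follow the usual component configuration extension pattern and should be checked against the OOB file being extended.

<_config:extendedconfiguration>
    <_config:configgrouping name="CrossTransactionCache">
        <!-- -1 disables the filter-driven invalidation so the custom command drives it instead -->
        <_config:property name="invalidationJobInterval" value="-1"/>
    </_config:configgrouping>
</_config:extendedconfiguration>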

Clustered environment: In clustered environments there were issues with the cache getting replicated across cells. One way to fix this is to schedule two different instances of the DynaCacheInvalidation job, one on a server in each cell, using the JVM property com.ibm.commerce.scheduler.SchedulerHostName.

Sunday, March 19, 2017

Creating custom rest service using databeans

Introduction
Custom rest services will be required in multiple scenarios. WCS provides three ways to create new REST services.

  • Using BOD mapping framework
  • Using controller command framework
  • Using databean mapping framework

Let us create a new rest service using a databean. The url of the service is https://localhost/wcs/resources/store/{storeId}/myPath/customDetails

Steps
1. Create a custom handler to service your request.
A new java class must be created (say MyCustomHandler) and it should extend the class com.ibm.commerce.rest.classic.core.AbstractConfigBasedClassicHandler.

In the handler set the path and create a method to service the request

@Path("store/{storeId}/myPath")
@Encoded
@Description("This class provides RESTful services to get some custom details")
public class MyCustomHandler extends AbstractConfigBasedClassicHandler {

    @POST
    @Path("customDetails")
    @Description("Gets custom details")
    public Response findCustomDetails(
            @PathParam("storeId")
            @ParameterDescription(description = "storeId in parameter", valueProviderClass = StoreIdProvider.class, required = true) String storeId,
            @QueryParam("responseFormat") @ParameterDescription(description = "responseFormat description", valueProviderClass = ResponseType.class) String responseFormat) {

        // params is a map containing the parameters to be set on the bean
        Map<String, String> params = new HashMap<String, String>();
        params.put("storeId", storeId);

        // call executeConfigBasedBeanWithContext to execute the databean and get the response,
        // using the profile name Custom_Profile_1
        Response result =
            executeConfigBasedBeanWithContext("com.mycompany.beans.MyCustomDataBean", "Custom_Profile_1", responseFormat, params);

        return result;
    }
}

2. Add the handler to resource file
Navigate to Rest\WebContent\WEB-INF\config\resources-ext.properties and add the fully qualified name of the handler to the file. If the file is not there, create it.
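
For example, assuming the handler above lives in a hypothetical com.mycompany.rest package, the entry would simply be its fully qualified class name; follow the convention used in the OOB resources.properties in the same folder.

com.mycompany.rest.MyCustomHandler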

3. Create the databean mapping xml
Navigate to Rest\WebContent\WEB-INF\config\beanMapping-ext.
Create an XML file named with the fully qualified name of the databean: com.mycompany.beans.MyCustomDataBean.xml.
Configure the input-output profiles in this XML. We can have multiple profiles in the same XML. A simple example is given below. It has the input/output parameter names and the corresponding databean methods used for setting/getting them. The profile name used in the handler determines which profile is applied.

<?xml version="1.0" encoding="UTF-8"?>
<bean>
    <profiles>
        <profile name="Custom_Profile_1">
            <inputs>
                <input inputName="storeId" methodName="setStoreId" />
            </inputs>
            <outputs>
                <output methodName="getDetails" outputName="Details" />
            </outputs>
        </profile>
        <!-- Add more profiles if needed -->
    </profiles>
</bean>

4. Create the databean
Create the databean using the same steps that we use for creating any databean. The changes specific to a REST service bean are listed below, followed by a minimal sketch.

  • It must implement com.ibm.commerce.security.Delegator
  • It must override the getDelegate() method. We can just return null in that method.
  • It must have all the methods specified in the mapping xml
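
A sketch of such a bean, matching the mapping XML above, is given below. SmartDataBeanImpl is shown as the base class for illustration; use whatever databean pattern your project already follows, and note that the populate() body here is just a placeholder.

public class MyCustomDataBean extends SmartDataBeanImpl implements Delegator {

    private String storeId;
    private String details;

    // invoked via the mapping xml input "storeId"
    public void setStoreId(String storeId) {
        this.storeId = storeId;
    }

    // invoked via the mapping xml output "Details"
    public String getDetails() {
        return details;
    }

    // required by com.ibm.commerce.security.Delegator; null is acceptable here
    public Protectable getDelegate() throws Exception {
        return null;
    }

    public void populate() throws Exception {
        // placeholder: fetch or compute the details for the given storeId here
        details = "custom details for store " + storeId;
    }
}
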
A restart and republish are required for the changes to take effect.



Friday, October 28, 2016

How to query WCS DB from Search application

Introduction
There are several scenarios where we have to interact with the WCS DB from the Search application. We can use JDBCQueryService for this. The class supports complex SQL and can handle select statements, aggregate functions, updates, deletes, etc.

Example scenario: In a specific search profile, boost a set of products which have the column CATENTRY.FIELD1 set to 1.

Steps

  • Write a custom expression provider CustomProductBoostExpressionProvider.java which extends AbstractSolrSearchExpressionProvider and overrides the invoke() method.
  • Add the query to a custom .tpl file in the corresponding component configuration, for example Search/xml/config/com.ibm.commerce.catalog-ext/wc-query-utilities.tpl


BEGIN_SQL_STATEMENT
name=getCustomBoostedProducts
base_table=CATENTRY
sql=
    SELECT catentry_id AS CATENTRY_ID_BOOSTED
    FROM CATENTRY
    WHERE CATENTRY.FIELD1 = ?param1?
    with ur
END_SQL_STATEMENT


  • In the expression provider use the below code to run the query

JDBCQueryService service = new JDBCQueryService("com.ibm.commerce.catalog");
ArrayList<String> paramList = new ArrayList<String>(1);
paramList.add("1");
Map<String, List<String>> queryParameters = new HashMap<String, List<String>>(1);
queryParameters.put("param1", paramList);
List<HashMap> results = service.executeQuery("getCustomBoostedProducts", queryParameters);

Iterate over the list to get the catentry ids.

Iterator<HashMap> recordIterator = results.iterator();
List<String> ids = new ArrayList<String>();
while (recordIterator.hasNext()) {
    HashMap<String, Object> record = (HashMap) recordIterator.next();
    String catentryId = record.get("CATENTRY_ID_BOOSTED").toString();
    ids.add(catentryId);
}


  • Now form the Solr boost query and add it to the control parameter

for (String catentryId : ids) {
    StringBuilder s = new StringBuilder();
    s.append("childCatentry_id");
    s.append(":\"");
    s.append(catentryId);
    s.append("\"^");
    s.append(25);

    // adds the boost query
    addControlParameterValue(SearchServiceConstants.CTRL_PARAM_SEARCH_INTERNAL_BOOST_QUERY, s.toString());
}
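
Putting the pieces together, the provider might be structured roughly as below. This is only a sketch: the invoke() signature and the SelectionCriteria parameter are assumptions taken from common WCS samples and should be verified against AbstractSolrSearchExpressionProvider in your toolkit; the JDBCQueryService usage and the boost control parameter are the snippets shown above.

public class CustomProductBoostExpressionProvider extends AbstractSolrSearchExpressionProvider {

    // NOTE: signature assumed; check the class you are extending.
    public void invoke(SelectionCriteria selectionCriteria) throws RuntimeException {
        super.invoke(selectionCriteria);
        try {
            // 1. Run the custom .tpl query to find the catentries to boost
            JDBCQueryService service = new JDBCQueryService("com.ibm.commerce.catalog");
            ArrayList<String> paramList = new ArrayList<String>(1);
            paramList.add("1");
            Map<String, List<String>> queryParameters = new HashMap<String, List<String>>(1);
            queryParameters.put("param1", paramList);
            List<HashMap> results = service.executeQuery("getCustomBoostedProducts", queryParameters);

            // 2. Build one boost expression per boosted catentry and register it
            for (HashMap record : results) {
                String catentryId = record.get("CATENTRY_ID_BOOSTED").toString();
                String boost = "childCatentry_id:\"" + catentryId + "\"^25";
                addControlParameterValue(SearchServiceConstants.CTRL_PARAM_SEARCH_INTERNAL_BOOST_QUERY, boost);
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}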

The above shows an interaction of the Search app with the WCS DB. We can use this approach with any component configuration, like marketing, foundation, promotion, etc.


Monday, October 3, 2016

Heap dump analysis using memory analyzer for WCS

Introduction
Memory is one of the important areas in any application. Java has its own memory management. It is really important to maintain optimum memory consumption and run time in Java; failing to do so will cause performance and other issues. Java handles its memory in two areas: heap and stack.

Heap memory
All the objects created in a Java application are stored in heap memory. We create an object using the new operator. The garbage collector can logically separate the heap into different areas so that GC is faster.

Stack memory
Stack is where the method invocations and the local variables are stored. If a method is called then its stack frame is put onto the top of the call stack. The stack frame holds the state of the method including which line of code is executing and the values of all local variables. The method at the top of the stack is always the current running method for that stack.

The maximum heap size and permgen size can be set during startup of a Java application using the JVM parameters -Xmx and -XX:MaxPermSize.
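
For example (illustrative values only, to be tuned for your environment; -XX:MaxPermSize applies to pre-Java 8 JVMs):

-Xmx2048m -XX:MaxPermSize=512m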

Memory leak
A memory leak is a type of resource leak that occurs when a program incorrectly manages memory allocations, i.e. it fails to release discarded memory, which causes impaired performance or failure.

OutOfMemory Errors
Java heap space error will be triggered when the application attempts to add more data into the heap space area, but there is not enough room for it.

Heap dump
A heap dump is a snapshot of the memory of a Java™ process. The snapshot contains information about the Java objects and classes in the heap at the moment the snapshot is triggered.

Eclipse Memory Analyzer(MAT)

It is one of the feature-rich heap analyzers that helps us analyse heap memory. If we want to look at a memory-related issue, we need to generate heap dumps while the issue is occurring and then analyse them. Most heap issues will be resolved with a restart as it will freshen up the heap, but that is just a temporary fix to keep the services up. For a permanent solution we need to perform a detailed analysis and fix the root cause.

1. Download the memory analyzer and extract it. The tool can be obtained from the below link:
Memory analyzer download

2. Generate the heap dump file from the server.
IBM JVMs generate the files in .phd format, which is not recognized by MAT by default. In order to analyse these files we need to add the IBM Diagnostic Tool Framework for Java (DTFJ) plugin to it.

3. Adding DTFJ plugin to MAT
  • Download the DTFJ assets from the below link DTFJ Source file
  • Add DTFJ to MAT. Open MAT and choose Help->Install New Software
  • In the dialog box click "Add" and provide the link. Follow the steps of the wizard. 
  • There are some cases where the link does not work. In that case, download the files manually (one by one from the above link) to a folder, select "Local" in the Add dialog box and locate the folder.
  • Restart MAT and you are good to go.
4. Choose File -> Open heap dump and locate your file. This will open the heap dump and it will automatically display the overview of the analysis.


Look at the overview: it will display the main problems, if any, and there are multiple options available to analyze further.

Below is an example of a heap dump where one object takes more than 75% of the memory. This shows that the object is not properly handled, and hence we need to look at the cause behind such immense object creation.

The below pic shows the problem: the class loader has loaded an immense LRUCache object. We can now look at the application code and check when these objects are created and why the size is this huge.

This is an example of how to nail down root causes of memory issues using MAT.

There are options to look at Top Components, Leak Suspects, etc. We can also check the path to GC roots, thread details, hash entries, duplicate classes and much more, which can be explored on our own.



Friday, September 23, 2016

Extracting response cookies of a rest call in WCS

Introduction
We all work with cookies, and they are one of the important aspects when it comes to session management.

There will be many situations where we need to read the cookies on the server side. Usually we follow the responseWrapper approach to read the response cookies, which is described in the link below.

ResponseWrapper approach to read cookies

There are times when the above approach doesn't work but we would still need to read them. Some changes to commerce OOB code will get us there.

1. Extend CommerceTokenRequestHandler.java and override the handleRequest method. Add the below code.

// inside the overridden handleRequest(...) method:
{
    HttpServletResponseWrapper resp = (HttpServletResponseWrapper) messageContext.getAttribute(HttpServletResponse.class);
    SRTServletResponse srtResponse = getSRTResponseFromResponseWrapper(resp);
    // the underlying SRTServletResponse exposes the cookies set on the response
    Cookie[] responseCookies = (srtResponse != null) ? srtResponse.getCookies() : null;
}

private SRTServletResponse getSRTResponseFromResponseWrapper(ServletResponseWrapper respWrapper) {
    ServletResponse response = respWrapper.getResponse();
    if (response == null) {
        return null;
    } else if (response instanceof SRTServletResponse) {
        return (SRTServletResponse) response;
    } else if (response instanceof ServletResponseWrapper) {
        // unwrap nested wrappers until we reach the SRTServletResponse
        return getSRTResponseFromResponseWrapper((ServletResponseWrapper) response);
    } else {
        return null;
    }
}
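
Once the cookies are available, reading them is plain Servlet API. For example, assuming responseCookies is the javax.servlet.http.Cookie array obtained above:

if (responseCookies != null) {
    for (Cookie cookie : responseCookies) {
        // read whatever is needed, e.g. the name and value of each response cookie
        String name = cookie.getName();
        String value = cookie.getValue();
    }
}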


2. Extend CommerceDeploymentConfiguration.java and override the initRequestUserHandlers() method.


final List<RequestHandler> handlerList = super.initRequestUserHandlers();
if (handlerList != null) {
    for (int i = 0; i < handlerList.size(); i++) {
        final RequestHandler handler = handlerList.get(i);
        if (handler instanceof CommerceTokenRequestHandler) {
            // swap in the extended handler created in step 1
            handlerList.set(i, new ExtendedCommerceTokenRequestHandler());
        }
    }
}
return handlerList;

The above customisations should suffice to read the response cookies for REST calls.

Saturday, May 21, 2016

Populating and retrieving data from Solr

Solr is one of the integral components of WCS, and it is a very frequent requirement to update and read the data in Solr. In this write-up I will explain it with some examples.

Populating data into Solr
There are multiple ways we can populate Solr indexes. The source can be a CSV file, a JSON feed, or a URL (like a web service) that returns the data in the format we need. Here we will use a CSV file as the input.
The headings of the input file must be the same as the Solr fields. Referring to my earlier post, we will use the same data, so the headings will be searchterm and recipes. The file must have data as specified in this post http://exploringwebspherecommerce.blogspot.com.au/2016/02/creating-and-populating-new-core-in.html

To populate the indexes, we can write a Java command and use the solrj libraries. A basic example is given below. It assumes the data to be populated is already present in the file c:/Ranjith/searchRecepies.csv.

SolrServer solrServer = new HttpSolrServer("url"); // the solr core url needs to be passed here
NamedList<String> params = new NamedList<String>();
// Sets the file from which data needs to be streamed
params.add("stream.file", "c:/Ranjith/searchRecepies.csv");
params.add("stream.contentType", "text/csv;charset=utf-8");
// commit is set to false so that we can handle transaction rollbacks ourselves;
// a separate commit needs to be issued at the end.
params.add("commit", "false");
SolrQuery updateQuery = new SolrQuery();
SolrParams solrParams = SolrParams.toSolrParams(params);
updateQuery.add(solrParams);
QueryRequest solrRequest = new QueryRequest(updateQuery);
solrRequest.setPath("/update");
QueryResponse solrResponse = solrRequest.process(solrServer);

We must have a separate method which fires a request with just the parameter "commit" set to true, and call it at the end so that the changes are committed. Similarly we can have a different method for rollback, so that we can roll back the changes in case of an exception.
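
The approach above is to fire another update request with commit=true (or rollback=true); equivalently, SolrJ exposes convenience methods, so a minimal sketch of such helpers, reusing the same solrServer instance, could be:

private void commitChanges(SolrServer solrServer) throws Exception {
    // makes the streamed documents visible to searches
    solrServer.commit();
}

private void rollbackChanges(SolrServer solrServer) throws Exception {
    // discards all uncommitted changes since the last commit
    solrServer.rollback();
}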

Retrieving data from Solr

To retrieve data from solr we can use the below sample code. 

SolrServer solrServer = new HttpSolrServer("url");
SolrQuery qry = new SolrQuery();
qry.setQuery("chicken"); // Set the search term here
// You can set all other parameters in the Solr query as needed, like sort, filter query, fields, facets etc.
QueryResponse response = solrServer.query(qry);
List<SolrDocument> data = response.getResults();

Now the data object will have the response from Solr. You can manipulate it as required and pass it back to the UI.

In case we want all the GET requests to go through a set of logic, it is possible to add a handler in solrconfig.xml and set the requestHandler in the query as well.
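
For example, a custom handler could be declared in solrconfig.xml roughly as below (the handler name and defaults are illustrative):

<requestHandler name="/recipeSearch" class="solr.SearchHandler">
    <lst name="defaults">
        <str name="df">searchterm</str>
    </lst>
</requestHandler>

and then selected from SolrJ with qry.setRequestHandler("/recipeSearch") (or by setting the "qt" parameter on older SolrJ versions).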

Saturday, February 20, 2016

Creating a new core in SOLR for Websphere Commerce

Introduction

A Solr core is basically an index of the text and fields found in documents. A single Solr instance can contain multiple "cores", which are separate from each other based on local criteria. Having multiple cores helps us segregate the data. Commerce provides a set of cores by default:

  • MC_10001_CatalogEntry_en_US
  • MC_10001_CatalogEntry_Unstructured_en_US
  • MC_10001_CatalogGroup_en_US
Say we want to index some different data. An example would be as below.

In order to support recipes we want to have a search term to recipes relationship. When a customer searches for anything, we show a list of recipe names associated with that search term. When they click on one of the names, we can make a call to an external system where the recipe information is stored, retrieve the specific recipe and show it to the user. So we need the data in SOLR in the below fashion:

SearchTerm    Recipes
milk          Mascarpone mango lassi, Ultimate breakfast smoothie
cheese        Canadian cheddar melt, Night mac and cheese
chicken       Chicken paprica, Spanish chicken
beef          Beef Stroganoff, Swiss Sizzler

We will keep the external integration aside for now. Our requirement is to store the recipe to search term mapping in SOLR. As this is in no way related to catentry or catgroup, we don't want to keep it in the above listed cores, and hence we will create a new one.

Steps
1. Change the solr.xml
  • This xml has the configurations of the solr cores defined. Navigate to WCS_InstallDir/Search/solr/home and open solr.xml
  • To define a new core, add the below line to it. "instanceDir" is the name of the folder where the configuration files are present and "name" is the name of the new core.
        <core instanceDir="Recipes" name="recipes"/>

2. Create the instance directory
     Every solr core must have a set of configuration files, which live in the instance directory. Solr provides a default core. Make a copy of this core and name the copy according to solr.xml. So I would make a copy of the Default folder and rename it "Recipes".

3. Alter schema.xml
    Schema.xml defines the solr schema for the specific core. In order to index a field we need to use either an OOB solr field type or create a custom field type of our own. Below is an example.

<fieldType name="text_suggest" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
<charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.WordDelimiterFilterFactory" 
   generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1"
   catenateAll="1" splitOnCaseChange="1" splitOnNumerics="1" preserveOriginal="1" />
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
<analyzer type="query">
<charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.WordDelimiterFilterFactory" 
   generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0"
   catenateAll="0" splitOnCaseChange="0" splitOnNumerics="1" preserveOriginal="0" />
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
</fieldType>

Also add the fields that we need to create to the schema.xml.


<field name="searchterm" type="text_suggest" indexed="true" stored="true" omitNorms="true"/>
<field name="recipes" type="text_suggest" indexed="true" stored="true" omitNorms="true" />

4. Add the new core in the extended wc-search.xml

Add the below line inside the <_config:cores> tag

<_config:core catalog="0" indexName="Recipes" language="en_US" name="recipes" path="Recipes" serverName="AdvancedConfiguration_1" />

This has to be done in the wc-search.xml in both the Search and WC projects.

5. Clean, build, restart and publish. Hit the below URL and you should be able to see the new core up and running.
http://localhost/solr/recipes/select?q=*:*

Populating the indexes in this custom SOLR core will be covered in the next blog post.
