Oh my god. It's full of code!

Posts tagged “salesforce”

Deep Clone (Round 2)

So a day or two ago I posted my first draft of a deep clone, which would allow easy cloning of an entire data hierarchy. It was a semi-proof-of-concept thing with some limitations (it could only handle somewhat smaller data sets, and didn’t let you configure all-or-nothing inserts, or specify whether you wanted to copy standard objects as well as custom ones). I was doing some thinking and I remembered hearing about the Queueable interface, which allows for asynchronous processing and bigger governor limits. I started thinking about chaining queueable jobs together to allow for copying much larger data sets. Each invocation would get its own governor limits and could theoretically go on as long as it took, since you can chain jobs indefinitely. I had attempted to use Queueable to solve this before, but I made the mistake of trying to kick off multiple jobs per invocation (one for each related object type). This obviously didn’t work due to limits imposed on queueable jobs. Once I thought of a way to only need one invocation per call (basically just rolling all the records that need to get cloned into one object and iterating over it) I figured I might have a shot at making this work. I took what I had written before, added a few options, and I think I’ve done it. An asynchronous deep clone that operates in distinct batches with all-or-nothing handling, and cleanup in case of error. This is some hot-off-the-presses code, so there are likely some lingering bugs, but I was too excited not to share this.
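The chaining pattern itself boils down to something simple, so here’s a minimal hypothetical sketch of it first (names like ChainedJob are made up for illustration, this is not the real class): each execute() gets a fresh set of governor limits, and the job re-enqueues a single copy of itself while work remains.

public class ChainedJob implements Queueable {

    //everything left to process; each execution works through part of this and passes the rest on
    public list<Id> workQueue = new list<Id>();

    public void execute(QueueableContext context) {
        //process up to 100 ids this execution (stand-in for the real cloning work)
        integer batchSize = Math.min(100, workQueue.size());
        for(integer i = 0; i < batchSize; i++) {
            Id currentId = workQueue.remove(0);
            //do the actual work for currentId here
        }

        //a queueable may only enqueue one child job, so all remaining work rides along in one queue
        if(!workQueue.isEmpty()) {
            ChainedJob next = new ChainedJob();
            next.workQueue = workQueue;
            System.enqueueJob(next);
        }
    }
}

Feast your eyes!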

public class deepClone implements Queueable {

    //global describe to hold object describe data for query building and relationship iteration
    public map<String, Schema.SObjectType> globalDescribeMap = Schema.getGlobalDescribe();
    
    //holds the data to be cloned. Keyed by object type. Contains cloneData which contains the object to clone, and some data needed for queries
    public map<string,cloneData> thisInvocationCloneMap = new map<string,cloneData>();
    
    //should the clone process be all or nothing?
    public boolean allOrNothing = false;
    
    //each iteration adds the records it creates to this property so in the event of an error we can roll it all back
    public list<id> allCreatedObjects = new list<id>();
    
    //only clone custom objects. Helps to avoid trying to clone system objects like chatter posts and such.
    public boolean onlyCloneCustomObjects = true;
    
    public static id clone(id sObjectId, boolean onlyCustomObjects, boolean allOrNothing)
    {
        
        deepClone startClone= new deepClone();
        startClone.onlyCloneCustomObjects  = onlyCustomObjects;
        startClone.allOrNothing = allOrNothing;
        
        sObject thisObject = sObjectId.getSobjectType().newSobject(sObjectId);
        cloneData thisClone = new cloneData(new list<sObject>{thisObject}, new map<id,id>());
        map<string,cloneData> cloneStartMap = new map<string,cloneData>();
        
        cloneStartMap.put(sObjectId.getSobjectType().getDescribe().getName(),thisClone);
        
        startClone.thisInvocationCloneMap = cloneStartMap;
        return System.enqueueJob(startClone);      
    }
    
    public void execute(QueueableContext context) {
        deepCloneBatched();
    }
        
    /**
    * @description Clones the current batch of records along with the entire related data hierarchy beneath them. By default only clones custom objects; standard objects are skipped because they increase the risk of hitting governor limits.
    * @return list<id> the ids of all of the objects that have been created during the clone so far.
    **/
    public list<id> deepCloneBatched()
    {
        map<string,cloneData> nextInvocationCloneMap = new map<string,cloneData>();
        
        //iterate over every object type in the public map
        for(string relatedObjectType : thisInvocationCloneMap.keySet())
        { 
            list<sobject> objectsToClone = thisInvocationCloneMap.get(relatedObjectType).objectsToClone;
            map<id,id> previousSourceToCloneMap = thisInvocationCloneMap.get(relatedObjectType).previousSourceToCloneMap;
            
            system.debug('\n\n\n--------------------  Cloning ' + objectsToClone.size() + ' records');
            list<id> objectIds = new list<id>();
            list<sobject> clones = new list<sobject>();
            map<id,id> sourceToCloneMap = new map<id,id>();
            list<database.saveresult> cloneInsertResult;
                       
            //if this function has been called recursively, then the previous batch of cloned records
            //have not been inserted yet, so now they must be before we can continue. Also, in that case
            //because these are already clones, we do not need to clone them again, so we can skip that part
            if(objectsToClone[0].Id == null)
            {
                //if they don't have an id that means these records are already clones. So just insert them with no need to clone beforehand.
                cloneInsertResult = database.insert(objectsToClone,allOrNothing);

                clones.addAll(objectsToClone);
                
                for(sObject thisClone : clones)
                {
                    sourceToCloneMap.put(thisClone.getCloneSourceId(),thisClone.Id);
                }
                            
                objectIds.addAll(new list<id>(previousSourceToCloneMap.keySet()));
                //get the ids of all these objects.                    
            }
            else
            {
                //get the ids of all these objects.
                for(sObject thisObj :objectsToClone)
                {
                    objectIds.add(thisObj.Id);
                }
    
                //create a select all query to get all the data for these objects since if we only got passed a basic sObject without data 
                //then the clone will be empty
                string objectDataQuery = buildSelectAllStatment(relatedObjectType);
                
                //add a where condition
                objectDataQuery += ' where id in :objectIds';
                
                //get the details of this object
                list<sObject> objectToCloneWithData = database.query(objectDataQuery);
    
                for(sObject thisObj : objectToCloneWithData)
                {              
                    sObject clonedObject = thisObj.clone(false,true,false,false);
                    clones.add(clonedObject);               
                }    
                
                //insert the clones
                cloneInsertResult = database.insert(clones,allOrNothing);
                
                for(sObject thisClone : clones)
                {
                    sourceToCloneMap.put(thisClone.getCloneSourceId(),thisClone.Id);
                }
            }        
            
            for(database.saveResult saveResult :  cloneInsertResult)
            {
                if(saveResult.isSuccess())
                {
                    allCreatedObjects.add(saveResult.getId());
                }
                else if(allOrNothing)
                {
                    cleanUpError();
                    return allCreatedObjects;
                }
            }
              
            //Describes this object type so we can deduce its child relationships
            Schema.DescribeSObjectResult objectDescribe = globalDescribeMap.get(relatedObjectType).getDescribe();
                        
            //get this object's child relationship types
            List<Schema.ChildRelationship> childRelationships = objectDescribe.getChildRelationships();
    
            system.debug('\n\n\n-------------------- ' + objectDescribe.getName() + ' has ' + childRelationships.size() + ' child relationships');
            
            //then have to iterate over every child relationship type, and every record of that type and clone them as well. 
            for(Schema.ChildRelationship thisRelationship : childRelationships)
            { 
                          
                Schema.DescribeSObjectResult childObjectDescribe = thisRelationship.getChildSObject().getDescribe();
                string relationshipField = thisRelationship.getField().getDescribe().getName();
                
                try
                {
                    system.debug('\n\n\n-------------------- Looking at ' + childObjectDescribe.getName() + ' which is a child object of ' + objectDescribe.getName());
                    
                    if(!childObjectDescribe.isCreateable() || !childObjectDescribe.isQueryable())
                    {
                        system.debug('-------------------- Object is not one of the following: queryable, creatable. Skipping attempting to clone this object');
                        continue;
                    }
                    if(onlyCloneCustomObjects && !childObjectDescribe.isCustom())
                    {
                        system.debug('-------------------- Object is not custom and custom object only clone is on. Skipping this object.');
                        continue;                   
                    }
                    if(Limits.getQueries() >= Limits.getLimitQueries())
                    {
                        system.debug('\n\n\n-------------------- Governor limits hit. Must abort.');
                        
                        //if we hit a limit, and this is an all or nothing job, we have to delete what we created and abort
                        if(allOrNothing)
                        {
                            cleanUpError();
                        }
                        return allCreatedObjects;
                    }
                    //create a select all query from the child object type
                    string childDataQuery = buildSelectAllStatment(childObjectDescribe.getName());
                    
                    //add a where condition that will only find records related to the records being cloned, via the relationship field
                    childDataQuery+= ' where '+relationshipField+ ' in :objectIds';
                    
                    //get the details of this object
                    list<sObject> childObjectsWithData = database.query(childDataQuery);
                    
                    system.debug('\n\n\n-------------------- Object queried. Found ' + childObjectsWithData.size() + ' records to clone');
                    
                    if(!childObjectsWithData.isEmpty())
                    {               
                        map<id,id> childRecordSourceToClone = new map<id,id>();
                        
                        //use a fresh list for each child type so clones of one object type don't end up in another type's batch
                        list<sObject> newClones = new list<sObject>();
                        
                        for(sObject thisChildObject : childObjectsWithData)
                        {
                            childRecordSourceToClone.put(thisChildObject.Id,null);
                            
                            //clone the object
                            sObject newClone = thisChildObject.clone();
                            
                            //since the record we cloned still has the original parent id, we now need to update the clone with the id of its cloned parent.
                            //to do that we reference the map we created above and use it to get the new cloned parent.                        
                            system.debug('\n\n\n----------- Attempting to change parent of clone....');
                            id newParentId = sourceToCloneMap.get((id) thisChildObject.get(relationshipField));
                            
                            system.debug('Old Parent: ' + thisChildObject.get(relationshipField) + ' new parent ' + newParentId);
                            
                            //write the new parent value into the record
                            newClone.put(thisRelationship.getField().getDescribe().getName(),newParentId );
                            
                            //add this new clone to the list. It will be inserted once the next job runs. I know it's a little odd to not just insert them now,
                            //but it saves on redundant logic in the long run.
                            newClones.add(newClone);             
                        }  
                        cloneData thisCloneData = new cloneData(newClones,childRecordSourceToClone);
                        nextInvocationCloneMap.put(childObjectDescribe.getName(),thisCloneData);                             
                    }                                       
                       
                }
                catch(exception e)
                {
                    system.debug('\n\n\n---------------------- Error attempting to clone child records of type: ' + childObjectDescribe.getName());
                    system.debug(e); 
                }            
            }          
        }
        
        system.debug('\n\n\n-------------------- Done iterating cloneable objects.');
        
        system.debug('\n\n\n-------------------- Clone Map below');
        system.debug(nextInvocationCloneMap);
        
        system.debug('\n\n\n-------------------- All created object ids thus far across this invocation');
        system.debug(allCreatedObjects);
        
        //if our map is not empty that means we have more records to clone. So queue up the next job.
        if(!nextInvocationCloneMap.isEmpty())
        {
            system.debug('\n\n\n-------------------- Clone map is not empty. Sending objects to be cloned to another job');
            
            deepClone nextIteration = new deepClone();
            nextIteration.thisInvocationCloneMap = nextInvocationCloneMap;
            nextIteration.allCreatedObjects = allCreatedObjects;
            nextIteration.onlyCloneCustomObjects  = onlyCloneCustomObjects;
            nextIteration.allOrNothing = allOrNothing;
            id  jobId = System.enqueueJob(nextIteration);       
            
            system.debug('\n\n\n-------------------- Next queueable job scheduled. Id is: ' + jobId);  
        }
        
        system.debug('\n\n\n-------------------- Cloning done!');
        
        return allCreatedObjects;
    }
     
    /**
    * @description create a string which is a select statement for the given object type that will select all fields. Equivalent to SELECT * FROM objectName in SQL
    * @param objectName the API name of the object which to build a query string for
    * @return string a string containing the SELECT keyword, all the fields on the specified object and the FROM clause to specify that object type. You may add your own where statements after.
    **/
    public string buildSelectAllStatment(string objectName){ return buildSelectAllStatment(objectName, new list<string>());}
    public string buildSelectAllStatment(string objectName, list<string> extraFields)
    {       
        // Initialize setup variables
        String query = 'SELECT ';
        String objectFields = String.Join(new list<string>(globalDescribeMap.get(objectName).getDescribe().fields.getMap().keySet()),',');
        if(extraFields != null)
        {
            objectFields += ','+String.Join(extraFields,',');
        }
        
        objectFields = objectFields.removeEnd(',');
        
        query += objectFields;
    
        // Add FROM statement
        query += ' FROM ' + objectName;
                 
        return query;   
    }    
    
    public void cleanUpError()
    {
        database.delete(allCreatedObjects);
    }
    
    public class cloneData
    {
        public list<sObject> objectsToClone = new list<sObject>();        
        public map<id,id> previousSourceToCloneMap = new map<id,id>();  
        
        public cloneData(list<sObject> objects, map<id,id> previousDataMap)
        {
            this.objectsToClone = objects;
            this.previousSourceToCloneMap = previousDataMap;
        }   
    }    
}    

It’ll clone your record, your record’s children, your record’s children’s children, and yes, even your record’s children’s children’s children (you get the point)! Simply invoke the deepClone.clone() method with the id of the object to start the clone process at, whether you want to only copy custom objects, and whether you want all-or-nothing processing. Deep Clone takes care of the rest, automatically handling relationship discovery, cloning, re-parenting, and generally being awesome. As always I’m happy to get feedback or suggestions! Enjoy!
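For example, kicking it off from anonymous Apex might look like this (the record id is a placeholder; use the id of whatever record should sit at the root of the copied hierarchy):

//hypothetical usage sketch: clone a record hierarchy, custom objects only, all or nothing
Id rootRecordId = 'a0B000000000001'; //placeholder id
Id jobId = deepClone.clone(rootRecordId, true, true);
system.debug('Deep clone job enqueued: ' + jobId);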

-Kenji


Salesforce True Deep Clone, the (Im)Possible Dream

So getting back to work work (sorry alexa/amazon/echo, I’ve gotta pay for more smart devices somehow), I’ve been working on a project where there is a fairly in-depth hierarchy of records. We will call them surveys. These surveys have records related to them. Those records have other records related to them, and so on. It’s a semi-complicated “tree” that goes about 5 levels deep with different kinds of objects in each “branch”. Of course with such a complicated structure, but a common need to copy and modify it for a new project, the request for a better clone came floating across my desk. Now Salesforce does have a nice clone tool built in, but it doesn’t have the ability to copy an entire hierarchy, and some preliminary searches didn’t turn up anything great either. The reason why? It’s pretty damn tricky, and governor limits can initially make it seem impossible. What I have here is an initial attempt at a ‘true deep clone’ function. You give it a record (or possibly a list of records, but I wouldn’t push your luck) to clone. It will do that, and then clone the children, and re-parent them to your new clone. It will then find all those records’ children and clone and re-parent them as well, all the way down. Without further ado, here is the code.

    //clones a batch of records. Must all be of the same type.
    //very experimental. Small jobs only!
    public static Map<String, Schema.SObjectType> globalDescribeMap = Schema.getGlobalDescribe();    
    public static list<sObject> deepCloneBatched(list<sObject> objectsToClone) { return deepCloneBatched(objectsToClone,new map<id,id>());}
    public static list<sObject> deepCloneBatched(list<sObject> objectsToClone, map<id,id> previousSourceToCloneMap)
    {
        system.debug('\n\n\n--------------------  Cloning ' + objectsToClone.size() + ' records');
        list<id> objectIds = new list<id>();
        list<sobject> clones = new list<sobject>();
        map<id,id> sourceToCloneMap = new map<id,id>();
        
        
        if(objectsToClone.isEmpty())
        {
            system.debug('\n\n\n-------------------- No records in set to clone. Aborting');
            return clones;
        }
                
        //if this function has been called recursively, then the previous batch of cloned records
        //have not been inserted yet, so now they must be before we can continue. Also, in that case
        //because these are already clones, we do not need to clone them again, so we can skip that part
        if(objectsToClone[0].Id == null)
        {
            //if they don't have an id that means these records are already clones. So just insert them with no need to clone beforehand.
            insert objectsToClone;
            clones.addAll(objectsToClone);
            
            for(sObject thisClone : clones)
            {
                sourceToCloneMap.put(thisClone.getCloneSourceId(),thisClone.Id);
            }
                        
            objectIds.addAll(new list<id>(previousSourceToCloneMap.keySet()));
            //get the ids of all these objects.                    
        }
        else
        {
            //get the ids of all these objects.
            for(sObject thisObj :objectsToClone)
            {
                objectIds.add(thisObj.Id);
            }
            
            for(sObject thisObj : objectsToClone)
            {
                sObject clonedObject = thisObj.clone(false,true,false,false);
                clones.add(clonedObject);               
            }    
            
            //insert the clones
            insert clones;
            
            for(sObject thisClone : clones)
            {
                sourceToCloneMap.put(thisClone.getCloneSourceId(),thisClone.Id);
            }
        }        

        //figure out what kind of object we are dealing with
        string relatedObjectType = objectsToClone[0].Id.getSobjectType().getDescribe().getName();
        
        //Describes this object type so we can deduce its child relationships
        Schema.DescribeSObjectResult objectDescribe = globalDescribeMap.get(relatedObjectType).getDescribe();
                    
        //get this object's child relationship types
        List<Schema.ChildRelationship> childRelationships = objectDescribe.getChildRelationships();

        system.debug('\n\n\n-------------------- ' + objectDescribe.getName() + ' has ' + childRelationships.size() + ' child relationships');
        
        //then have to iterate over every child relationship type, and every record of that type and clone them as well. 
        for(Schema.ChildRelationship thisRelationship : childRelationships)
        { 
                      
            Schema.DescribeSObjectResult childObjectDescribe = thisRelationship.getChildSObject().getDescribe();
            string relationshipField = thisRelationship.getField().getDescribe().getName();
            
            try
            {
                system.debug('\n\n\n-------------------- Looking at ' + childObjectDescribe.getName() + ' which is a child object of ' + objectDescribe.getName());
                
                if(!childObjectDescribe.isCreateable() || !childObjectDescribe.isQueryable() || !childObjectDescribe.isCustom())
                {
                    system.debug('-------------------- Object is not one of the following: queryable, creatable, or custom. Skipping attempting to clone this object');
                    continue;
                }
                if(Limits.getQueries() >= Limits.getLimitQueries())
                {
                    system.debug('\n\n\n-------------------- Governor limits hit. Must abort.');
                    return clones;
                }
                //create a select all query from the child object type
                string childDataQuery = buildSelectAllStatment(childObjectDescribe.getName());
                
                //add a where condition that will only find records related to the records being cloned, via the relationship field
                childDataQuery+= ' where '+relationshipField+ ' in :objectIds';
                
                //get the details of this object
                list<sObject> childObjectsWithData = database.query(childDataQuery);
                
                if(!childObjectsWithData.isEmpty())
                {               
                    map<id,id> childRecordSourceToClone = new map<id,id>();
                    
                    //use a fresh list for each child type so already-inserted clones don't get cloned again on the next relationship
                    list<sObject> newClones = new list<sObject>();
                    
                    for(sObject thisChildObject : childObjectsWithData)
                    {
                        childRecordSourceToClone.put(thisChildObject.Id,null);
                        
                        //clone the object
                        sObject newClone = thisChildObject.clone();
                        
                        //since the record we cloned still has the original parent id, we now need to update the clone with the id of its cloned parent.
                        //to do that we reference the map we created above and use it to get the new cloned parent.                        
                        system.debug('\n\n\n----------- Attempting to change parent of clone....');
                        id newParentId = sourceToCloneMap.get((id) thisChildObject.get(relationshipField));
                        
                        system.debug('Old Parent: ' + thisChildObject.get(relationshipField) + ' new parent ' + newParentId);
                        
                        //write the new parent value into the record
                        newClone.put(thisRelationship.getField().getDescribe().getName(),newParentId );
                        
                        //add this new clone to the list. It will be inserted once the deepClone function is called again. I know it's a little odd to not just insert them now,
                        //but it saves on redundant logic in the long run.
                        newClones.add(newClone);             
                    }  
                    //now we need to call this function again, passing in the newly cloned records, so they can be inserted, as well as passing in the ids of the original records
                    //that spawned them so the next time the query can find the records that currently exist that are related to the kind of records we just cloned.                
                    clones.addAll(deepCloneBatched(newClones,childRecordSourceToClone));                                  
                }                    
            }
            catch(exception e)
            {
                system.debug('\n\n\n---------------------- Error attempting to clone child records of type: ' + childObjectDescribe.getName());
                system.debug(e); 
            }            
        }
        
        return clones;
    }
     
    /**
    * @description create a string which is a select statement for the given object type that will select all fields. Equivalent to SELECT * FROM objectName in SQL
    * @param objectName the API name of the object which to build a query string for
    * @return string a string containing the SELECT keyword, all the fields on the specified object and the FROM clause to specify that object type. You may add your own where statements after.
    **/
    public static string buildSelectAllStatment(string objectName){ return buildSelectAllStatment(objectName, new list<string>());}
    public static string buildSelectAllStatment(string objectName, list<string> extraFields)
    {       
        // Initialize setup variables
        String query = 'SELECT ';
        String objectFields = String.Join(new list<string>(Schema.getGlobalDescribe().get(objectName).getDescribe().fields.getMap().keySet()),',');
        if(extraFields != null)
        {
            objectFields += ','+String.Join(extraFields,',');
        }
        
        objectFields = objectFields.removeEnd(',');
        
        query += objectFields;
    
        // Add FROM statement
        query += ' FROM ' + objectName;
                 
        return query;   
    }

You should be able to just copy and paste that into a class, invoke the deepCloneBatched method with the record you want to clone, and it should take care of the rest, cloning every related record that it can (there’s a usage sketch at the end of this post). It skips non-custom objects for now (because I didn’t need them) but you can adjust that by removing the part of the if condition in the child relationship loop that says

|| !childObjectDescribe.isCustom()

And then it will also clone all the standard objects it can. Again this is kind of a ‘rough draft’ but it does seem to be working. Even cloning 111 records of several different types, I was still well under all governor limits. I’d explain more about how it works, but the comments are there, it’s 3:00 in the morning, and I’m content to summarize the workings of it by shouting “It’s magic. Don’t question it”, and walking off stage. Let me know if you have any clever ways to make it more efficient, of which I have no doubt there are. Anyway, enjoy. I hope it helps someone out there.
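As promised, here’s roughly what invoking it might look like. This is a hypothetical sketch assuming you pasted the methods into a class named CloneUtil, and that Survey__c is the object at the top of your hierarchy:

//hypothetical usage: clone one survey and everything beneath it
Survey__c original = [SELECT Id FROM Survey__c LIMIT 1];
list<sObject> created = CloneUtil.deepCloneBatched(new list<sObject>{original});
system.debug('Created ' + created.size() + ' cloned records');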



Super Handy Mass Deploy Tool

So I know it has been a while. I’m not dead, I promise, just busy. Busy with trying to keep about a thousand orgs in sync, pushing code changes, layout changes, all kinds of junk from one source org to a ton of other orgs. I know you are saying ‘just use managed packages, or change sets’. Managed packages can be risky early in the dev process because you usually can’t remove components and you get locked into a bit of a structure that you might not quite be settled on. Change sets are great, but many of these orgs are not linked; they are completely disparate, for different clients. Over the course of the last month or two it’s become apparent that just shuffling data around in Eclipse wasn’t going to do it anymore. I was going to have to break into using ANT and the Salesforce migration tool.

For those unaware, ANT is some kind of magical command line tool that is used by the Salesforce migration tool (or maybe vice versa, not really sure of the relationship there), but when they work together they allow you to script deployments, which can be pretty useful. Normally though, trying to actually set up a deployment with ANT is a huge pain in the butt because you have to be modifying XML files, setting up build files and stuff; in general it’s kind of slow to do. However, if you could write a script to write the files needed by the deployment script, now that would be handy. That is where this tool I wrote comes in. Now don’t get me wrong, it’s nothing fancy. It just helps make generating deployments a little easier. What it does is allow you to specify a list of orgs and their credentials that you want to deploy to. In the deploy folder you place the package.xml file that contains the definitions of what you want to deploy, and the metadata itself (classes, triggers, objects, etc). Then when you run the program, one by one it will log into each org, back it up, then deploy your package contents. It’s a nice set-it-and-forget-it way of deploying to numerous orgs in one go.

So here is what we are going to do. First of all, you are going to need to make sure you have a Java Runtime Environment (JRE) and the Java Development Kit (JDK) installed. Make sure to set your JAVA_HOME environment variable path to wherever the JDK library is installed (for me it was C:\Program Files\Java\jdk1.8.0_05). Then grab ANT and follow its guide for install. Then grab the Force.com migration tool and get that installed in your ANT setup. Then last, grab my SF Deploy Tool from bitbucket (https://Daniel_Llewellyn@bitbucket.org/Daniel_Llewellyn/sf-deploy-tool.git)

Now we have all the tools we need to deploy some components, but we don’t have anything to deploy, and we haven’t set up who we are going to deploy it to. So let’s use Eclipse to grab our deployable contents and generate our package.xml file (which contains the list of stuff to deploy). Fire up Eclipse and create a new project. For the project contents, select whatever you want to deploy to your target orgs. This is why using a package is useful, because it simplifies this process. Let the IDE download all the files for your project, then navigate to the project contents folder on your computer. Copy everything inside the src folder, including that package.xml file. Then paste it into the deploy folder of my SF deploy tool. This is the payload that will be pushed to your orgs.

The last step in our setup is to tell the deploy tool which orgs to push this content into. Open the orgs.txt file in the SF Deployer folder and enter the required information, one org per line. Each org requires a username, password, token, url and name attribute, separated by semicolons, with an equal sign used to denote the key/value. For example:

username=xxxx;password=xxxxx;token=xxxxxxxxx;url=https://login.salesforce.com;name=TEST ORG
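A sandbox entry looks the same, just with the sandbox login URL (the values here are placeholders):

username=yyyy;password=yyyyy;token=yyyyyyyyy;url=https://test.salesforce.com;name=SANDBOX ORG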

Now with all your credentials saved, you can run the SalesforceMultiDeploy.exe utility. It will iterate over each org one by one, back up the org, then deploy your changes. The console window will keep you informed of its progress as it goes and let you know when it’s all done. Of course this process is still subject to all the normal deploy problems you can encounter, but if everything in the target orgs is prepared to accept your deployment package, this can make life much easier. You could for example write another small script that copies the content from your source org at the end of each week, slaps it into the deploy folder, then invokes the deployment script to have an automated process that keeps your orgs in sync.

Also I just threw this tool together quickly and would love some feedback. So either fork it and change it, or just give me ideas and I’ll do my best to implement them (one thing I really want to do is make this multi threaded so that it can do deployments in parallel instead of serial, which would be a huge bonus for deployment speeds). Anyway as always, I hope this is useful, and I’ll catch ya next time.

-Kenji


Salesforce Orchestra CMS Controller Extensions

So I’ve been working with Orchestra CMS for Salesforce recently, and for those who end up having to use it, I have a few tips.

1) If you intend on using jQuery (a newer version than the one they include) include it, and put it in no conflict mode. Newer versions of jQuery will break the admin interface (mostly around trying to publish content) so you absolutely must put it in no conflict mode. This one took me a while to debug.

2) While not officially supported, you can use controller extensions in your templates. However the class, and all contained methods, MUST be global. If they are not, again you will break the admin interface. This was kind of obvious after the fact, but it took me well over a week to stumble across how to fix it. The constructor for the extension takes a cms.CoreController object. As an alternative, if you don’t want to mess with extensions, what you can do is use apex:include to include another page that has the controller set to whatever you want. The included page does not need to have the CMS controller as its primary controller, so you can do whatever you want there. I might actually recommend that approach, as Orchestra’s official stance is that they do not support extensions, and even though I HAD it working, today I am noticing it act a little buggy (not able to add or save new content to a page).

3) Don’t be afraid to use HTML component types in your pages (individual items derived from your page template) to call javascript functions stored in your template. In fact I found that you cannot call remoting functions from within an HTML component directly, but you can call a function which invokes a remoting function.

So if we combine the above techniques we’d have a controller that looks like this

global class DetailTemplateController
{
    global DetailTemplateController(cms.CoreController stdController) {

    }

    @remoteAction
    global static list<user> getUsers()
    {
        return [select id, name, title, FullPhotoUrl from user ];
    }
}

And your template might then look something like this

<apex:page id="DetailOne" controller="cms.CoreController" standardStylesheets="false" showHeader="false" sidebar="false" extensions="DetailTemplateController" >
	<apex:composition template="{!page_template_reference}">
		<apex:define name="header"> 
			<link href="//ajax.aspnetcdn.com/ajax/jquery.ui/1.10.3/themes/smoothness/jquery-ui.min.css" rel='stylesheet' />

			<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
			<script> var jqNew = jQuery.noConflict();</script> 
			<script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.10.3/jquery-ui.min.js"></script> 

			<script>
        	        var website = new Object();
			jqNew( document ).ready(function() {
				console.log('jQuery loaded');
			});

			website.buildUserTable = function()
			{
				//remoting request
				Visualforce.remoting.Manager.invokeAction(
					'{!$RemoteAction.DetailTemplateController.getUsers}', 
					function(result, event){
						if (event.type === 'exception') 
						{
							console.log(event.message);
						} 
						else 
						{
							var cols = 0;

							var tbl = jqNew('#bioTable > tbody');
							var tr;
							for(var i = 0; i < result.length; i++)
							{
								if(cols == 0){tr = jqNew('<tr></tr>');}                              

								var td = jqNew('<td></td>');

								var img = jqNew('<img class="profilePhoto">');
								img.attr('src',result[i].FullPhotoUrl);
								img.attr('title',result[i].Title);
								img.attr('alt',result[i].Name);
								img.data("record", result[i]);
								img.attr('id',result[i].Id);

								td.append(img);

								tr.append(td);

								if(cols == 2 || i == result.length-1){
									tbl.append(tr);
									cols = -1;
								}
								cols++;

							}

						}
					})			
			}
			</script>
		</apex:define>
		<apex:define name="body">
			<div class="container" id="mainContainer">
				<div class="pageContent">
					<div id="header">
						<apex:include pageName="Header"/>
						<div id="pageTitle">
							<cms:Panel panelName="PageTitle" panelController="{!controller}" panelheight="50px" panelwidth="200px"/>
						</div>
					</div>
					<div id="pageBody">
						<p>
							<cms:Panel panelName="PageContentArea" panelController="{!controller}"  panelheight="200px" panelwidth="400px" />
						</p>
						<div class="clearfloat"></div>
					</div>

					<!-- end .content --> 
				</div>
			</div>
			<div id="footer_push"></div>
			<div id="footer">
				<apex:include pageName="Footer"/>
			</div>
		</apex:define>
	</apex:composition>
</apex:page>

Then in our page we can add an HTML content area and include

<table id="bioTable">
	<tbody></tbody>
</table>
<script>website.buildUserTable();</script>

So when that page loads it will draw that table and invoke the website.buildUserTable function. That function in turn calls the remoting method in the DetailTemplateController extension that we created. The query runs and returns the user data, which is then used to create the rows of the table that are then appended to the #bioTable’s body. It’s a pretty slick approach that seems to work well for me. Your mileage may vary, but at least rest assured you can use your own version of javascript, and you can use controller extensions, which I wasn’t sure about when I started working on it. Till next time.


Visualforce Force Download of PDF or Other Content

Hey everyone,

This next trick is one I’ve kind of been keeping under my hat since it’s a nice polishing touch for some of my contest entries, but I figured I should probably share it with the world now (information must be free, etc). So we all know we can create Visualforce pages that render as PDF documents. It’s a pretty cool feature, especially because business people love PDF files more than I love being a cynical ass (which is like… a lot). Though the one little annoyance is that normally when you create that PDF Visualforce page the user is brought to it to view it, where they then can download it. Many times they simply want to download it and attach it to an email or something; the viewing isn’t required and is generally just an extra few wasted seconds waiting for it to load so they can hit file->save as. I have found/built a nifty way to force download of the file using a combination of Apex and some tricky DOM manipulation. As an added bonus I’ll show you how to conditionally render the page as a PDF based on a URL param. Here we go!

The first thing we’ll need of course is our Visualforce page, we’ll keep it simple for this example. So here is our visualforce page

<apex:page controller="forceDownloadPDF" renderAs="{!renderAs}">
<h2>PDF Download example</h2>

<p>This is some content that could be displayed as a PDF or a regular web page depending on the URL params. The valid URL params are as follows</p>
<table width="100%" cellpadding="5" cellspacing="5">
    <tr>
        <th>Name</th>
        <th>Type</th>
        <th>Default</th>
        <th>Required</th>
        <th>Description</th>
    </tr>
    <tr>
        <td>pdf</td>
        <td>String with a boolean value</td>
        <td>null/false</td>
        <td>false</td>
        <td>if passed in as true the page will be rendered as a PDF. Otherwise displayed as HTML</td>
    </tr>
    <tr>
        <td>force_download</td>
        <td>String with a boolean value</td>
        <td>null/false</td>
        <td>false</td>
        <td>If true the user will be prompted to download the contents of the page. Suggested to be paired with pdf=true</td>
    </tr>
    <tr>
        <td>filename</td>
        <td>String (valid file name)</td>
        <td>'My PDF Report [today's date].pdf'</td>
        <td>false</td>
        <td>A name for the file. Only used if force_download=true</td>
    </tr>    
</table>

</apex:page>

And now our controller

public class forceDownloadPDF {

    public string renderAs{get;set;}

    public forceDownloadPDF()
    {

        //figure out if the user passed in the pdf url variable and if it is set to true.
        if(ApexPages.currentPage().getParameters().get('pdf') != null && ApexPages.currentPage().getParameters().get('pdf') == 'true') 
        {
            //if so, we are rendering this thing as a pdf. If there were other renderas options that were valid we could consider allowing the user to pass
            //in the actual renderAs type in the url, but as it stands the only options are pdf and null so no reason to allow the user to pass that in directly.
            renderAs = 'pdf';

            //figure out if we are forcing download or not.
            if(ApexPages.currentPage().getParameters().get('force_download') != null && ApexPages.currentPage().getParameters().get('force_download') == 'true') 
            {
                //setup a default file name
                string fileName = 'My PDF Report '+date.today()+'.pdf';

                //we can even get more creative and allow the user to pass in a filename via the URL so it can be customized further
                if(apexPages.currentPage().getParameters().get('filename') != null)
                {
                    fileName = apexPages.currentPage().getParameters().get('filename') +'.pdf';
                }
                //here is were the magic happens. We have to set the content disposition as attachment.
                Apexpages.currentPage().getHeaders().put('content-disposition', 'attachment; filename='+fileName);
            }               
        }        
    }
}

As noted in the comments, the real secret here is setting the content disposition via the Apex getHeaders method.
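If you instead wanted the browser to try to display the document in place rather than prompt a download, a minimal sketch would be swapping the disposition type to 'inline' (the filename value here is just an example):

//hypothetical variant: 'inline' asks the browser to render the PDF in the window instead of downloading it
Apexpages.currentPage().getHeaders().put('content-disposition', 'inline; filename=My PDF Report.pdf');

Now you are saying,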

‘But Kenji if I call that page from a link it still opens in a new window, it just forces the user to download the file. That’s not much better!’

Oh ye of little faith, of course I got you covered. You think I’d leave you with a half-done solution like that? Hell no. Let’s take this mutha to the next level. Here is what we are going to do. Using a custom button with onClick javascript, we are going to create an iframe with the source set as that Visualforce page (with the force_download=true param) and inject it into the DOM. When the frame loads (which will have 0 width and height so it’s not visible) that code still runs, prompting the user to download the file. They are none the wiser that a frame got injected; all they see is a happy little download dialog prompt. So go create a custom button on an object that you want to prompt the user to download your file from. Make it a detail page button (you could do a list button too, but that’s a topic for another day). Make it onClick javascript. Then slap this code in there.

ifrm = document.createElement("IFRAME"); 
ifrm.setAttribute("src", "/apex/yourPage?pdf=true&force_download=true&filename=My Happy File"); 
ifrm.style.width = 0+"px"; 
ifrm.style.height = 0+"px"; 
document.body.appendChild(ifrm);

Of course replace ‘yourPage’ with the name of your Visualforce page. The filename of course can be changed to be details from the record, or whatever you like. Now when the user clicks that button the javascript creates an invisible iframe and injects it into the DOM. Once it loads the user is prompted to download the file. Pretty slick eh?

Hope you dig it. Catch ya next time.


Salesforce Dashboard Automatic Refresh Bookmarklet

Hey all,

Quick fun little chunk of code here for you. This code, when saved as a bookmarklet (javascript saved as a bookmark which runs on the current page when clicked), will cause Salesforce dashboards to automatically refresh every X seconds, where X is a variable near the top of the code (defaults to 90 seconds). It also injects a little timer on the refresh button, and is smart enough to wait for the dashboards to refresh before it continues the next countdown. I haven’t cross-browser tested it yet (built in Chrome 25) but as long as the browser supports the DOMSubtreeModified event listener you are probably fine. Just save the code as a bookmarklet, navigate to your dashboard page and click the bookmarklet. You should see a small timer show up on the refresh button. When the timer hits 0 the dashboard should refresh, and the timer will reset back to the default time and begin counting down again.

javascript:(
    function() 
    {
        var refreshInterval = 90; //number of seconds between each refresh
        var counter = refreshInterval;
        var timerInterval;
        var button = document.getElementById('refreshInput');
        if(button == null)
        {
            alert('Refresh button not found! Salesforce may have changed the button ID or it may not be visible for some reason. Please make sure you are on a dashboard page with the Refresh button visible');
            return false;
        }

        document.addEventListener("DOMSubtreeModified", function(event) {
            if(event.target.id == "componentContentArea")
            {
                startTimer();
            }
        }, true);

        function countDown(){
            counter--;
            button.value = "Refresh ("+formatTime(counter)+")";
            if(counter == 0)
            {
                button.click();
                counter = refreshInterval;
                window.clearInterval(timerInterval);            
                button.value = "Waiting for Refresh";
            }                
        }

        function startTimer()
        {
            window.clearInterval(timerInterval);
            timerInterval = setInterval(countDown, 1000);     
        }    

        function formatTime(seconds)
        {
            var totalSec = seconds;
            var hours = parseInt( totalSec / 3600 ) % 24;
            var minutes = parseInt( totalSec / 60 ) % 60;
            seconds = totalSec % 60;

            var result = (hours < 10 ? "0" + hours : hours) + ":" + (minutes < 10 ? "0" + minutes : minutes) + ":" + (seconds < 10 ? "0" + seconds : seconds);            

            return result;
        }
        startTimer(); 
    }
)();

One door closes, another one opens

Hey everyone,

As some of you may be aware, I have recently accepted a new position as senior developer at RedKite technologies. They are a consulting firm specializing in implementation and custom development of Salesforce, mostly for financial organizations (but not exclusively). While I am extremely excited for this new opportunity to work with an awesome team and continue to grow my skills, it does mean that I will no longer be able to do freelance work (it could be taken as a conflict of interest kind of thing, you understand). So as of now, I am sorry but I have to decline any offers for freelance work, at least until the smoke clears and some details are figured out.

The good news is that if you would like to leverage my skills and those of some other very talented developers working with me, you can! RedKite is happy to evaluate any Salesforce project, and if you ask you may be able to get me tasked on your project. RedKite has an excellent track record, is growing very rapidly, and you are sure to be happy with the results of any project you engage us on. I wouldn’t be working there if it wasn’t made up of some of the most talented and passionate people in the industry. I am also still available to answer questions, give advice, etc. I just don’t think I can accept money or undertake entire projects on the side at this point. Thanks for understanding, and I hope we can still do business, if perhaps through a slightly more official channel 😛

-Dan/Kenji


Publicly Hosted Apex REST Class bug (maybe?)

I seem to have run across an odd bug. Custom Apex REST classes hosted via a Salesforce site will not work in a production version. It does work in sandbox and developer versions, so I am fairly convinced the approach is valid and my config is correct. This is a sample class.

@RestResource(urlMapping='/testPublicRest')
global class testPublicRest {

    @HttpGet
    global static String doGet() {
        String name = RestContext.request.params.get('name');
        return 'Hello '+name;
    }

    @isTest
    global static void testRespondentPortal()
    {
        // set up the request and response objects
        System.RestContext.request = new RestRequest();
        System.RestContext.response = new RestResponse();

        //set the URI and params for the request
        RestContext.request.requestURI = '/testservice';
        RestContext.request.params.put('name','test');
        //send the request
        testPublicRest.doGet();
    }
}

Sandbox version sans namespace – Works
https://fpitesters.testbed.cs7.force.com/webServices/services/apexrest/testPublicRest?name=dan

Developer version with namespace – Works
https://xerointeractive-developer-edition.na9.force.com/partyForce/services/apexrest/XeroInteractive/testPublicRest?name=dan

Production version sans namespace – Fails
https://fpitesters.secure.force.com/webServices/services/apexrest/testPublicRest?name=dan

It fails saying that it cannot find a resource with that name.

<Errors>
<Error>
<errorCode>NOT_FOUND</errorCode>
<message>Could not find a match for URL /testPublicRest</message>
</Error>
</Errors>

If you attempt to access it via the non secure domain you will get an HTTPS required message, so the resource is at least being located. It throws this error, which makes sense.

<Errors>
<Error>
<errorCode>UNSUPPORTED_CLIENT</errorCode>
<message>HTTPS Required</message>
</Error>
</Errors>

Seems like I found a bug, maybe? To test yourself, just copy and paste the above code and host it via a Salesforce site. Access it in your sandbox and it should work (remember to access it via https). To get to a REST service just include /services/apexrest/yourService at the end of your site url. Then try deploying it to prod and doing the same. It will most likely fail.

I’d love to hear any feedback/ideas on this, as it’s a fairly critical part of a framework I am developing. Thanks!

Also if you do have any info, make sure to post it on the stack exchange. That’s probably the best place for this kind of thing.
http://salesforce.stackexchange.com/questions/6122/custom-rest-service-https-error

UPDATE: Got it figured out. It was due to a permissions error on the guest account the site was using. Somehow an object for the site’s profile had an impossible permission setup (it had full read/write/modify all on a child object where it did not have read/write/modify all on the parent object (an opportunity)). So fixing the permissions and making sure the service had read/write to all objects and fields it required seems to have fixed this error. If you are getting this, make sure to check your object permissions and that everything the service needs is there, and that you don’t have some kind of weird setup issue like I did.


Building a mobile site on Salesforce site.com, with cool menu to mobile list code!

Mobile. Mobile mobile mobile. Seems like the only word you hear these days when it comes to technology. That, or social. Point being, if you don’t have a mobile site most of the world will figure your company is way behind the times. Problem is, designing mobile websites sucks. So many different devices and resolutions, features and capabilities. It’s worse than regular web design by a long shot when it comes to trying to make a website that works correctly across all browsers/devices/configurations. It can really be a nightmare even for the most skilled designers.

Thankfully jQuery mobile is here to help. While not perfect (it’s definitely still getting some issues worked out) it makes creating mobile websites infinitely more bearable. It takes care of the styling and such for you, creating a nice interface and handling most of the junk you don’t want to deal with as far as writing event handlers, dealing with CSS adjustments, creating the various input widgets, etc. I’ve used it a fair amount and after the initial learning curve I can safely say it’s way better than trying to do it all yourself.

Site.com is another technology offered by Salesforce that is supposed to make building websites easier. It is mostly used for small websites with limited interactivity (seeing as it doesn’t support sessions, and there is no server-side language access aside from a few minimalistic apex connectors). Great for marketing websites, mini sites, etc. It makes it very easy for your non-technical team to create and edit content. It has a great WYSIWYG editor, various automated tools (such as a navigation menu, which we’ll talk about shortly) and some other goodies that generally make it a fairly competent CMS.

So here we are. We want to build a mobile site. We want to use site.com to do it. We would also like our mobile site to take full advantage of the features of site.com including the menu generator/site map. The idea also here is that the same content can be used for both our mobile site and our regular site. Really hoping to utilize that ‘write once, run everywhere’ mentality that I love so much (I don’t care what all the native platform fans say, it can be done!). We’ll need to architect our site in a way that allows for this. That means keeping in mind that our content could be loaded on any kind of device. We’ll also want to try and keep things light weight for our mobile friends lest their little smart phones choke trying to handle our content. I’ve come up with a solution for this, which I like pretty well and I’ll outline below but I’m not claiming it’s the best way by any means.

There are two basic approaches I’ve used for building things on site.com:

One is to have a single page which contains all the headers, footers, standard elements, etc. (I’ll call this the framework page). Then use a bit of javascript to transform all the links into ajax links which load the content from the requested page into a div within the same page. By transforming the links using javascript you ensure that non-javascript browsers don’t try to use ajax to load content, and your marketing team doesn’t have to worry about trying to write any javascript either. It’s also good for SEO, since the crawlers will load your page and be able to follow the links because they won’t run javascript. Just select all links on the page with a certain class and enhance them (code for this below). When the content is loaded, we run that same script again to enhance all those new links, and the cycle continues.

This is nice because the ajax loading is faster and looks slick. Also if you are willing to have a javascript-only site (as in you aren’t interested in graceful degradation for non-javascript clients, of which there really aren’t any) then your content pages can contain JUST the relevant content. As in no headers, footers, CSS, anything like that. You just grab the page, inject it into your framework page’s content area, and you’re done. The problem with this approach is that since the detail pages have no styling, if they are directly linked to the user will just see plain text and images on a white page. This is bad news unless you have some kind of auto-redirect script to get users back to the index page if they have loaded just a detail page. You’ll also have to worry about bookmarking, direct linking, the browser’s back button, and other such things. I have a post detailing how to deal with these located at https://iwritecrappycode.wordpress.com/2012/07/06/experimental-progressively-enhance-links-for-ajax-with-bookmarking-and-back-button-support/ with the basic idea being that your ajax links cause a hash change in the URL. That hash change results in a unique URL that users can bookmark and share. Your site just needs to check the URL for any after-hash content on page load and try to load the specified page into the content frame instead of whatever your default page is.

Option two is a little safer. Every page has all the headers and footers, and again you have a special div where all the real content goes. Again you use javascript to ajax-enhance the links. When a page is requested you ajax load it, grab the content from just that div (on the fetched page) and inject it. That way if javascript isn’t enabled your link just functions like a regular link, taking the user to that page. You don’t have to worry about the user accidentally getting to a plain detail page without the headers, footers and styles because every page has them. If javascript is enabled the link is enhanced and turns into an ajax loading link. The requested page gets fetched via ajax and the relevant content is extracted from the DOM and inserted into your framework page. Not as fast and clean as having just the content on your sub pages, but it’s a bit safer. I’m using this approach for now while I decide if I want to use the other.
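jQuery makes the fetch-and-extract part of option two almost free. A minimal sketch (the ‘ajaxLink’ class and ‘contentArea’ id are hypothetical names, not from my actual site):

//intercept clicks on enhanced links. jQuery's load() with a selector fetches
//the whole page but injects only the matching fragment from it
$('a.ajaxLink').bind('click', function (event) {
    event.preventDefault();
    //fetch the full page (headers, footers and all), but only pull the
    //contents of its #contentArea div into our current page
    $('#contentArea').load($(this).attr('href') + ' #contentArea > *');
});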

 

Okay, so we’ve come this far. You’ve decided on a site architecture, created some content and are ready to make it mobile. For example, mine looks like this.

[Screenshot: my site’s menu structure in Site.com]

You can see I’ve got my main menu system with a few sub categories. Also, I have the directions sub menu minimized to make the image smaller, but there are several entries in there as well.

First thing is you’ll have to set up your jQuery mobile home page. Just find a basic tutorial online that explains how to get it up and running, there’s not much to it. A special meta tag, include the CSS and JS, create a home page div on your page and you are up and running. jQuery mobile actually has this fairly interesting idea that all content will be contained within a single page, which makes it more ‘app like’. By default it uses ajax requests to load content and just shows and hides the stuff relevant to what the user wants to see. So as a user clicks a link to load content, an ajax request fetches it, a new ‘page’ is created on your template and the user’s view is shifted to it. But how do we build that navigation? We want it to be dynamic so when someone from marketing creates a new page, it just shows up on your site. You also probably want to use the built-in jQuery mobile list view for it, since this is a simple site and list views provide easy navigation on mobile sites.

Site.com as we know does include an automatic menu generator, but it just generates a lame unordered list or ugly dropdown system. How can we use that to build our jQuery mobile list view? Using their built-in list maker, from the content above, it’s going to generate markup that looks like this.

[Screenshot: the markup generated by the Site.com menu maker]

You can see it creates a div, inside of which is an unordered list. Each sub menu is another unordered list inside of a list item. Seems like we could probably use a little jQuery magic to spruce this list up and turn it into a jQuery mobile list. For those who just want the functioning JS, just copy and paste this into a JS file, upload it to site.com and include it in your mobile index page. Make sure your mobile menu has a css class called ‘ajaxMenu’. That is how jQuery finds the menu to enhance.

$(document).ready(function () {
    console.log('Document ready fired');
    sfMenuTojQueryList();
    markupLinks();

    //re-run the link enhancement every time jQuery mobile creates a new page
    //(bind rather than live here, since we are binding directly to the document)
    $( document ).bind( 'pagecreate',function(event){
        markupLinks();
        setFooters(); //setFooters() is defined elsewhere in the site's JS, not shown in this post
    });

});

function sfMenuTojQueryList()
{
    //Special stuff for the mobile site. Enhance the navigation menu into a list view, and turn its links into ajax links
    $('.ajaxMenu a[href]').each(function(){

        if($(this).parent().children().length == 1)
        {
            $(this).addClass('ajaxLink');
        }
    });
    $('.ajaxMenu > ul').listview({
        create: function(event, ui) { 

        }
    });    
}

function markupLinks() {

    //rewrite each ajax link's href from '/pageName' to '#pageName' so a click
    //becomes a hash change instead of a full page load
    $('.ajaxLink').each(function (index) {
        if($(this).attr('href') != null)
        {
            $(this).attr('href', $(this).attr('href').replace('/', '#'));
        }
    });

    //intercept clicks on enhanced links and load the content via ajax instead
    $('.ajaxLink').bind('click', function (event,ui) {
        event.preventDefault();
        loadLink($(this).attr('href'));        
    });
}

function loadLink(pageUrl) {

    console.log('Loading Ajax Content');
    var pageId = 'jQm_page_'+pageUrl.replace(/[^a-zA-Z 0-9]+/g,'');
    pageUrl = decodeURIComponent(pageUrl).replace('#','');

    console.log(pageUrl + ' ' + pageId);
    if($('#'+pageId).length == 0)
    {
        console.log('Creating New Page');
        $.get(pageUrl, function (html) {
            //in this case the content I actually want is held in a div on the loaded page called 'rightText'.
            //If you are just loading all your content you can use $(html).html() instead of $(html).find("#rightText").html().
            $('body').append('<div id="'+pageId+'" data-role="page"><div data-role="header"><h2>'+pageUrl+'</h2></div><div data-role="content">'+$(html).find("#rightText").html()+'</div></div>');                                

            $.mobile.initializePage();

            $.mobile.changePage( '#'+pageId, { transition: "slideup"} );    

        }).error(function () {
            loadLink('pageNotFound');
        });
    }
    else
    {
        console.log('Changing to Existing Page #'+pageId);
        $.mobile.changePage( '#'+pageId, { transition: "slideup"} );    
    }

}

So here is what happens. When the page loads, it’s going to find your menu and call the jQuery mobile listview enhancement on it, making it into a nifty list view that can be clicked. It looks like this now.

[Screenshot: the menu after the jQuery mobile listview enhancement]
Each of those items can be clicked, at which time, if it has a sub menu, the contents of that sub menu will be displayed. If the item is actually a link, the content is loaded via ajax and a new jQuery mobile page is created and injected into your main page, which it then changes to. If it finds that the page has already been loaded once, instead of fetching it again, it just changes to that page. It’s a pretty slick system that will allow a very fast loading website since the content is loaded on the fly and pulled completely via ajax.

You now have a mobile version of your website with a dynamic hierarchy enabled menu system that can be totally managed by your marketing team. Cool eh?


New Cloudspokes Entry – Salesforce Global Search

Hey everyone,

I’m just wrapping up my entry for the CloudSpokes Salesforce global search utility challenge, codename Sherlock. I’m fairly pleased with how it turned out and figured I’d post the video for all to enjoy. I can’t post the code since this is for a challenge, but feel free to ask any questions about it.

http://www.screencast.com/t/GrYnfBlJFM

Features:

  • Fast operation using JS remoting. No page reloads; ajax style searching.
  • Search any object; not limited to a single object type per search.
  • Smart formatting adjusts the search result display based on the data available about the object.
  • Easy customization using callbacks and jQuery binds.
  • Flexible; can be modified to suit many different requirements.
  • Easy to use jQuery plugin based installation.
  • Efficient; each search consumes only one database query.
  • Reliable return type makes processing search results easy.
  • CSS based design makes styling results quick and easy.
  • Structured, namespaced code means a smaller memory footprint and less chance of collisions.
  • Deployable package makes for easy install.
  • Over 90% code coverage with asserts provides assurance of functionality.

You can check out the full documentation and feature list here

https://docs.google.com/document/d/17-SUja_SO_Enhh8LrjzDMB7VIX6S87TmPr9yBW5z-yo/edit

I don’t know why exactly they wanted such a thing, but it was certainly fun to write!


Salesforce Site.com Design Principles for a Web Developer.

So Site.com has been out a while now (and it’s awesome), and while only a handful of companies are using it for production websites, many more are playing with it or at least considering it. It’s a really cool product with a lot of power, and it’s kind of a bummer it hasn’t been more widely adopted yet. More adoption would likely mean more features, more developer articles and other cool stuff being built using it. However, there are probably reasons it’s been a little sluggish, and if I had to guess I’d say part of the slow uptake is due to the fact that people:

A) Don’t understand what it does
B) Don’t trust that it can do what it says it will
C) Don’t know how to approach it
D) Insane cost

I know all 4 reasons slowed us up a little (even though we ended up launching a production website before site.com was even GA, so I guess we couldn’t have been that slow). However we did end up figuring it out, using it, and are really enjoying it. Though it wasn’t all smooth sailing, and the hardest parts weren’t technical, they were design related. So I’d like to share with you a few of the lessons I learned and the stumbling blocks I ran into as a developer trying to write a system marketing could use.

First up, what is Site.com? Site.com is a content management system with a powerful WYSIWYG editor built in. It allows non technical people to create and manage small websites that have limited amounts of interactivity (I mean it doesn’t write javascript for you of course, the menu creator is a bit limited, though some CSS wizardry can make it pretty slick, and there is no native apex access. Maybe in the future though). It uses a template system where you can create templates with defined editable areas which pages can be cloned from. The idea being that your website has a few core layouts (maybe like a landing/splash page, a home page, and some detail pages). Those templates are probably designed by a programmer/designer type with some development background, and then the pages cloned from the template should be easily modifiable by your marketing team, or any other non technical people. When changes are made they stay in your sandbox until they are published, similar to doing a commit/push in programmer terms.

This idea of templates is a big deal. For myself as an ‘old fashioned’ developer I had always used large amounts of CSS to control the look and feel of my pages. Of course the idea there being that one simple change in your CSS could easily propagate to all the linked pages. It ensured a consistent look and feel, and also reduced page weight. So this is how I first approached using site.com. I prototyped my pages on my local machine by following the PSD given to me by our designer. I’d write up a functional example of the PSD using just pure HTML and CSS. After the prototype was as pixel perfect as I cared to get it, I’d begin ‘translating’ it into Site.com. Divs became panels, spans became content areas and basically the result would be perfect, but it sucked.

Why did it suck? Well remember, doing things the old way meant that no style information was contained in the page itself. It was all in the CSS. I usually would even set up images as CSS divs with the background-image property. Now say marketing wants to change that image (as they always do); well guess what, thanks to the way I had things built they’d have to go dig in the css, find the right ID or class and modify the hard coded image path. That’s terrible! Or what if they want to change the padding or margin of a div/panel? Back to the CSS. What was the saving grace of web developers in the past had become the bane of marketing now. Not only was it difficult for them to modify, there was also the serious risk of them breaking something else. CSS is fragile stuff and a fix in one place can break an overlapping element somewhere else! So it seems we are in a catch-22 here. We can’t use much CSS, or else marketing can’t really manage the site very well. We can’t just not use it, or else the site will become disjointed and have the same issues websites did before CSS. Or so it seemed.

Remember those templates I mentioned? For a long time I underestimated their usefulness. I thought they were a simple toy meant for those ‘non developer types’ so I essentially had been dismissing them. How wrong I was. If you change your thinking and look at a template as a layout structure with embedded CSS, all of a sudden it all becomes easy. Build your page as a template and go ahead and put in the images and divs. Set up as much as you like within the template and make the areas editable that you want marketing to be able to change. Use CSS sparingly to provide easily reusable styles for headers, subscripts, body text and such. Then create pages from your templates and let marketing go nuts. This way you retain some of the benefits of CSS by styling some of the really shared elements (namely text elements, and perhaps some of the container divs and very basic layout structure of your site) but still retain enough flexibility for other users to manage. Here, let’s look at an example.

Say I wanted a background image for my banner. It’s going to have a background image, some padding and a bottom margin. As a developer who is trying to follow general best practice for regular websites, I might opt for something like this.

<style>
#actionBanner
{
    padding-top:20px;
    padding-left:10px;
    width:290px;
    height:281px;
    background-image:url(actionBanner.png);
    margin-bottom:19px;
    background-repeat:no-repeat;
}
</style>
<div id="actionBanner"></div>

Looks nice right? I mean the HTML has no styling attached to it, it’s only structural as it should be. The CSS is clean, easy to read. But now think of it in terms of site.com. If your users actually want to use an image that isn’t actionBanner.png for the background on only one page, they are going to have to go dig in the CSS file to either create a new class, or change the existing one. Changing the existing one would change it for any page that uses that banner, which in this case they don’t want. So now you are back to modifying CSS and have totally defeated the point of using site.com (which in my eyes is to get marketing off your back when it comes to website changes). So what can we do? Using a site.com template, we can simply insert the image in the template. Set the CSS properties on the image itself, and save the template. Now pages that are cloned from that template have the provided image by default, but the users have the ability to override them since it’s marked as editable. They can’t screw up the template so you always have that as a backup, and each page can be customized by simply modifying the properties of the image element. It’s genius!

Now you might be saying, wait, this is just like going back to the old days. You have embedded styles in your pages, this is terrible! Not so fast. Remember the reason we hated embedded styles in the first place was because they were unmanageable. You’d have to modify each individual page if you wanted a change. That is not the case with templates. When you have a non editable area in a template, any change to it is immediately propagated down to any page cloned from it. Like I said, templates can be viewed as a hybrid HTML/CSS definition. Modifying the inline CSS on your template is the same as modifying a CSS class.

So in short, these are basically the takeaways.

1) Prototype your whole website on your local machine in regular HTML/CSS if you can. This will help you figure out what things really will be shared among all pages (like font faces, basic layout elements, paragraph spacing, etc). Also, you’ll have an easily portable copy of your website in case you decide to migrate to another hosting platform. When creating this prototype, remember it will be going into site.com though, so don’t be afraid to use IMG tags, or even a few font tags if they are one off styles.

2) Converting your HTML pages into site.com pages is pretty easy if you’ve done it the right way. Divs will become panels. Spans and P tags will generally become content areas. Try to avoid having editable content as a direct child of a panel/div, as you’ll lose some control over styling: your marketing users would be able to adjust the panel/div itself if they are able to adjust its content, so they could end up moving it around or whatever. Just make a panel/div that is not editable, then inside of it create your content area that is editable.

3) Use CSS sparingly. Embed anything that isn’t totally globally shared within the templates. Otherwise you’re going to have marketing users digging around in your CSS and likely messing it up. Avoid this at all costs.

4) Come up with a reliable naming scheme for your resources, especially images. Site.com is flat, there are no folders. If you are used to using folders to organize your resources, you might be in for a bit of a rough ride, or at least a messy architecture. Before you write a single line of code, decide how everything will be named. Images, SWF files, javascript includes, custom fonts, everything exists in the same “folder” so account for that.

5) Last but not least, work with site.com not against it. I know us developer types frequently have a hard time trusting tools to work right and we often think we know better. In this case, trying to do it your way will likely cause you more work. Use the features available. Understand how site.com ‘wants’ you to work and then follow that design pattern.

I hope this helps some. I’ll probably be doing a few more articles on site.com including how to do ajax links and maybe a few other things. Till next time!


Node.js, Socket.io, and Force.com canvas and you

So I got back from Dreamforce a week ago, and my head hasn’t stopped spinning. So many cool technologies and possibilities, I’ve been coding all week playing with all this new stuff (canvas, node.js, socket.io, nforce, heroku, streaming api, etc). Cloudspokes conveniently also had a challenge asking us to write a force.com canvas application that used a node.js back end. I wanted to take this chance to see if I could put what I had learned into practice. Turns out, it’s not too hard once you get your dev environment set up, get things actually uploading and all the ‘paper work’ done. I wanted to leverage the Salesforce.com streaming API, Node.Js, and Socket.io to build a real time data streaming app. I also wanted to use Force.com canvas to get rid of having to worry about authentication (honestly the best part about canvas, by a wide margin). You can see the end result here:

http://www.screencast.com/t/Qfen94pl (Note the animations don’t show too well in the video due to framerate issues with the video capture software).

You can also grab my source project from here

Demo Source

Getting Set Up

First off, huge word of warning. All this process is just what I’ve found to work from trial and error and reading a ton of shit. I have no idea if this is the recommended process or even a good way to do things. It did/does however work. This was my first ever node.js application, as well as my first time using canvas and only my second time using Heroku. So ya know, definitely not pro level here but it’s at least functional. Also the actual idea for this application was inspired by Kevin O’Hara (@kevinohara80) and Shoby Abdi (@shobyabdi) from their streaming API session in dreamforce. They are also the authors of the kick ass nforce library, without which this app would not be possible, so thanks guys!

So how can you get started doing the same? Well first of course get yourself setup with Heroku. Heroku is where we are going to store our project, it’s a free hosting platform where you can host node.js, python and java applications. So if you don’t have a Heroku account, go get one, it’s free.

You’ll also want to download the Heroku toolbelt. This is going to get us our tools for dealing with Heroku apps (heroku client), as well as testing our stuff locally (foreman), and working with git. You can grab it here. https://toolbelt.heroku.com/. On that page it walks you through creating your project. For a more in depth guide, check out https://devcenter.heroku.com/articles/nodejs. Get a node.js project created and move onto the next step.

So now I assume you have a basic node.js project on Heroku. Now to actually make it do stuff, we’ll need to install some libraries using NPM. Open a command prompt and navigate to your local project folder. Install express (npm install express), socket.io (npm install socket.io) and nforce (npm install nforce). This should add all the required libraries to your project and modify the package.json file that tells Heroku the shit it needs to include.

You’ll also need a winter 13 enabled salesforce org to start building canvas apps. So go sign up for one here (https://www.salesforce.com/form/signup/prerelease-winter13.jsp) and get signed up. Depending when you are reading this you may not need a prerelease org, winter 13 may just be standard issue. Whatever the case, you need at least winter 13 to create canvas apps. As soon as you get your org, you’ll also probably want to create a namespace. Only orgs with namespaces can publish canvas apps, which you may want to do later. Also having a namespace is just a good idea, so navigate to the setup->develop->packages section, and register one for yourself.

In your org, you’ll need to configure your push topic. This is the query that will provide the live streaming data to your application. Open a console or execute anonymous window, and run this:

PushTopic pushTopic = new PushTopic();
pushTopic.ApiVersion = 23.0;
pushTopic.Name = 'NewContacts';
pushTopic.Query = 'Select firstname, lastname, email, id from contact';
insert pushTopic;
System.debug('Created new PushTopic: '+ pushTopic.Id); 

This will create a live streaming push topic in your org of all the new incoming contacts. You could change the query to whatever you want of course but for the purpose of this example, lets keep it simple.

Next, you’ll want to configure your canvas application. In your org, go to setup->create->apps. There should be a section called connected apps. Create a new one. Give it all the information for your Heroku hosted application. Permissions and callbacks here are a bit unneeded (since canvas will be taking care of the auth for us via a signed request) but should be set properly anyway. The callback url can be just the url of your application on Heroku. Remember only https is accepted here, but that’s okay because Heroku supports https without you having to do anything. Pretty sweet. Set your canvas app url to the url of your Heroku application and set the access method to post signed request. That means when your app is called by canvas, it’s going to be as a post request, and in the post body is going to be an encoded signed request that contains an oAuth key we can use to make calls on behalf of the user. Save your canvas application.

The actual code (there isn’t much of it)
So we have everything configured now, but no real code. Our app exists, but it doesn’t do shit. Let’s make it do something cool. Open up your node server file (it’s probably called something like web.js, or maybe app.js if you followed the guide above. It’s going to be whatever file is specified in the Procfile in your project). Paste in this code. You’ll need to modify the clientId and clientSecret values from your canvas application. They are the consumer key and consumer secret respectively. I honestly don’t know if you’d need to provide your client secret here since the app is already getting passed a valid oAuth token, but whatever, it can’t hurt.

var express = require('express');
var nforce = require('nforce');
var app = express.createServer() , io = require('socket.io').listen(app);

var port = process.env.PORT || 3000;
//configure static content route blah
app.configure(function(){
  app.use(express.methodOverride());
  app.use(express.bodyParser());
  app.use(express.static(__dirname + '/public'));
  app.use(express.errorHandler({
    dumpExceptions: true, 
    showStack: true
  }));
  app.use(app.router);
});

app.listen(port, function() {
  console.log('Listening on ' + port);
});

io.configure(function () { 
  io.set("transports", ["xhr-polling"]); 
  io.set("polling duration", 10); 
});

var oauth;

var org = nforce.createConnection({
      clientId: 'YOUR CANVAS APPLICATION CONSUMER KEY',
      clientSecret: 'YOUR CANVAS APPLICATION CLIENT SECRET',
      redirectUri: 'http://localhost:' + port + '/oauth/_callback',
      apiVersion: 'v24.0',  // optional, defaults to v24.0
      environment: 'production'  // optional, sandbox or production, production default
});

//on post to the base url of our application
app.post('/', function(request, response){
    //get at the signed_request field in the post body
    var reqBody = request.body.signed_request;   

    //split the request body at any encountered period (the data has two sections, separated by a .)
    var requestSegments = reqBody.split('.'); 

    //the second part of the request segment is base64 encoded json. So decode it, and parse it to JSON
    //to get a javascript object with all the oAuth and user info we need. It actually contains a lot of 
    //data so feel free to do a console.log here and check out what's in it. Remember console.log statments 
    //in node run server side, so you'll need to check the server logs to see it, most likely using the eclipse plugin.   
    var requestContext = JSON.parse(new Buffer(requestSegments[1], 'base64').toString('ascii'));
    
    //create an object with the passed in oAuth data for nForce to use later to subscribe to the push topic
    oauth = new Object();
    oauth.access_token = requestContext.oauthToken;
    oauth.instance_url = requestContext.instanceUrl;
    
    //send the index file down to the client
    response.sendfile('index.html');

});


//when a new socket.io connection gets established
io.sockets.on('connection', function (socket) {
      
    try
    {
      //create connection to the NewContacts push topic.
      var str = org.stream('NewContacts', oauth);
    
      //on connection, log it.
      str.on('connect', function(){
        console.log('connected to pushtopic');
      });
    
      str.on('error', function(error) {
         //emit requires an event name as its first argument; send errors
         //down to the client under their own event
         socket.emit('pushError', error);
      });
    
      //as soon as our query has new data, emit it to any connected client using socket.emit.
      str.on('data', function(data) {
         socket.emit('news', data);
      });
    }
    catch(ex)
    {
        console.log(ex);
    }
    
});

Now you’ll also need the index.html file that the server will send to the client when it connects (as specified by the response.sendfile(‘index.html’); line). Create a file called index.html, and put this in there.

<!DOCTYPE html>
<html>
    <head>
        <title>New Contacts</title>
        <meta name="apple-mobile-web-app-capable" content="yes" />
        <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent" />
        
        <link href='http://fonts.googleapis.com/css?family=Lato:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
        
        <link rel="stylesheet" href="/reveal/css/main.css">
        <link rel="stylesheet" href="/reveal/css/theme/default.css" id="theme">    
        
        <script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
    
        <script src="/socket.io/socket.io.js"></script>
    
        <script>
          $.noConflict();    
          var socket = io.connect();

          socket.on('news', function (data) {
            jQuery('.slides').append('<section><h2><a href="https://na2.salesforce.com/'+data.sobject.Id+'">'+data.sobject.FirstName+' '+data.sobject.LastName+'</a></h2><br>'+data.sobject.Email+'<br/></section>');
            
            Reveal.navigateNext();
          });



        </script>    
    </head>
    <body>

        <div class="state-background"></div>
        
        <div class="reveal">
            <div class="slides"> 
                <section>New Contacts Live Feed</section>
            </div>

            <!-- The navigational controls UI -->
            <aside class="controls">
                <a class="left" href="#">◄</a>
                <a class="right" href="#">►</a>
                <a class="up" href="#">▲</a>
                <a class="down" href="#">▼</a>
            </aside>

            <!-- Presentation progress bar -->
            <div class="progress"><span></span></div>                
        </div>


            
            <script src="/reveal/lib/js/head.min.js"></script>
            <script src="/reveal/js/reveal.min.js"></script>    
            <script>
                
                // Full list of configuration options available here:
                // https://github.com/hakimel/reveal.js#configuration
                Reveal.initialize({
                    controls: true,
                    progress: true,
                    history: true,
                    mouseWheel: true,
                    rollingLinks: true,
                    overview: true,
                    keyboard: true,
                    theme: Reveal.getQueryHash().theme || 'default', // available themes are in /css/theme
                    transition: Reveal.getQueryHash().transition || 'cube', // default/cube/page/concave/linear(2d)
    
                });

        
            
                    
            </script>            
    </body>
</html>

We are also going to need the CSS reveal framework to create the awesome slideshow. Grab it https://github.com/hakimel/reveal.js. In your Heroku project create a folder called public. In there create a folder called reveal. In that folder dump the css, js, lib and plugin folders from reveal. So it should be like root_folder->public->reveal->js->reveal.js for example. There is probably a more ‘git’ way to include the reveal library, but I don’t know what it is. So for now, moving folders around should work.

Now use git to push this all up to Heroku. I’d really highly recommend using the Heroku plugin for eclipse to make life easier. There is an install guide for it here https://devcenter.heroku.com/articles/getting-started-with-heroku-eclipse. However you do it, either from eclipse or command line, you gotta push your project up to Heroku. If you are using command line, I think it’s something like “git add .” then “git commit” then “git push heroku master” or something like that. Just use the damn eclipse plugin honestly (right click on your project and click team->commit, then right click on the root of your project and click team->push upstream).

If your app pushes successfully and doesn’t crash, it should run when called from Canvas now. Canvas will call your Heroku application using a post request. The post request contains the signed request data, including an oAuth token. We grab that oAuth token and store it in our node.js app for making subsequent api calls. Node.js returns the index.html file to the client. The client uses socket.io to connect back to the server. The server has a handler that says upon a new socket.io connection, create a connection to the push topic NewContacts in salesforce, using the oAuth token we got before. When a new event comes from that connection, use socket.io to push it down to the client. The client handler says when a new socket.io event happens, create a new slide, and change to that slide. That’s it! It’s a ton of moving parts and about a million integrations, but very little actual code.

Enjoy and have fun!


Displaying and Caching Salesforce Attachment Images in Sites

This time around we are going to be talking about images: how to store them, how to query for them, display them and cache them in Salesforce, using javascript remoting. We’ll be building a simple application using jQuery, Salesforce and Apex to query for attachments, display them, and cache them to reduce load times and overhead.

Abstract:
First off, I’m having a bit of a hard time organizing all my thoughts on this topic. It’s kind of big, so please forgive me if I skip around a bit. Feel free to ask for clarifications in the comments. So let’s say you are building an application to be hosted on Salesforce. Your application is going to need to be publicly accessible (so you are going to be using sites) and the application is going to need to show images that may change frequently and hence would be configured by some non developer types. Your application is going to show all the products you have available, along with pictures of said products.

There are of course many ways you can go about storing your images and relating them to your products, but the most straightforward option is to use the notes and attachments feature. That would allow users to easily manage the pictures related to each product without having to go to some central picture repository, or building any additional relationships between objects or URLs. The problem of course is that attachments don’t have a publicly accessible URL. You can view them from within Salesforce but you don’t have any way to display them on a site. This could be an issue. Not so fast!

Images as Data
You know those images you uploaded to Salesforce via the attachments feature exist somewhere on Salesforce servers. We also know that Salesforce hates file storage and loves databases. It should come as little surprise that the attachments are actually stored in a table as blob data. That data can be queried for just like any other data. Another little known thing is that in HTML, while the img tag normally has its src attribute set to a URL, it can in fact accept base64 encoded image data if you specify the data type (src="data:image/png;base64,..."). Perhaps we can put all this information together into something useful. Yes, yes we can.
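To make that concrete, here is the general shape of it (the element id and the data string here are hypothetical):

//hypothetical example: point an img tag at raw base64 data instead of a URL
var base64Data = 'iVBORw0KGgoAAAANS...'; //base64 encoded image bytes from the server
document.getElementById('productImage').src = 'data:image/png;base64,' + base64Data;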

Getting The Image Data
So go ahead and get a visualforce page and controller set up. I’m calling mine productList and productListController respectively. Let’s get the code for our controller in place. Copy and paste this.

global class productListController
{

    //get all the products in the org along with their attachments.
    @remoteAction
    global static remoteObject getProducts()
    {
        remoteObject returnObj = new remoteObject();

        try
        {

            list<Product2> products = [select 
                                                Name,
                                                ProductCode,
                                               Description,
                                               Family,
                                               isActive,
                                               (SELECT Attachment.Name, Attachment.Id FROM Product2.Attachments)
                                               from product2
                                               where isActive = true];
            returnObj.sObjects = products;
        }
        catch(Exception e)
        {
            returnObj.success = false;
            returnObj.message = 'Error getting products';
            returnObj.data = 'Error Type: ' + e.getTypeName() + ' ' + e.getCause() + ' ' + ' on line: ' +e.getLineNumber(); 
        }

        return returnObj;       
    }

    //gets a single attachment (photo) by id. The data is returned as a base64 string that can be plugged into an html img tag to display the image.
    @RemoteAction
    global static remoteObject getAttachment(id attachmentId)
    {   
        remoteObject returnObj = new remoteObject();
        try
        {
            list<Attachment> docs = [select id, body from Attachment where id = :attachmentId limit 1]; 
            if(!docs.isEmpty())
            {
                returnObj.data = EncodingUtil.base64Encode(docs[0].body); 
            }    
        }
        catch(exception e)
        {
            returnObj.success = false;
            returnObj.message = e.getMessage();
            returnObj.data = 'Error Type: ' + e.getTypeName() + ' ' + e.getCause() + ' ' + ' on line: ' +e.getLineNumber();        
        } 
        return returnObj;    
    }   

    global class remoteObject
    {
        public boolean success = true;
        public string message = 'operation successful';
        public string data = null;
        public list<sObject> sObjects = new list<sObject>();
    }    
}

As you can see it’s a pretty simple little controller. We have one method that gets a listing of all the products and the Id’s of the associated attachments using a subquery. That prevents us from having to run another query to get the attachment Id’s. The second function takes a specific attachment id and will return an object with the base64 encoded version of the image. That’s what I was talking about earlier. You can query for an attachment and get its raw binary/blob data. Then you can base64 encode it for transfer from the controller back to the requesting page. With that you can get the image data out of Salesforce and into your public application.

This does introduce another problem though: caching. Normally images are cached by the browser when they are loaded. It uses the URL to create a cached version of the image so the next time your browser needs it, it can just pull it off the hard drive instead of across the internet. The problem with base64 images is that they can’t really be cached: the image data is inlined right in the response, so there is no separate URL for the browser to cache, and by the time you could find it in a cache you’d have already loaded the whole thing, totally defeating the entire point of the cache. How can we fix this? Caching is too important to just skip in most applications, but yet we need to use base64 encoded images in our app.

Local Storage
With HTML5 we now have something called local storage. It lets us store just about anything we want on the user’s computer for use at a later time. Basically cookies on steroids. Also, whereas cookies had to be small little text files, local storage gives us much more flexibility with size. We can leverage this to build our own cache.
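The API itself is dead simple. This is basically all of it that we’ll need (the key and value here are hypothetical):

var base64ImageData = '...'; //base64 image data, e.g. from a remoting call

//store a string value under a key. It persists across page loads
localStorage.setItem('someAttachmentId', base64ImageData);

//read it back later. Returns null if the key doesn't exist
var cached = localStorage.getItem('someAttachmentId');

//and remove it if you ever need to evict something
localStorage.removeItem('someAttachmentId');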

Here is the game plan. We’ll run our query to find all the products. We’ll loop over each product we find and create an img tag that contains the ID of the image/attachment that needs to go there. After that, we’ll loop over each image tag and populate it with the image. We’ll check to see if we have a local storage item with the ID of the image/attachment. If so, we’ll load that data from the local cache. If not, we’ll make a remoting call to our Apex getAttachment method, cache the results with local storage, then load the data into the img tag. Here is what that looks like.

<apex:page controller="productListController">
    <head>
    <title>Product List</title>
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.js"></script>

    <script>
        $(document).ready(function() {
            getProducts(function(){
                $('.cacheable').each(function(index){
                    var img = $(this);
                     getImage($(this).attr('id'),function(imageId,imageData){
                         $(img).attr('src', 'data:image/png;base64,'+imageData);
                     });
                });               
            });            
        });  

        function getProducts(callback)
        {
                    Visualforce.remoting.Manager.invokeAction(
                        '{!$RemoteAction.productListController.getProducts}',
                        function(result, event)
                        {
                            if (event.status && (result.success == true || result.success == 'true')) 
                            {    
                               var html='';
                               for(var i = 0; i<result.sObjects.length;i++)
                               {
                                   var imageId = 'default image id here';
                                   if(result.sObjects[i].hasOwnProperty('Attachments'))
                                   {
                                        imageId = result.sObjects[i].Attachments[0].Id;
                                   }
                                   html += '<li><img class="cacheable"  id="'+imageId+'">'+result.sObjects[i].Name+'</li>';

                               }
                               $('#products').html(html);
                               callback();
                            } 
                            else
                            {
                                $("#responseErrors").html(event.message);
                            }
                        }, 
                        {escape: true});                   
        } 

        function getImage(imageId,callback)
        {
             var imageData;

              if ( localStorage.getItem(imageId))
              {   
                console.log('Getting image from local storage!');
                imageData = localStorage.getItem(imageId);
                callback(imageId,imageData);    
              }
              else 
              {
                   console.log('Getting image remote server!');
                    Visualforce.remoting.Manager.invokeAction(
                        '{!$RemoteAction.productListController.getAttachment}',
                        imageId,
                        function(result, event)
                        {
                            if (event.status && (result.success == true || result.success == 'true')) 
                            {    
                                 imageData = result.data;
                                 localStorage.setItem(imageId,imageData);      
                                 callback(imageId,imageData);    
                            } 
                            else
                            {
                                $("#responseErrors").html(event.message);
                            }
                        }, 
                        {escape: true});                   
              }      
        } 
    </script>
    </head>

    <body>
            <ul  id="products"></ul>
    </body>            
</apex:page>

So if you are familiar with jQuery and callbacks it’s pretty easy to make sense of what’s going on here. Once the DOM loads we are going to call the getProducts function. getProducts is going to use remoting to run the getProducts apex method. It will iterate over the results and create a list item for each product, as well as that empty img tag with the id attribute we talked about earlier. It also assigns the img tag the cacheable class so we can easily iterate over them once we are done. Once the looping and list building is complete, we call the callback function. Since remoting requests are asynchronous we need to use callbacks when we only want to call one function after another has completed. Callbacks are a bit beyond the scope of this article, but just know that if we didn’t use them the $('.cacheable').each() loop would run before the list had finished being populated.

So anyway getProducts finishes running and creating the list. Then comes the loop that uses jQuery to find any element that has the ‘cacheable’ class. For each element it finds, it calls the getImage() function, passing in the Id of that element. getImage is where the caching magic happens. It will check to see if a local storage item exists with the id it gets passed. If so, it calls back with that content; if not, it queries Salesforce for an attachment with that id, creates a local storage element for it, and then again returns that content. The loop takes the returned content and sets the src attribute of the img element with the base64 encoded data and boom! We have an image.

There you have it. Using Salesforce attachments to house images, using Apex and jQuery to query for them and display them, and HTML5 local storage to cache them. Pretty cool eh? I could write more, but I’m tired and I don’t feel like it. Hit me with questions if ya got em.


Extracting Strings with RegEx in Salesforce Apex

Hey everyone. This is just another quick tip type of deal. It’s pretty easy to replace strings with Apex and regular expressions, but it’s a little bit harder to extract a string. In this example I want to extract a project number from an email subject line. The project number will be between { } braces. So how exactly do I go about doing this? Well it looks a bit like this.

string subject = 'this is a test {12312-D} email subject [dfasdfa]';
Pattern p = Pattern.compile('\\{([^}]*)\\}');
Matcher m = p.matcher(subject);
if (m.find()) 
{
   system.debug(m.group(1)); 
}

First I create a string to search. Then I create a matching pattern by using the pattern class and my nifty little regular expression. Remember to use Java style regular expressions! Then I do a match against the subject line. After that I check to see if there was a match, and if so I debug out the first grouping. The 0th grouping contains the match with the brackets, and the 1st contains the text without. So there you have it. Just modify the regular expression to suit your needs and you should be on your way. Have fun!


Apex Captcha with Javascript Remoting and jQuery

So at one time or another, we’ll likely all have to create a public facing form to collect data. We will also find out about 2 seconds afterwards that it is getting spammed to hell. To stop the flood of crap, we have reCaptcha. An awesome little utility that will prevent bots from submitting forms. You already know what captcha is though, that’s probably how you found this post, by googling for apex and captcha. First off, there is already an awesome post on how to do this by Ron Hess (Here), but his approach is a bit complicated and visualforce heavy. Of course being kind of anti visualforce, what with the complexities of properties and all that, I made my own little approach. So here we go.

This is assuming you already signed up with reCaptcha. You can go here and sign up for recaptcha (yes, you can just enter force.com as the domain).
After that, of course, add an entry for google to your remote sites in the admin setup under security. Disable protocol security.
Then create your visualforce page and apex class. I called my class utilities, since this is kind of a re-usable function and I wanted to keep it generic.

Now put this crap in your controller. Also, your controller needs to be global (to use javascript/apex remoting)

@RemoteAction
    global static boolean validCaptcha(string challenge, string response)
    {
      boolean correctResponse = false;
      string secret = 'your recaptcha secret key here. Maybe make this into a custom setting?';
      string publicKey = 'your recaptcha public key here. Maybe make this into a custom setting?';
      string baseUrl = 'http://www.google.com/recaptcha/api/verify'; 

      string body ='privatekey='+ secret +  '&remoteip=' + remoteHost() + '&challenge=' + challenge + '&response=' + response + '&error=incorrect-captcha-sol';
      
      HttpRequest req = new HttpRequest();   
      req.setEndpoint( baseUrl );
      req.setMethod('POST');
      req.setBody ( body);
      try 
      {
        Http http = new Http();
        HttpResponse captchaResponse = http.send(req);
        System.debug('response: '+ captchaResponse);
        System.debug('body: '+ captchaResponse.getBody());
        if ( captchaResponse != null ) 
        {  
            correctResponse = ( captchaResponse.getBody().contains('true') );
        }          
       
      } 
      catch( System.Exception e) 
      {
         System.debug('ERROR: '+ e);
      }                             
      return correctResponse;
    }

    global static string remoteHost() 
    { 
        string ret = '127.0.0.1';
        // also could use x-original-remote-host 
        try
        {
            map<string , string> hdrs = ApexPages.currentPage().getHeaders();
            if ( hdrs.get('x-original-remote-addr') != null)
            {
                ret =  hdrs.get('x-original-remote-addr');
            }
            else if ( hdrs.get('X-Salesforce-SIP') != null)
            {   
                ret =  hdrs.get('X-Salesforce-SIP');
            }
        }
        catch(exception e)
        {
        
        }
        return ret;
    }

Ok, great, now your controller is ready. You just need to pass it the right info and it will tell you if it’s right or wrong. Let’s get a visualforce page set up to do that.

<apex:page controller="utilities" standardStylesheets="false" sidebar="false"  >

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js" />


<script>
$(function() {

    $( "#validateButton" ).click(function(){
        
        validCaptcha(function(valid){
            if(valid)
            {
                $('#validationResultDiv').html('Valid Captcha!');
                
                //Do whatever here now that we know the captcha is good.
            }
            else
            {
                $('#validationResultDiv').html('Invalid Captcha Entered');
            }
           
        });           
    });
});

function validCaptcha(callback)
{
    var challenge = document.getElementById('recaptcha_challenge_field').value;
    var response = document.getElementById('recaptcha_response_field').value;

    utilities.validCaptcha(challenge,response, function(result, event)
    {
        if(event.status)
        {
           callback(result);
        }
    }, {escape:true});
}

</script>

<div id="captchaEnter" title="Form Submission Validation">
    <center>
    <script type="text/javascript" src="https://www.google.com/recaptcha/api/challenge?k=YOUR PUBLIC KEY GOES HERE DONT FORGET IT"></script>
    <noscript>
       https://www.google.com/recaptcha/api/noscript?k=YOUR_PUBLIC_KEY
     </noscript>  
     <div id="validationResultDiv"></div>   
     <button id="validateButton" class="inline">Submit</button>
       
     </center>
</div>


</apex:page>

Boom! Just that easy. Hook up an event handler to the submit button that runs the validCaptcha function. It will get the proper values and send them to the apex class, which sends them to reCaptcha to verify. Once an answer comes back, it is passed into the callback function, which can then run whatever action you require. Don’t forget to replace the placeholder public key in the script line above. Have fun!



Ask Kenji: Cross domain ajax requests?

I was just killing some time, chilling out after karate, when this message popped into my inbox.

Hi Kenji,

I have read some articles about Salesforce in your blog, so I have a question I want to ask you. I think maybe you can give me some advice.

I want to use the jquery.ajax method to invoke an apex class.
The jquery code is written in an HTML page, not a visualforce page. I have a method to get the access_token based on OAuth 2.0.
Then I followed your article (https://iwritecrappycode.wordpress.com/2011/07/08/salesforce-rest-api-musings/) and created an apex class to listen for my request. I used curl to test this class and it was successful.
So I think I can use jquery.ajax to do the same thing.

I posted the same question on the force.com boards; you can see the details there. (http://boards.developerforce.com/t5/Apex-Code-Development/Ues-jquery-ajax-to-invoke-apex-class/td-p/394713)

Do you have experience with this?

Thank you!

A valid question. I feel like I might have touched on it before, but hey no harm in writing about it again. It’s a common situation, and one with probably more than one solution. Below is my approach. Take it or leave it.

First off, as far as I know you can’t invoke a REST resource with pure javascript. The cross domain security issue just doesn’t allow for it. The only way to do cross domain ajax stuff is by tricking the browser into loading the remote resource as if it was a script resource, since those can be loaded from anywhere. This technique in jQuery is called JSONP. The problem with this is that you cannot set headers, include authorization, or anything else you’d do with a more complex http request. It’s a simple GET to the url, and that’s it. REST resources typically require an authorization header to be set, and need to support POST, PATCH and PUT along with just GET. So most REST resources, including the ones you can make in Salesforce, can’t be accessed directly via javascript. If someone can prove me wrong, I love you.
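To see why the trick works, here is JSONP with no jQuery at all, just a script tag (the callback name is hypothetical; the URL is the same style as the example page further down):

//the 'request' is really just a script tag pointed at the remote server
function myHandler(data) {
    //the server wraps its JSON response in myHandler(...), so when the
    //script loads and runs, we end up here with the data
    console.log('got remote data', data);
}

var script = document.createElement('script');
script.src = 'http://yourSite.force.com/jsonp_getData?callback=myHandler';
document.body.appendChild(script);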

So what are we to do? The method that follows is what I’ve been doing when I need a pure javascript solution. It’s not the most elegant, but it works (this method will also get around having to use REST services, or oAuth). First, set up a visualforce page with your apex class as the controller. Wrap the return data in the callback function provided via the jQuery get request, and print the results out. Host the visualforce page on a publicly accessible salesforce site (don’t forget to set permissions on the page and class to allow the public profile user access). jQuery will get the response, pass the data to the inline handler function and you can process the results as you need.

<apex:page showHeader="false" sidebar="false" standardStylesheets="false" contentType="application/x-JavaScript; charset=utf-8" controller="jsonp_controller">{!returnFunction}</apex:page>

Your controller will look something like this

public class jsonp_controller
{
    //the JSONP payload ('callbackName(jsonData);') that the page will render
    public string returnFunction{get;set;}
    
    public jsonp_controller()
    {
        //get the parameters from the get/post request and stash em in a map
        map<string,string> params  = ApexPages.currentPage().getParameters();
        
        //set your data to return here. It should be valid JSON since the
        //client is going to parse it with getJSON
        string returnData = '"blah"';
        
        if(params.containsKey('callback'))
        {
            //wrap the data in the callback function name jQuery passed in
            returnFunction = params.get('callback') + '(' + returnData + ');';
        }
    }
    
    @isTest
    public static void test_jsonp_controller()
    {
        //simulate a request to the page with a callback parameter set
        Test.setCurrentPage(new PageReference('/apex/jsonp_getData'));
        ApexPages.currentPage().getParameters().put('callback', 'myHandler');
        
        jsonp_controller controller = new jsonp_controller();
        system.assertEquals('myHandler("blah");', controller.returnFunction);
    }
}

And finally your page that actually makes the request would look like this

<html>
<head>
    <title>Cross domain ajax to request to salesforce</title>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.5.2/jquery.min.js"></script>
    <script>
    var url = "http://yourSite.force.com/jsonp_getData";

    function loadData()
    {
        jQuery.getJSON(url+'?callback=?',function(data)
        {
            jQuery('#results').html(data);
        });

    }

    $(document).ready(function() {
        loadData()
    });
    
    </script>
</head>
    <body>
    <div id="results">
    Your remotly fetched data will get loaded here
        </div>
    </body>
</html>

Remember, your visualforce page that serves the content must be publicly available, that means hosting it on a force.com site. Please note I wasn’t able to actually test the above code, because my org is freaking out on me right now (seriously, it’s doing some weird stuff), but it should be pretty close to accurate. Anyway, I hope this helps some people out there.

PS: I knew this topic seemed familiar. It’s because I wrote about it before!
Salesforce SOQL Query to JSON


Salesforce Custom Calendar with jQuery and Visualforce

Hey all,

I know I’ve been promising a new calendar for a while, and I’m sorry it’s taken so long. I didn’t quite know how in depth I wanted to go, and how much stuff I should build. I finally just decided to release a nice simple framework for other developers to build on. This is based on the super awesome excellent jQuery fullCalendar plugin by Adam Shaw. What this allows you to do is create full calendar records (a custom object). Each record represents a calendar. Each calendar has a source object, a start and end field, and a list of detail fields. When the calendar is loaded, it then queries the specified object for all records with a start date and end date falling in the visible range of the calendar. When an event is clicked a popup box appears with further information that is configurable on the fullcalendar record.
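To give you an idea of the shape of it, the core of the approach looks roughly like this (a sketch, not the packaged code; ‘calendarController.getEvents’ is a hypothetical remoting method):

//rough sketch: ask Apex for records in the visible range each time the
//calendar moves, then hand fullCalendar an array of {title, start, end}
$('#calendar').fullCalendar({
    events: function (start, end, callback) {
        //start and end are the edges of the currently visible range
        calendarController.getEvents(start.getTime(), end.getTime(),
            function (result, event) {
                if (event.status) {
                    callback(result);
                }
            });
    }
});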

[Screenshot: all the configuration that is needed to create a calendar]

[Screenshot: sample popup info when an event is clicked]


Click here for a demo (go to December 2011 to see some sample events).

You can grab the unmanaged package here

Or just grab the raw project and source from here (first time hosting a file on box.net, we’ll see how this goes).

Anyway, I hope this helps some people who are looking for a simple calendar system, or one to build on. I’m happy to review suggestions and ideas, but I can’t commit to getting anything done. Hope ya dig it!


Salesforce jQuery Calendar

Over the last year there have been a lot of people excited about my Salesforce jQuery calendar. Problem is, for one the code isn’t available due to a lack of hosting. Problem two is that it sucks. It uses some goofy visualforce page to pass information off to the apex class, and it can only query against one type of object. Overall, it’s pretty lame and not a good sample of the kind of work that is possible these days. So I am rebuilding it. In fact, I already have the core up and running. But now I want to know what kind of features you guys are interested in. Do you want a super bare bones easy to understand release, or do we want a little more robust full featured kind of thing? Let me know in the comments what you’d like to see in a new Salesforce calendar, and I’ll see what I can do.

For those who just want my basic, functional, super skeletal framework, I’ll be releasing it sometime tomorrow. I need to clean up a few little things, and I’ll probably release it as an unmanaged package for easy install, and I’ll host the code on my new box.net account.


Cloudspokes Challenge jQuery Clone and Configure Records

Hey everyone,
Just wrapped up another CloudSpokes contest entry. This one was the clone and configure records (with jQuery) challenge. The idea was to allow a user to copy a record that had many child records attached, then allow the user to easily change which child records were related via a drag and drop interface. I have a bit of jQuery background, so I figured I’d give this a whack. The final result, I think, was pretty good. I’d like to have been able to display more information about the child records, but the plugin I used was a wrapper for a select list, so the only data available was the label. Had I had more time I maybe could have hacked the plugin some to get extra data, or maybe even written my own, but drag and drop is a bit of a complicated thing (though really not too bad with jQuery), so I had to use what I could get in the time available. Anyway, you can see the entry below.

jQuery Clone and Configure Record


Salesforce Call Apex Code From Homepage Component

So this is something I had to do for a challenge recently, where I wanted to invoke my Apex class/method from a home page component. I had heard some solutions involving a link that called a Visualforce page (nice thought, but a little clunky and hard to pass parameters) or loading a Visualforce page tied to the controller in an iframe (again, works, but not the most elegant). I remembered that there is a connection library that allows JavaScript to call Apex, which is normally used for custom buttons and, back in the day, s-controls. I putzed around with the syntax some and realized I was making it way more complicated than it needed to be. Below is my solution, which I think is pretty slick.

1) Write an Apex method that is set as a webservice so it can be called by the library (example below)

    //this lives in a global class called favLinks (that name is referenced by the javascript below).
    //marking the method as a webservice exposes it so sforce.apex.execute can call it.
    webservice static string createLinkSimple(string url, string title)
    {
        string returnVar = 'Link added!';
        try
        {
            //create a favorite link record pointing at the given url, owned by the current user
            favLink__c thisLink = new favLink__c();
            thisLink.location__c = url;
            thisLink.name = title;
            thisLink.owner__c = UserInfo.getUserId();
            insert thisLink;
        }
        catch(Exception e)
        {
            returnVar = 'Error adding link: ' + e.getMessage();
        }
        return returnVar;
    }

2) Create a home page link (Setup->Customize->Home->Custom Links)
3) Create a new link. Set the content source as ‘OnClick JavaScript’
4) Enter code similar to the following

{!REQUIRESCRIPT("/soap/ajax/10.0/connection.js")}
{!REQUIRESCRIPT("/soap/ajax/10.0/apex.js")}

var result = sforce.apex.execute("favLinks","createLinkSimple", {url: document.location.href, title: document.title});
alert(result);

(The first param is the class name, the second is the method name, and then you pass the arguments in the {} brackets. The keys must match the Apex parameter names.)

5) Save it and add the link to a homepage component and display it in the sidebar. You are done!

This will allow you to invoke Apex classes and store the result in a variable to do whatever you want with. Combine this with my Salesforce – Pushing Custom Buttons to the Limit With jQuery and Apex article and you can build some pretty powerful, cool stuff.

Hope this helps someone out there.


Cloudspokes Challenge – Open Social Toolkit Voting App

Hey all,
Another week, another CloudSpokes challenge done. This one is the Open Social Toolkit voting application. It allows users to create topics to vote on, lets other users vote on them, and have discussions about them. Integrated with Facebook, and totally Force.com based. I used jQuery Mobile here to make sure it works on phones and iPads and whatnot, and the super awesome Force.com platform for hosting and schema. Really a match made in heaven if you ask me.

You can see the demo app here

See the videos of it in action too!
Interface and Front end Video
Backend and Schema Video

I’ll be doing a post later about the nifty facebook integration, cause to me, that is the coolest part.


While This is Awesome, There Must be a Better Way.

This is half an interesting post about how I solved a fairly complicated problem, and half me looking for a better way to do it. The problem I was solving is, in theory, fairly simple.

I want to import answers from an external survey software into our Salesforce org.

Pretty simple right? Standard import functionality. Nothing too complex. But now let’s build on this.

1) It should be automated. No manual triggering of the import.
2) It should be imported as it is created. Closer to real time than to ETL.
3) The place the data is coming from has no API. We only have direct ODBC access to it.
4) Since the process must be closer to real time, we should use a ‘push’ style mentality for import. The survey software should push the data into Salesforce as it is created, instead of Salesforce requesting it.
5) The survey software only gives me access to javascript to add functionality. No server side processing of any kind.
6) Solution should be as cloud based as possible. Ideally no code on our servers (we have a cold fusion server we use for a lot of data brokering, but we would like to get rid of it).

Suddenly got harder, eh? Right off the bat, point 3 means we are going to need some kind of middleware (in this case, our ColdFusion web server, which can talk to the database). Salesforce has no method to talk directly to a database, as it shouldn’t. Everything is done with APIs these days; nobody directly queries databases anymore. At least they shouldn’t. So my first task was to write an API that Salesforce could access to get its data. I wrote up a simple ColdFusion webservice that queries the survey database, formats the data in a reliable way, and returns a JSON array. While not ideal because it does use our server, at least it’s fairly transparent. It’s just a webservice call that could be replaced down the road if the survey software ever releases an API.

With the webservice up and running, I now need some Apex code to call out to it, get the data, and handle it. So I wrote a nice little Apex method that uses a simple HTTP GET with some URL params to invoke my ColdFusion webservice. Now it can get the JSON data. Using the awesome new JSON parser in Winter ’12, I am able to parse the data into a list of custom objects. Hooray.
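Just to give a feel for it, here’s a rough sketch of that callout and parse. The endpoint URL, wrapper class, and field names are stand-ins I made up; the wrapper just has to mirror whatever JSON the ColdFusion service actually returns, and the endpoint would need a Remote Site Setting.

public class surveyImportSketch
{
    //simple wrapper mirroring the JSON objects the ColdFusion service returns (fields are hypothetical)
    public class surveyAnswer
    {
        public string questionId;
        public string questionText;
        public string answer;
    }

    public static list<surveyAnswer> fetchAnswers(id surveyId, id contactId)
    {
        //simple HTTP GET against the ColdFusion webservice, with the ids passed as URL params
        HttpRequest req = new HttpRequest();
        req.setEndpoint('http://yourColdFusionBox/webservice.cfc?method=getData&survey=' + surveyId + '&contact=' + contactId);
        req.setMethod('GET');
        HttpResponse res = new Http().send(req);

        //the Winter '12 JSON parser turns the returned array straight into our wrapper objects
        return (list<surveyAnswer>) JSON.deserialize(res.getBody(), list<surveyAnswer>.class);
    }
}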

So now I need a Salesforce object to store this data, right? Actually, I am going to want a few. I ended up creating three in total. The first object, ‘survey’, is just a simple container object with pretty much no fields. The second object, ‘survey entry’, is an object that will contain all the answer objects for a person in a given survey. It of course has a lookup to the ‘survey’ object, as well as a lookup to a contact, and some other info (when they took the survey, etc). The third object, ‘survey answer’, is where the real juicy data is. It has the ID of the question, the text of the question, the person’s answer, and a lookup to the survey entry.

So now I modified my Apex class a bit to create the survey answer objects and relate them to the proper ‘survey entry’ (creating one if it doesn’t exist for this person in this survey yet). Boom, so now all the ‘hard’ work is done. I have a class that can be called that will import the survey data for a person into Salesforce. But wait, how do I actually call this thing? I don’t want this on a scheduler; I want it to get called when new data is available. So I need to make this class itself into a webservice of some kind.
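Continuing the sketch from above, the create-and-relate step looks something like this. Again, the object and field API names (Survey_Entry__c, Survey_Answer__c, and friends) are my shorthand for illustration, not necessarily the real schema.

    public static void importAnswers(id surveyId, id contactId, list<surveyAnswer> answers)
    {
        //find this person's entry for this survey, or create one if it doesn't exist yet
        list<Survey_Entry__c> entries = [SELECT Id FROM Survey_Entry__c
                                         WHERE Survey__c = :surveyId AND Contact__c = :contactId LIMIT 1];
        Survey_Entry__c entry = entries.isEmpty() ? new Survey_Entry__c(Survey__c = surveyId, Contact__c = contactId) : entries[0];
        if(entry.Id == null) insert entry;

        //turn each parsed JSON answer into a survey answer record tied to that entry
        list<Survey_Answer__c> answerRecords = new list<Survey_Answer__c>();
        for(surveyAnswer a : answers)
        {
            answerRecords.add(new Survey_Answer__c(
                Survey_Entry__c = entry.Id,
                Question_Id__c = a.questionId,
                Question_Text__c = a.questionText,
                Answer__c = a.answer));
        }
        insert answerRecords;
    }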

I have a bit of experience with Apex REST, so I decided that would be a fitting way to handle this. This class only needs the ID of the survey and the person for whom it needs to import data. That information is easily included in the URL or in POST fields, so I quickly modified my class to be an Apex REST service. Now it was ready to be accessed from the outside world. The question now is, how do I invoke the service itself?
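The REST wrapper itself ends up being only a few lines. Here’s a sketch of roughly what mine looks like, with a made-up urlMapping and parameter names, calling into the sketch methods from earlier:

@RestResource(urlMapping='/importSurvey/*')
global class surveyImportRest
{
    @HttpPost
    global static string doImport()
    {
        //the survey and contact ids ride along as params on the request
        RestRequest req = RestContext.request;
        id surveyId = req.params.get('survey');
        id contactId = req.params.get('contact');

        //call out to ColdFusion for the raw data, then build the records from it
        list<surveyImportSketch.surveyAnswer> answers = surveyImportSketch.fetchAnswers(surveyId, contactId);
        surveyImportSketch.importAnswers(surveyId, contactId, answers);
        return 'Imported ' + answers.size() + ' answers';
    }
}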

First I used the Apigee app console to make sure it was working as required. Apigee handles the oAuth and lets you specify params, so testing your Apex REST service couldn’t be easier. Once I had verified that it worked, I needed some method to allow the survey software to invoke it. The problem, of course, if you remember, is that the survey software only supports JavaScript, and JavaScript is still subject to that cross domain security policy BS. Normally you could use the script injection technique to make a callout to a different domain, but I need to set headers and such in the request, as well as make it a POST request, so that wasn’t going to fly. On top of that, I would have no idea how to let JavaScript start using oAuth or get a valid session ID. So here is where things get a little murky.

How could I allow a JavaScript-only application to invoke my Apex REST service? Looks like I would again have to turn to my ColdFusion middleware box. I wrote another webservice which can invoke the desired Apex method from ColdFusion. You can call Apex REST services using a session ID instead of having to deal with oAuth, so I went that route. I already have integration between Salesforce and ColdFusion through use of the awesome CFC library provided at RIA Forge (I actually helped contribute a bit to that). So I just wrote up what basically amounts to a wrapper. You can invoke it with a simple GET request and it will wrap the request with the required headers (authorization, content-length, content-type) and send it off to Salesforce. ColdFusion webservices have the awesome feature of being callable via a URL, instead of having to use a WSDL or whatever. Come to think of it, they are almost a forerunner of REST, but I digress.

So now I have a URL that, when called (with some arguments/params), will invoke my Apex REST service, which goes and gets the survey data and imports it. So now I still need to get the survey software to call this URL. Here I use the classic script injection technique to make my cross domain request (because the survey software and my ColdFusion box live on different domains), and much to my surprise, it all worked. If you are curious, the code to do that looks like this.


//inject a script tag pointing at the remote url; the browser fetches it, which side-steps the cross domain policy
function loadJSON(url)
{
    var headID = document.getElementsByTagName("head")[0];
    var newScript = document.createElement('script');
    newScript.type = 'text/javascript';
    newScript.src = url;
    headID.appendChild(newScript);
}

var survey = '12345';
var token = 'mkwcgvixvxskchn';
var contact = '003GASDFADFAFDAS';
var newUrl = 'http://XXXXXXXXXXXXX/webservice.cfc?method=importData&survey='+survey+'&token='+token+'&contact='+contact;
loadJSON(newUrl);

So in the end, this is the process I came up with.

1) User takes online survey from a 3rd party site (let’s call it survey.com)
2) survey.com invokes javascript which calls the ColdFusion webservice (which includes survey id and person id in the request)
3) ColdFusion receives the request, and ‘wraps’ it with the needed authorization information to make a valid HTTP request.
4) Salesforce custom Apex REST class receives the request (with survey id and person id still included)
5) Salesforce sends request BACK to a different ColdFusion webservice, which requests the actual question/answer data.
6) ColdFusion receives request. Queries survey database and encodes database info as an array of JSON encoded objects.
7) JSON encoded data is returned to the calling Apex, where it is parsed into custom objects and committed to the database.

Seems kind of obtuse, eh? Yet I can’t think of any way to make it leaner. I really would like to eliminate the 2nd and 3rd steps and just have the survey software invoke the Apex REST service directly somehow, or at least make the call totally cloud based. I suppose I could host a Visualforce page that does the same thing and have the JavaScript call that…

So anyway, here you can see an interesting case study of integrating a lot of different crap using some fairly new methods and techniques. I am totally open to suggestions on how to refine this. Right now there are just so many points of failure that it makes me nervous, but again, it seems to be about the best I can do. Thoughts and feedback welcome in the comments.