Oh my god. It's full of code!


Merge Salesforce Package.xml Files

So I’ve recently been asked to play a larger role in our version control process, which is not my strongest suit, so it’s definitely been a learning experience. Our process at the moment involves creating a feature branch in Bitbucket for each new user story. Folks do whatever work they need to do and then create a change set. From that they generate a zip file that contains all the stuff they worked on and a package.xml file (using the super cool ORGanizer Chrome plugin I just found out about). Once they have that, they send it over to me; I get their changes set up as a feature branch and then create a pull request to get it back into the master branch. Now I’m not sure if this is the best way to do things or not, but in each new branch we need to append all the contents of the new package.xml to the existing package.xml.

Much to my surprise I couldn’t find a quick, clean, easy way to merge two XML files. I tried a few online tools and nothing really seemed to work right, so me being me, I decided to write something to do it for me. I wasn’t quite sure how to approach this, but then in an instant I realized that, from my post a couple weeks ago, I can convert XML into a JavaScript object easily. Once I do that, I can simply merge the objects in memory and build a new file. One small snag I found is that the native JavaScript methods for merging objects actually overwrite any properties of the same name; they don’t smash them together like I was hoping. So with a little bit of elbow grease I managed to write some utility methods for smashing all the data together.

To use this, simply throw your XML files in the packages directory and run ‘runMerge.bat’ (this does require you to have Node.js installed). It will spit out a new package.xml in the root directory that is a merge of all your package.xml files. Either way, hope this helps someone.
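The heart of the tool is just a recursive merge that concatenates arrays instead of overwriting them. Here is a minimal sketch of the idea using xml2js (illustrative only; the real tool adds the de-duplication, sorting, and other goodies mentioned in the updates below):

const fs = require('fs');
const xml2js = require('xml2js');

//recursively merge source into target, concatenating arrays (like the <members> lists) instead of overwriting them
function smashTogether(target, source)
{
    for(var key in source)
    {
        if(Array.isArray(target[key]) && Array.isArray(source[key]))
        {
            target[key] = target[key].concat(source[key]);
        }
        else if(typeof target[key] === 'object' && typeof source[key] === 'object')
        {
            smashTogether(target[key], source[key]);
        }
        else
        {
            target[key] = source[key];
        }
    }
    return target;
}

xml2js.parseString(fs.readFileSync('packages/package1.xml', 'utf8'), function(err1, pkg1) {
    xml2js.parseString(fs.readFileSync('packages/package2.xml', 'utf8'), function(err2, pkg2) {
        var merged = smashTogether(pkg1, pkg2);
        fs.writeFileSync('package.xml', new xml2js.Builder().buildObject(merged));
    });
});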

UPDATE (5/19): Okay, after squashing a few bugs, I now proudly release a version of Package Merge that actually, like… works (I hope). Famous last words, I know.
UPDATE (5/20): Now supports automatic sorting of the package members, having XML files in sub-directories of the packages folder, forcing a package version, and merging all data into a master branch package file for continual cumulative add-ons.
Download Package Merge Here!


Mass Updating Salesforce Country and State Picklist Integration Values

So it’s Friday afternoon about 4:00 pm and I’m getting ready to wrap it up for the day. Just as I’m about to get up I hear the dreaded ping of my work’s instant messenger indicating I’ve been tagged. So of course I see what’s up: it’s a coworker wondering if there is any way I might be able to help with what otherwise will be an insanely laborious chore. They needed to change the ‘integration value’ on all the states in the United States from the full state name to just the state code (e.g. Minnesota -> MN) in the State and Country Picklist. Doing this manually would take forever, and moreover it had to be done in 4 different orgs. I told him I’d see what I could do over the weekend.

So my first thought was of course to see if I could do it in Apex: just find the table that contains the data, make a quick script, and boom, done. Of course, it’s Salesforce, so it’s never that easy. The state and country codes are stored in the metadata, and there isn’t really a great way to modify that directly in Apex (that I know of, without using that metadata wrapper class, but I didn’t want to have to install a package and learn a whole new API for this one simple task). I fooled around with a few different ideas in Apex but after a while it just didn’t seem doable; I couldn’t find any way to update the metadata even though I could fetch it. After digging around a bit I decided the best way was probably to simply download the metadata, modify it, and push it back. So first I had to actually get the metadata file. At first I was stuck because AddressSettings didn’t appear in the list of metadata objects in VS Code (I have a package.xml builder that lets me just select whatever I want from a list and it builds the file for me) and I didn’t know how to build a package.xml file that would get it. I found a handy Stack Overflow post that gave me the command

sfdx force:source:retrieve -m Settings:Address

which worked to pull the data. The same post also showed the package.xml file that can be used to either pull or push that metadata (with this you don’t even need the above command; you can just pull it directly by using ‘retrieve source in manifest from org’ in VS Code).

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <version>46.0</version>
    <types>
        <members>Address</members>
        <name>Settings</name>
    </types>
</Package>

Now that I had the data, the only real issue was that there wasn’t an easy way to just do a find and replace or something to update the file. The value for each state (only in the United States) had to be copied from the state code field into the integration value field. So I decided to whip up a quick Node.js project to do it. You can download it here (it comes with the original and fixed Address.settings-meta.xml files as well if you just want to grab those). It’s a pretty simple script, but it does require xml2js because parsing XML is a pain otherwise.

const fs = require('fs');
var xml2js = require('xml2js');
var parseString = xml2js.parseString;

try 
{
    const data = fs.readFileSync('Address.settings-meta.xml', 'utf8')

    parseString(data, function (err, result) {
        

        var root = result.AddressSettings.countriesAndStates[0].countries;
        //console.log(root);

        for(var i = 0; i < root.length; i++)
        {
            
            var countryName = root[i].integrationValue[0];
            if(countryName == 'United States')
            {
                console.log('Found US!');
        
                for(var j = 0; j < root[i].states.length; j++)
                {
                    console.log('Changing ' + root[i].states[j].integrationValue[0] + ' to ' + root[i].states[j].isoCode[0]);
                    root[i].states[j].integrationValue[0] = root[i].states[j].isoCode[0];
                }
            }
        }
        
        var builder = new xml2js.Builder();
        var xml = builder.buildObject(result);
    
        fs.writeFile("Address.settings-meta-fixed.xml", xml, function(err) {
            if(err) {
                return console.log(err);
            }
            console.log("The file was saved!");
        });     
    });
    
} 
catch (err) 
{
    console.error(err)
}

Output of my script. Always satisfying when stuff works.

With my fixed address settings file, the last step was “simply” to push it back into Salesforce. I’ll be honest, I haven’t used SFDX much, and this last step actually took longer than it should have. I couldn’t decide if I should be using force:source:deploy or force:mdapi:deploy. Seeing as I had to do this in a production org, I originally thought I had to use mdapi, but a new update made that no longer the case. mdapi wanted me to build a zip file or something and I got frustrated trying to figure it out. I’m just trying to push one damn file, why should I need to be building manifests and making zip files and whatever?! So after some trial and error with force:source:deploy I found that it could indeed push to prod and would take just a package.xml as its input. Initially it complained about not running any tests, so I told it to only run local tests. That also failed because some other code in the org was throwing errors. As a workaround I simply gave it a specific test to run (ChangePasswordController, which is in like every org) and that worked. The final command being

sfdx force:source:deploy -x manifest/package.xml -w 10 -l RunSpecifiedTests --runtests ChangePasswordController


Hooray it finally worked!

And voilà! The fixed metadata was pushed into the org and I spared my coworker days of horrific manual data entry. I know in the end this all turned out to be a fairly simple process, but it took me much longer than I initially figured, mostly due to not knowing the processes involved or how to reference the data I wanted, so I figured maybe this would save someone some time. Till next time.


Simplification

Hey all,

So I wanted to just throw this out there: I’ve moved from Minnesota to VERY rural Montana. I traded in my 3 bedroom rambler for a studio cabin on a ranch near the Canadian border. As such my access to technology is somewhat reduced, and I don’t know if I’ll be posting as much interesting stuff on this blog for a while. Odds are I’ll have some cool Salesforce stuff from time to time since I am maintaining my employment remotely, but I won’t be doing as much at-home hacking. If you are curious how things are going, why this happened, or just like my writing style, I’ve started a new blog detailing my journey. You can check it out here:

Montana Dan Blog

Anyway, I’ll still post what I can, but I figured I should at least inform the community why I might not be around quite as much. Till next time.

-Kenji


Deep Clone (Round 2)

So a day or two ago I posted my first draft of a deep clone, which would allow easy cloning of an entire data hierarchy. It was a semi proof-of-concept thing with some limitations (it could only handle somewhat smaller data sets, and didn’t let you configure all-or-nothing inserts, or specify if you wanted to copy standard objects as well as custom or not). I was doing some thinking and I remembered hearing about the Queueable interface, which allows for asynchronous processing and bigger governor limits. I started thinking about chaining queueable jobs together to allow for copying much larger data sets. Each invocation would get its own governor limits and could theoretically go on as long as it took, since you can chain jobs infinitely. I had attempted to use Queueable to solve this before, but I made the mistake of trying to kick off multiple jobs per invocation (one for each related object type), which didn’t work due to the limits imposed on queueable jobs. Once I thought of a way to only need one invocation per call (basically just rolling all the records that need to get cloned into one object and iterating over it) I figured I might have a shot at making this work.

I took what I had written before, added a few options, and I think I’ve done it: an asynchronous deep clone that operates in distinct batches, with all-or-nothing handling and cleanup in case of error. This is some hot-off-the-presses code, so there are likely some lingering bugs, but I was too excited not to share this. Feast your eyes!

public class deepClone implements Queueable {

    //global describe to hold object describe data for query building and relationship iteration
    public map<String, Schema.SObjectType> globalDescribeMap = Schema.getGlobalDescribe();
    
    //holds the data to be cloned. Keyed by object type. Contains cloneData which contains the object to clone, and some data needed for queries
    public map<string,cloneData> thisInvocationCloneMap = new map<string,cloneData>();
    
    //should the clone process be all or nothing?
    public boolean allOrNothing = false;
    
    //each iteration adds the records it creates to this property so in the event of an error we can roll it all back
    public list<id> allCreatedObjects = new list<id>();
    
    //only clone custom objects. Helps to avoid trying to clone system objects like chatter posts and such.
    public boolean onlyCloneCustomObjects = true;
    
    public static id clone(id sObjectId, boolean onlyCustomObjects, boolean allOrNothing)
    {
        
        deepClone startClone= new deepClone();
        startClone.onlyCloneCustomObjects  = onlyCustomObjects;
        startClone.allOrNothing = allOrNothing;
        
        sObject thisObject = sObjectId.getSobjectType().newSobject(sObjectId);
        cloneData thisClone = new cloneData(new list<sObject>{thisObject}, new map<id,id>());
        map<string,cloneData> cloneStartMap = new map<string,cloneData>();
        
        cloneStartMap.put(sObjectId.getSobjectType().getDescribe().getName(),thisClone);
        
        startClone.thisInvocationCloneMap = cloneStartMap;
        return System.enqueueJob(startClone);      
    }
    
    public void execute(QueueableContext context) {
        deepCloneBatched();
    }
        
    /**
    * @description Clones an object and the entire related data hierarchy. Currently only clones custom objects, but enabling standard objects is easy. It is disabled because it increases risk of hitting governor limits
    * @param sObject objectToClone the root object to be cloned. All descendant custom objects will be cloned as well
    * @return list<sobject> all of the objects that were created during the clone.
    **/
    public list<id> deepCloneBatched()
    {
        map<string,cloneData> nextInvocationCloneMap = new map<string,cloneData>();
        
        //iterate over every object type in the public map
        for(string relatedObjectType : thisInvocationCloneMap.keySet())
        { 
            list<sobject> objectsToClone = thisInvocationCloneMap.get(relatedObjectType).objectsToClone;
            map<id,id> previousSourceToCloneMap = thisInvocationCloneMap.get(relatedObjectType).previousSourceToCloneMap;
            
            system.debug('\n\n\n-------------------- Cloning ' + objectsToClone.size() + ' records');
            list<id> objectIds = new list<id>();
            list<sobject> clones = new list<sobject>();
            list<sObject> newClones = new list<sObject>();
            map<id,id> sourceToCloneMap = new map<id,id>();
            list<database.saveresult> cloneInsertResult;
                       
            //if this function has been called recursively, then the previous batch of cloned records
            //have not been inserted yet, so now they must be before we can continue. Also, in that case
            //because these are already clones, we do not need to clone them again, so we can skip that part
            if(objectsToClone[0].Id == null)
            {
                //if they don't have an id that means these records are already clones. So just insert them with no need to clone beforehand.
                cloneInsertResult = database.insert(objectsToClone,allOrNothing);

                clones.addAll(objectsToClone);
                
                for(sObject thisClone : clones)
                {
                    sourceToCloneMap.put(thisClone.getCloneSourceId(),thisClone.Id);
                }
                            
                objectIds.addAll(new list<id>(previousSourceToCloneMap.keySet()));
                //get the ids of all these objects.                    
            }
            else
            {
                //get the ids of all these objects.
                for(sObject thisObj :objectsToClone)
                {
                    objectIds.add(thisObj.Id);
                }
    
                //create a select all query to get all the data for these objects since if we only got passed a basic sObject without data 
                //then the clone will be empty
                string objectDataQuery = buildSelectAllStatment(relatedObjectType);
                
                //add a where condition
                objectDataQuery += ' where id in :objectIds';
                
                //get the details of this object
                list<sObject> objectToCloneWithData = database.query(objectDataQuery);
    
                for(sObject thisObj : objectToCloneWithData)
                {              
                    sObject clonedObject = thisObj.clone(false,true,false,false);
                    clones.add(clonedObject);               
                }    
                
                //insert the clones
                cloneInsertResult = database.insert(clones,allOrNothing);
                
                for(sObject thisClone : clones)
                {
                    sourceToCloneMap.put(thisClone.getCloneSourceId(),thisClone.Id);
                }
            }        
            
            for(database.saveResult saveResult :  cloneInsertResult)
            {
                if(saveResult.success)
                {
                    allCreatedObjects.add(saveResult.getId());
                }
                else if(allOrNothing)
                {
                    cleanUpError();
                    return allCreatedObjects;
                }
            }
              
            //Describes this object type so we can deduce its child relationships
            Schema.DescribeSObjectResult objectDescribe = globalDescribeMap.get(relatedObjectType).getDescribe();
                        
            //get this objects child relationship types
            List<Schema.ChildRelationship> childRelationships = objectDescribe.getChildRelationships();
    
            system.debug('\n\n\n-------------------- ' + objectDescribe.getName() + ' has ' + childRelationships.size() + ' child relationships');
            
            //then have to iterate over every child relationship type, and every record of that type and clone them as well. 
            for(Schema.ChildRelationship thisRelationship : childRelationships)
            { 
                          
                Schema.DescribeSObjectResult childObjectDescribe = thisRelationship.getChildSObject().getDescribe();
                string relationshipField = thisRelationship.getField().getDescribe().getName();
                
                try
                {
                    system.debug('\n\n\n-------------------- Looking at ' + childObjectDescribe.getName() + ' which is a child object of ' + objectDescribe.getName());
                    
                    if(!childObjectDescribe.isCreateable() || !childObjectDescribe.isQueryable())
                    {
                        system.debug('-------------------- Object is not one of the following: queryable, creatable. Skipping attempting to clone this object');
                        continue;
                    }
                    if(onlyCloneCustomObjects && !childObjectDescribe.isCustom())
                    {
                        system.debug('-------------------- Object is not custom and custom object only clone is on. Skipping this object.');
                        continue;                   
                    }
                    if(Limits.getQueries() >= Limits.getLimitQueries())
                    {
                        system.debug('\n\n\n-------------------- Governor limits hit. Must abort.');
                        
                        //if we hit an error, and this is an all or nothing job, we have to delete what we created and abort
                        if(allOrNothing)
                        {
                            cleanUpError();
                        }
                        return allCreatedObjects;
                    }
                    //create a select all query from the child object type
                    string childDataQuery = buildSelectAllStatment(childObjectDescribe.getName());
                    
                    //add a where condition that will only find records that are related to the records we are cloning, using the relationship field determined above
                    childDataQuery+= ' where '+relationshipField+ ' in :objectIds';
                    
                    //get the details of this object
                    list<sObject> childObjectsWithData = database.query(childDataQuery);
                    
                    system.debug('\n\n\n-------------------- Object queried. Found ' + childObjectsWithData.size() + ' records to clone');
                    
                    if(!childObjectsWithData.isEmpty())
                    {               
                        map<id,id> childRecordSourceToClone = new map<id,id>();
                        
                        for(sObject thisChildObject : childObjectsWithData)
                        {
                            childRecordSourceToClone.put(thisChildObject.Id,null);
                            
                            //clone the object
                            sObject newClone = thisChildObject.clone();
                            
                            //since the record we cloned still has the original parent id, we now need to update the clone with the id of its cloned parent.
                            //to do that we reference the map we created above and use it to get the new cloned parent.                        
                            system.debug('\n\n\n----------- Attempting to change parent of clone....');
                            id newParentId = sourceToCloneMap.get((id) thisChildObject.get(relationshipField));
                            
                            system.debug('Old Parent: ' + thisChildObject.get(relationshipField) + ' new parent ' + newParentId);
                            
                            //write the new parent value into the record
                            newClone.put(thisRelationship.getField().getDescribe().getName(),newParentId );
                            
                            //add this new clone to the list. It will be inserted once the deepClone function is called again. I know it's a little odd to not just insert them now
                            //but it saves on redundant logic in the long run.
                            newClones.add(newClone);             
                        }  
                        cloneData thisCloneData = new cloneData(newClones,childRecordSourceToClone);
                        nextInvocationCloneMap.put(childObjectDescribe.getName(),thisCloneData);                             
                    }                                       
                       
                }
                catch(exception e)
                {
                    system.debug('\n\n\n---------------------- Error attempting to clone child records of type: ' + childObjectDescribe.getName());
                    system.debug(e); 
                }            
            }          
        }
        
        system.debug('\n\n\n-------------------- Done iterating cloneable objects.');
        
        system.debug('\n\n\n-------------------- Clone Map below');
        system.debug(nextInvocationCloneMap);
        
        system.debug('\n\n\n-------------------- All created object ids thus far across this invocation');
        system.debug(allCreatedObjects);
        
        //if our map is not empty that means we have more records to clone. So queue up the next job.
        if(!nextInvocationCloneMap.isEmpty())
        {
            system.debug('\n\n\n-------------------- Clone map is not empty. Sending objects to be cloned to another job');
            
            deepClone nextIteration = new deepClone();
            nextIteration.thisInvocationCloneMap = nextInvocationCloneMap;
            nextIteration.allCreatedObjects = allCreatedObjects;
            nextIteration.onlyCloneCustomObjects  = onlyCloneCustomObjects;
            nextIteration.allOrNothing = allOrNothing;
            id  jobId = System.enqueueJob(nextIteration);       
            
            system.debug('\n\n\n-------------------- Next queueable job scheduled. Id is: ' + jobId);  
        }
        
        system.debug('\n\n\n-------------------- Cloning Done!');
        
        return allCreatedObjects;
    }
     
    /**
    * @description create a string which is a select statement for the given object type that will select all fields. Equivalent to SELECT * FROM objectName in SQL
    * @param objectName the API name of the object which to build a query string for
    * @return string a string containing the SELECT keyword, all the fields on the specified object and the FROM clause to specify that object type. You may add your own where statements after.
    **/
    public string buildSelectAllStatment(string objectName){ return buildSelectAllStatment(objectName, new list<string>());}
    public string buildSelectAllStatment(string objectName, list<string> extraFields)
    {       
        // Initialize setup variables
        String query = 'SELECT ';
        String objectFields = String.Join(new list<string>(globalDescribeMap.get(objectName).getDescribe().fields.getMap().keySet()),',');
        if(extraFields != null)
        {
            objectFields += ','+String.Join(extraFields,',');
        }
        
        objectFields = objectFields.removeEnd(',');
        
        query += objectFields;
    
        // Add FROM statement
        query += ' FROM ' + objectName;
                 
        return query;   
    }    
    
    public void cleanUpError()
    {
        database.delete(allCreatedObjects);
    }
    
    public class cloneData
    {
        public list<sObject> objectsToClone = new list<sObject>();        
        public map<id,id> previousSourceToCloneMap = new map<id,id>();  
        
        public cloneData(list<sObject> objects, map<id,id> previousDataMap)
        {
            this.objectsToClone = objects;
            this.previousSourceToCloneMap = previousDataMap;
        }   
    }    
}    

It’ll clone your record, your record’s children, your record’s children’s children, and yes, even your record’s children’s children’s children (you get the point)! Simply invoke the deepClone.clone() method with the id of the object to start the clone process at, whether you want to only copy custom objects, and if you want to use all-or-nothing processing. Deep Clone takes care of the rest, automatically handling figuring out relationships, cloning, re-parenting, and generally being awesome. As always I’m happy to get feedback or suggestions! Enjoy!

-Kenji


Amazon Alexa is going to run/ruin my life

It was my birthday recently, just turned 28. As a gift to myself I finally decided to order an Amazon Alexa because I’ve wanted one since I heard about it a few months ago. If you aren’t familiar, it’s basically like a ‘Siri’ or ‘Cortana’ thing in a standalone personal assistant device that lives in your home. It’s always on and responds to voice commands from surprisingly far away. It can tell you the weather, check your calendar, manage your shopping list and all that kind of nifty stuff. However, it can do more, much more. Thanks to the ability to develop custom ‘skills’ (their name for apps) and out-of-the-box If This Then That (IFTTT) integration, you can quickly start making Alexa do just about anything. I’ve owned it only a day now and I’ve already taught it two new tricks.

Also, if you aren’t familiar with IFTTT it’s an online service that basically allows you to create simple rules that perform actions (hence the name, if this then that). They have the ability to integrate all kinds of different services so you no longer have to be an advanced programmer to automate much of your life. It’s a cool free service and I’d highly recommend checking it out.

You may remember a while back I did that whole write-up about making automatic door locking software to lock and unlock my front door. I figured a good way to jump into making custom commands would be to see if I could teach Alexa to do it for me upon request. Turns out it was surprisingly easy. Since I already had the web service up and running to respond to HTTP POST requests, I simply needed to create an IFTTT rule to send a request when Alexa heard a specific phrase. You may recall that I had some problems with IFTTT not seeming to work for me before, but it seems to now; might have been an error on my part for all I know. Here is the rule as it stands currently.


Every command issued to Alexa starts with the ‘wake word’, in this case Alexa (since you can only pick between Alexa, Echo, and Amazon). Second is the command to issue so it knows what service to route the request to. For this the command is ‘trigger’, so Alexa knows to send the request to IFTTT. Then you simply include the phrase to match, and what to do. I decided to make the phrase ‘lock the door’, which when heard sends a POST request with the given JSON payload to my web server, which is listening for it. Boom, done.
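For completeness, the receiving end doesn’t need to be anything fancy. A stripped-down sketch of a web service that routes on that action value might look like this (the real one from my door lock post also does authentication and actually talks to the lock hardware):

const http = require('http');

http.createServer(function(request, response)
{
    var body = '';
    request.on('data', function(chunk) { body += chunk; });
    request.on('end', function()
    {
        var payload = JSON.parse(body); //e.g. {"action":"lock"}

        if(payload.action == 'lock')
        {
            console.log('Locking the door!');
            //talk to the lock hardware here
        }
        else if(payload.action == 'unlock')
        {
            console.log('Unlocking the door!');
        }
        response.end('OK');
    });
}).listen(8080);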

The next thing I wanted to do (and this is still just a very rough outline of a final idea) is Chromecast integration. Ideally I’d like to be able to say ‘Alexa trigger play netflix [moviename]’, but as of right now triggers created from IFTTT for Alexa can’t really contain variables aside from just the whole command itself. So I could do ‘Alexa trigger netflix bojack horseman’ and create a specific request just for that show, but there is no way to create a generic template kind of request and pass the details on to the web service that is listening. That aside, what I do have currently is a start.

I found a command line tool that can interact with the Chromecast (check this guide for Command Line Chromecast), and then created an exec statement to call it from my web service. My door lock and unlock service already has logic for handling different commands, so I just created a new one called ‘play’ that plays my test video.

else if(action == 'play')
{
	console.log('Casting Requested Thing!');
	var exec = require('child_process').exec;
	var cmd = 'castnow c:\\cast\\testVideo.mp4 --device "Upstairs Living Room"';

	exec(cmd, function(error, stdout, stderr) {
	});					
}

So that turned out to be pretty easy. One small caveat: castnow is really meant to be an application that you keep open and interact with to control the video. Since it is being invoked via a web service call it doesn’t really get to ‘interact’ with it. I suppose you might be able to do some crazy shit like keeping a web socket open and continuing to pass commands to it, but that’s for another day.

The IFTTT command is basically the same as the door lock one. Just change the phrase that triggers it, and change the JSON payload to have the action as “play” instead of “lock” or “unlock”, and the command gets triggered. I also created a corollary rule and bit of code for stopping the casting of the current video by playing another, empty video file (since there isn’t an explicit stop command in the castnow software).
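That bit of code ends up looking nearly identical to the play branch, just pointed at a blank placeholder video (a sketch; the file name and device are whatever yours happen to be):

else if(action == 'stop')
{
	console.log('Stopping Cast!');
	var exec = require('child_process').exec;

	//casting a short blank video boots whatever is currently playing off the chromecast
	var cmd = 'castnow c:\\cast\\blankVideo.mp4 --device "Upstairs Living Room"';

	exec(cmd, function(error, stdout, stderr) {
	});
}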

There you have it, with Alexa, IFTTT, and a home web server you can start to do some pretty cool customized automation stuff. I think next up is getting it to order my favorite local pizza for me 😀


URL Encode Object/Simple Object Reflection in Apex

Hey all,

Kind of a quick yet cool post for you today. Have you ever wanted to be able to iterate over the properties of a custom class/object? Maybe you wanted to read out all the values, or for some other reason (such as serializing the object, perhaps) wanted to be able to figure out what properties an object contained but couldn’t find a way? We all know Apex has come a long way, but it is still lacking a few core features, reflection being one of them. Recently I had a requirement where I wanted to be able to take an object and serialize it into URL format. I didn’t want to have to manually type out every property of the object since it could change, and I’m lazy like that. Without reflection this seems impossible, but it’s not!

Remember that the JSON deserialize methods Apex has are capable of creating an iterable version of an object by casting it into a list or a map, and suddenly this becomes much more viable. Check it out.


    public static string urlEncodeObject(object objectToEncode)
    {
        string urlEncodedString = '';
        String serializedObject = JSON.serialize(objectToEncode);
        
        Map<String,Object> deserializedObject = (Map<String,Object>) JSON.deserializeUntyped(serializedObject);
        
        for(String key : deserializedObject.keySet())
        {
            //encode each key and value separately so the = and & separators themselves don't get encoded
            string value = string.valueOf(deserializedObject.get(key));
            urlEncodedString += encodingUtil.urlEncode(key,'utf-8') + '=' + (value == null ? '' : encodingUtil.urlEncode(value,'utf-8')) + '&';
        }
        //trim the trailing ampersand
        urlEncodedString = urlEncodedString.removeEnd('&');
        return urlEncodedString;
    }       

There you have it. By simply serializing an object, then deserializing it, we can now iterate over it. Pretty slick, eh? Not perfect, I know, and it doesn’t work awesome for complex objects, but it’s better than nothing until Apex introduces some real reflection abilities.


Using google forms and sheets as a data source for graphs

Hey all,

Long time no post! I’ve been on vacation and in general just being kind of lazy, but today I’ve got a simple fun project for us. You see, my girlfriend is always right. Well, almost always; very rarely I’ll remember something correctly, but in general she’s always correct (and not in the ‘haha men are so dumb, women know everything’ way, she legitimately remembers way more stuff than me). This phenomenon has gotten so pervasive that, just for kicks, I wanted to create a live chart running in the house displaying how often either of us was right about stuff (I know I’ll regret this eventually). So for my mini project I had a few goals:

1) Have a live chart that updates automatically on a TV in my house (we have an extra TV that we generally just use as a media center/music streaming box via a Chromecast)

2) Make an easy interface to add new data to the chart

3) Make the chart slick looking

4) Keep it simple. This is basically a hobby project so I don’t want to go too nuts.

Before we get started, you can see the demo here:
http://xerointeractive-developer-edition.na9.force.com/partyForce/RightChart

Please close it when you are done though; my dev org only gets so many HTTP requests per day (note to self: add some kind of global request caching or something).

I was able to complete this project in about an hour and a half and meet all my goals. So now I’ll show you how.

Right off the bat I had a general idea of how I would do this (though the approach did morph a bit). From a previous project I knew it was possible to store and retrieve data in a Google spreadsheet. You can get the raw CSV data by using a special URL, and then import that via an HTTP request from an Apex controller. I figured this was easier than setting up a Salesforce object and creating a custom interface for adding data, and hell, it’s cool to be able to utilize Google Forms data for something.


My basic form for collecting data

From there it’s just a matter of passing the data to a chart system and making it poll the sheet occasionally. So anyway, first off we are going to need a Google form to collect our data. Head to Google Docs and create a new spreadsheet. Use the Forms menu to create a new form for your page. In my case, it’s just a simple single question multiple choice (with an ‘other’ option). Each time the form is submitted it puts the name and a timestamp into a sheet called ‘Form Responses 1’. This data format works pretty well. I played around with trying to create another sheet that used COUNTIF to sum all the times various names appeared in the sheet, but that approach had the limiting factor of only working for names I pre-coded it for; it wasn’t dynamic enough. So I decided to just let Google collect the data, and I’d handle the summing and formatting in my code.


Your form should be gathering data in a way that looks something like this
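In raw form it’s just two columns, a timestamp and a name, something like this (the names and header are illustrative; the header text comes from your form question):

Timestamp,Who was right?
6/1/2015 18:03:11,Tammy
6/1/2015 19:22:45,Dan
6/2/2015 8:10:02,Tammy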

To actually get the data in a usable form for programming, we need a raw CSV version of it. Thankfully Google will provide this for you (though they aren’t exactly forthcoming with it). As of this writing, to get the raw CSV of your sheet, go to File and hit Publish. Just publish the one sheet. You should be given a shareable URL with a long unique-looking ID string. Take that and put it into this URL format

https://docs.google.com/spreadsheets/d/key/export?format=csv&id=key

Just replace both instances of the word key with your document’s unique ID. You should be able to put that URL in your browser and it should automatically attempt to download your spreadsheet in CSV format. If so, you are in good shape. If not, make sure you published it, and it’s shared, and all that good stuff. Once you have that working we can move to the next step.


Publish your form results sheet and make note of that unique ID, you’ll need it!
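If you’d rather sanity check the URL from a script than the browser, a couple lines of Node will do it (a sketch; fetch is built into Node 18+, and the key is of course your own):

//quick test that the published sheet is reachable and returns CSV
const key = 'YOUR_SHEET_KEY';

fetch('https://docs.google.com/spreadsheets/d/' + key + '/export?format=csv&id=' + key)
    .then(function(res) { return res.text(); })
    .then(function(csv) { console.log(csv); });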

So now that the data exists and is accessible, we need to GET it. I decided, because it’s the easiest publishing platform I know, that I’d just use Salesforce Sites, which means Apex is going to be my back end. So I’ll need an Apex call to fetch the CSV data from the Google sheet, and some code to parse that CSV into some kind of logical structure. Again, thankfully from past projects, I had just such a class.

//gets CSV data from a given URL and parses it into a list of lists
global class RightChartController 
{

    public String getDataSourceUrl() {
        return 'Your google document url here';
    }

   

    //gets CSV data from a given source
    @remoteAction
    global static  List<List<String>> importCSV(string url)
    {
         List<List<String>> result = new List<List<String>>(); 
        try
        {
            string responseBody;
            
            //create http request to get import data from
            HttpRequest req = new HttpRequest();
            req.setEndpoint(url);
            req.setMethod('GET');         
            Http http = new Http();
            
            //if this is not a test actually send the http request. if it is a test, hard code the returned results.
            if(!Test.isRunningTest())
            {
                HTTPResponse res = http.send(req);
                responseBody = res.getBody();
            }
            else
            {
                responseBody = 'Name,Count\ntammy,10\njoe,5\nFrank,0';
            }
            
            //the data should come back in CSV format, so hand it off to the parsing function which will make a list of lists of strings (each list is one row, each item within that sub list is one column)
            result = RightChartController.parseCSV (responseBody,true);
        }
        catch(exception e)
        {
            system.debug('\n\n\n\n----------------------------- Error importing chart data. ' + e.getMessage() + ' on line ' + e.getLineNumber());
        }
        return result;
    }
    
    //parses a csv file. Returns a list of lists. Each main list is a row, and the list contained is all the columns.
    public static List<List<String>> parseCSV(String contents,Boolean skipHeaders)
    {
        List<List<String>> allFields = new List<List<String>>();
    
        // replace instances where a double quote begins a field containing a comma
        // in this case you get a double quote followed by a doubled double quote
        // do this for beginning and end of a field
        contents = contents.replaceAll(',"""',',"DBLQT').replaceall('""",','DBLQT",');
        // now replace all remaining double quotes - we do this so that we can reconstruct
        // fields with commas inside assuming they begin and end with a double quote
        contents = contents.replaceAll('""','DBLQT');
        // we are not attempting to handle fields with a newline inside of them
        // so, split on newline to get the spreadsheet rows
        List<String> lines = new List<String>();
        try {
            lines = contents.split('\n');
        } catch (System.ListException e) {
            System.debug('Limits exceeded?' + e.getMessage());
        }
        Integer num = 0;
        for(String line : lines) {
            // check for blank CSV lines (only commas)
            if (line.replaceAll(',','').trim().length() == 0) break;
            
            List<String> fields = line.split(',');  
            List<String> cleanFields = new List<String>();
            String compositeField;
            Boolean makeCompositeField = false;
            for(String field : fields) {
                if (field.startsWith('"') && field.endsWith('"')) {
                    cleanFields.add(field.replaceAll('DBLQT','"'));
                } else if (field.startsWith('"')) {
                    makeCompositeField = true;
                    compositeField = field;
                } else if (field.endsWith('"')) {
                    compositeField += ',' + field;
                    cleanFields.add(compositeField.replaceAll('DBLQT','"'));
                    makeCompositeField = false;
                } else if (makeCompositeField) {
                    compositeField +=  ',' + field;
                } else {
                    cleanFields.add(field.replaceAll('DBLQT','"'));
                }
            }
            
            allFields.add(cleanFields);
        }
        if (skipHeaders) allFields.remove(0);
        return allFields;       
    }
}

So now we’ve got the back end code required to both get the data and parse it (don’t forget to add a remote site exception in your Salesforce security controls for docs.google.com!). Now we just need an interface to use that data and display it in a nifty chart. Using Highcharts this is pretty easy. Mine ended up looking something like this (you don’t have to tell me the code is kind of sloppy, this was just a quick throw-together project).

<apex:page controller="RightChartController" sidebar="false" showHeader="false" standardStylesheets="false">
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
    <script src="https://code.highcharts.com/highcharts.js"></script>
    <script src="https://code.highcharts.com/highcharts-3d.js"></script>
    <script>
        //load the document source locally in case we want to let the user change it or something later
        var docSource = '{!dataSourceUrl}';
        var chart;
        
        //fetches the data from the google sheet
        function getData(docSource,callback)
        {
           Visualforce.remoting.Manager.invokeAction(
                '{!$RemoteAction.RightChartController.importCSV}', 
                docSource,
                function(result, event){
                    if (event.status) {
                        callback(result);
                    }
                }, 
                {escape: true}
            );   
     
        }
        
        //massages the data from being an array of arrays (one line per form entry) into an array of objects with totals
        //should probably be refactored to make it more efficient, but whatever.
        function translateDataToHighChartFormat(csvData)
        {
            var chartData = new Array();
            var totals = new Object();
            
            for(var i = 0; i < csvData.length; i++)
            {
                var timestamp = csvData[i][0];
                var name = csvData[i][1];
                 
                if(totals.hasOwnProperty(name))
                {
                    totals[name]++;
                }
                else
                {
                    totals[name] = 1;
                }
            }
            
            for(key in totals)
            {
                var thisPoint = new Object();
                thisPoint.name = key;
                thisPoint.y = totals[key];
                chartData.push(thisPoint);
            }
            
            return chartData;
        }
        
        //create the chart on document load
        $(function () 
        {
            chart = new Highcharts.Chart({
                chart: {
                    type: 'pie',
                    options3d: {
                        enabled: true,
                        alpha: 45,
                        beta: 0
                    },
                    renderTo: 'container'
                },
                title: {
                    text: 'Told You So'
                },                
                plotOptions: {
                    pie: {
                        depth: 25
                    }
                },
                series: [{
                    data: []
                }]
            });
            
            //set interval timer to poll the document every 10 seconds
            setInterval(function(){
                getData(docSource,function(result){
                    chart.series[0].setData(translateDataToHighChartFormat(result));
                    
                });
            },10000);
            
            //get the data once initially so we don't have to wait for the first delay to get data
            getData(docSource,function(result){
                chart.series[0].setData(translateDataToHighChartFormat(result));
                $('#Loading').hide();
            });
        });    
    </script>
    <div id="container" style="height: 400px"></div>
    <div id="Loading" style="text-align:center; font-weight:bold; font-size: 24px">Loading Chart Data Please Wait</div>
</apex:page>

If everything has gone smoothly, you should end up with something that looks like this


With our page alive, it’s a simple matter to add it to a Salesforce Site. Anyone can view it, and anyone you give the form link to will be able to add data to it. As data is added, the chart will automatically redraw itself every 10 seconds with the new data set. Then it was just a simple matter of having the chart open on some computer and using the chrometab app for Chrome to send it to my Chromecast. Now we can be reminded of how stupid I am all the time….. what have I done?


Stripping Nulls from a JSON object in Apex

NOTE: If you don’t want to read the wall of text/synopsis/description, just scroll to the bottom. The function you need is there.

I feel dirty. This is the grossest hack I have had to write in a while, but it is also too useful not to share (I think). Salesforce did us an awesome favor by introducing the JSON.serialize utility; it can take any object and serialize it into JSON, which is great! The only problem is that you have no control over the output JSON; the method takes no params except for the source object. Normally this wouldn’t be a big deal, I mean there isn’t a lot to customize about JSON usually, it just is what it is. There is however one case where you may want to control the output, and that is in the case of nulls. You see, most of the time when you are sending JSON to a remote service, if you have a param specified as null it will just skip over it, as it should. Some of the stupider APIs try and process that null as if it were a value. This is especially annoying when the API has optional parameters and you are using a language like Apex, which being strongly typed makes it very difficult to modify an object at run time to remove a property.

For example, say I am ordering a pizza via some kind of awesome pizza ordering API. The API might take a size, some toppings, and a desired delivery time (for future deliveries). Their API documentation states that delivery time is an optional param, and if not specified it will be delivered as soon as possible, which is nice. So I write my little class in Apex

    public class pizzaOrder
    {
    	public string size;
    	public list<string> toppings;
    	public datetime prefferedDeliveryTime;
    
    }
    
    public static string orderPizza(string size, list<string> toppings, datetime prefferedDeliveryTime)
    {
    	pizzaOrder thisOrder = new pizzaOrder();
    	thisOrder.size = size;
    	thisOrder.toppings = toppings;
    	thisOrder.prefferedDeliveryTime	= prefferedDeliveryTime;
    	
    	string jsonOrderString = JSON.serialize(thisOrder);
    	
    	return jsonOrderString;
    }
    
    list<string> toppings = new list<string>();
    toppings.add('cheese');
    toppings.add('black olives');
    toppings.add('jalepenos');
                     
    orderPizza('large', toppings, null);

And your resulting JSON looks like

{"toppings":["cheese","black olives","jalepenos"],"size":"large","prefferedDeliveryTime":null}

which would work beautifully, unless the pizza API is set up to treat any key present in the JSON object as an actual value, which in this case would be null. The API would freak out saying that null isn’t a valid datetime, and you are left yelling at the screen trying to figure out why the stupid API can’t figure out that if an optional param has a null value, it should just skip it instead of trying to evaluate it.

Now in this little example you could easily work around the issue by just specifying the prefferedDeliveryTime as the current datetime if the user didn’t pass one in. Not a big deal. However, what if there isn’t a valid default value to use? In my recent problem there is an optional account number I can pass in to the API. If I pass it in, it uses that. If I don’t, it uses the account number set up in the system. So while I want to support the ability to pass in an account number, if the user doesn’t enter one my app will blow up, because when the API encounters a null value for that optional param it explodes. I can’t not have a property for the account number because I might need it, but including it as a null (when the user just wants the default, which Salesforce has no way of knowing) makes the API fail. Okay, whew, so now hopefully we all understand the problem. Now what the hell do we do about it?

While trying to solve this, I explored a few different options. At first I thought of deserializing the JSON object back into a generic object (map<string,object>), checking for nulls in any of the key/value pairs, removing them, then serializing the result. This failed due to difficulties with detecting the type of object each value was (tons of ‘unable to convert list<any> to map<string,object>’ errors that I wasn’t able to resolve). Of course you also have a recursion issue, since you’ll need to look at every element in the entire object, which could be infinitely deep/complex, so that adds another layer of complexity. Not impossible, but probably not super efficient, and I couldn’t even get it to work. Best of luck if anyone else tries.

The next solution I investigated was trying to write my own custom JSON generator that would just not put nulls in the object in the first place. This too quickly fell apart, because I needed a generic function that could take any kind of object and turn it into JSON, since this function would have to be used to strip nulls from about 15 different API calls. I didn’t look super hard at this because all the code I saw looked really messy and I just didn’t like it.

The solution I finally decided to go with, while gross, dirty, hackish, and probably having earned me a spot in programmer hell, is also simple and efficient. Once I remembered that JSON is just a string, and can be manipulated as such, I started thinking about maybe using regex (yes, I am aware that when you solve one problem with regex, now you have two) to just strip out the nulls. Of course then you have to worry about cleaning up syntax (extra commas, commas against braces, etc.) when you just rip elements out of the JSON string, but I think I’ve got a little function here that will do the job, at least until Salesforce offers a ‘don’t serialize nulls’ option in their JSON serializer.

    public static string stripJsonNulls(string JsonString)
    {

    	if(JsonString != null)   	
    	{
			JsonString = JsonString.replaceAll('\"[^\"]*\":null',''); //basic removeal of null values
			JsonString = JsonString.replaceAll(',{2,}', ','); //remove duplicate/multiple commas
			JsonString = JsonString.replace('{,', '{'); //prevent opening brace from having a comma after it
			JsonString = JsonString.replace(',}', '}'); //prevent closing brace from having a comma before it
			JsonString = JsonString.replace('[,', '['); //prevent opening bracket from having a comma after it
			JsonString = JsonString.replace(',]', ']'); //prevent closing bracket from having a comma before it
    	}
  	
	return JsonString;
    }

Which, after running on our previously generated JSON, gives us

{"toppings":["cheese","black olives","jalepenos"],"size":"large"}

Notice: no null prefferedDeliveryTime key. It’s not null, it’s just nonexistent. So there you have it, six lines of find and replace to remove nulls from your JSON object. Yes, you could combine them and probably make it a tad more efficient; I went for readability here. So sue me. Anyway, hope this helps someone out there, and if you end up using this, I’m sure I’ll see you in programmer hell at some point. Also, if anyone can make my initial idea of recursively spidering the JSON object and rebuilding it as a map of <string,object> without the nulls work, I’d be most impressed.


Super Handy Mass Deploy Tool

So I know it has been a while. I’m not dead, I promise, just busy. Busy with trying to keep about a thousand orgs in sync: pushing code changes, layout changes, all kinds of junk from one source org to a ton of other orgs. I know you are saying ‘just use managed packages, or change sets’. Managed packages can be risky early in the dev process because you usually can’t remove components, and you get locked into a bit of a structure that you might not quite be settled on. Change sets are great, but many of these orgs are not linked; they are completely disparate, for different clients. Over the course of the last month or two it’s become apparent that just shuffling data around in Eclipse wasn’t going to do it anymore. I was going to have to break into using ANT and the Salesforce migration tool.

For those unaware, ANT is some kind of magical command line tool that is used by the Salesforce migration tool (or maybe vice versa, not really sure of the relationship there), but when they work together they allow you to script deployments, which can be pretty useful. Normally though, trying to actually set up a deployment with ANT is a huge pain in the butt, because you have to be modifying XML files, setting up build files and stuff; in general it’s kind of slow to do. However, if you could write a script to write the files needed by the deployment script, now that would be handy. That is where this tool I wrote comes in. Now don’t get me wrong, it’s nothing fancy. It just helps make generating deployments a little easier. What it does is allow you to specify a list of orgs, and their credentials, that you want to deploy to. In the deploy folder you place the package.xml file that contains the definitions of what you want to deploy, and the metadata itself (classes, triggers, objects, etc). Then when you run the program, one by one it will log into each org, back it up, then deploy your package contents. It’s a nice set-it-and-forget-it way of deploying to numerous orgs in one go.

So here is what we are going to do. First of all, you are going to need to make sure you have a Java Runtime Environment (JRE) and the Java Development Kit (JDK) installed. Make sure to set your JAVA_HOME environment variable to wherever the JDK library is installed (for me it was C:\Program Files\Java\jdk1.8.0_05). Then grab ANT and follow its guide for installation. Then grab the Force.com migration tool and get that installed in your ANT setup. Then last, grab my SF Deploy Tool from Bitbucket (https://Daniel_Llewellyn@bitbucket.org/Daniel_Llewellyn/sf-deploy-tool.git)

Now we have all the tools we need to deploy some components, but we don’t have anything to deploy, and we haven’t set up who we are going to deploy it to. So let’s use Eclipse to grab our deployable contents and generate our package.xml file (which contains the list of stuff to deploy). Fire up Eclipse and create a new project. For the project contents, select whatever you want to deploy to your target orgs. This is why using a package is useful: it simplifies this process. Let the IDE download all the files for your project, then navigate to the project contents folder on your computer. Copy everything inside the src folder, including that package.xml file, and paste it into the deploy folder of my SF Deploy Tool. This is the payload that will be pushed to your orgs.

The last step in our setup is to tell the deploy tool which orgs to push this content into. Open the orgs.txt file in the SF Deployer folder and enter the required information, one org per line. Each org requires a username, password, token, url and name attribute, separated by semicolons, with an equals sign used to denote each key/value pair. E.g.

username=xxxx;password=xxxxx;token=xxxxxxxxx;url=https://login.salesforce.com;name=TEST ORG

Now with all your credentials saved, you can run the SalesforceMultiDeploy.exe utility. It will iterate over each org one by one, back up the org, then deploy your changes. The console window will keep you informed of its progress as it goes and let you know when it’s all done. Of course this process is still subject to all the normal deploy problems you can encounter, but if everything in the target orgs is prepared to accept your deployment package, this can make life much easier. You could, for example, write another small script that copies the content from your source org at the end of each week, slaps it into the deploy folder, then invokes the deployment script, giving you an automated process that keeps your orgs in sync.

Also I just threw this tool together quickly and would love some feedback. So either fork it and change it, or just give me ideas and I’ll do my best to implement them (one thing I really want to do is make this multi threaded so that it can do deployments in parallel instead of serial, which would be a huge bonus for deployment speeds). Anyway as always, I hope this is useful, and I’ll catch ya next time.

-Kenji


Salesforce Orchestra CMS Controller Extensions

So I’ve been working with Orchestra CMS for Salesforce recently, and for those who end up having to use it, I have a few tips.

1) If you intend on using jQuery (a newer version than the one they include), include it and put it in no-conflict mode. Newer versions of jQuery will break the admin interface (mostly around trying to publish content), so you absolutely must put it in no-conflict mode. This one took me a while to debug.

2) While not officially supported, you can use controller extensions in your templates. However, the class and all contained methods MUST be global. If they are not, again, you will break the admin interface. This was kind of obvious after the fact, but it took me well over a week to stumble across the fix. The constructor for the extension takes a cms.CoreController object. As an alternative, if you don’t want to mess with extensions, you can use apex:include to include another page that has its controller set to whatever you want. The included page does not need to have the CMS controller as its primary controller, so you can do whatever you want there. I might actually recommend that approach, as Orchestra’s official stance is that they do not support extensions, and even though I HAD it working, today I am noticing it act a little buggy (not able to add or save new content to a page).

3) Don’t be afraid to use HTML component types in your pages (individual items derived from your page template) to call javascript functions stored in your template. In fact I found that you cannot call remoting functions from within an HTML component directly, but you can call a function which invokes a remoting function.

So if we combine the above techniques we’d have a controller that looks like this

global class DetailTemplateController
{
    //Orchestra CMS hands the extension its core controller; we don't need it here,
    //but the constructor must accept it (and remember, everything must be global)
    global DetailTemplateController(cms.CoreController stdController) {

    }

    //remote action invoked from the javascript in our template
    @RemoteAction
    global static List<User> getUsers()
    {
        return [select Id, Name, Title, FullPhotoUrl from User];
    }
}

And your template might then look something like this

<apex:page id="DetailOne" controller="cms.CoreController" standardStylesheets="false" showHeader="false" sidebar="false" extensions="DetailTemplateController" >
	<apex:composition template="{!page_template_reference}">
		<apex:define name="header"> 
			<link href="//ajax.aspnetcdn.com/ajax/jquery.ui/1.10.3/themes/smoothness/jquery-ui.min.css" rel='stylesheet' />

			<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
			<script> var jqNew = jQuery.noConflict();</script> 
			<script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.10.3/jquery-ui.min.js"></script> 

			<script>
        	        var website = new Object();
			jqNew( document ).ready(function() {
				console.log('jQuery loaded');
			});

			website.buildUserTable = function()
			{
				//remoting request
				Visualforce.remoting.Manager.invokeAction(
					'{!$RemoteAction.DetailTemplateController.getUsers}', 
					function(result, event){
						if (event.type === 'exception') 
						{
							console.log(event.message);
						} 
						else 
						{
							var cols = 0;

							var tbl = jqNew('#bioTable > tbody');
							var tr;
							for(var i = 0; i < result.length; i++)
							{
								if(cols == 0){tr = jqNew('<tr></tr>');}                              

								var td = jqNew('<td></td>');

								var img = jqNew('<img class="profilePhoto">');
								img.attr('src',result[i].FullPhotoUrl);
								img.attr('title',result[i].Title);
								img.attr('alt',result[i].Name);
								img.data("record", result[i]);
								img.attr('id',result[i].Id);

								td.append(img);

								tr.append(td);

								if(cols == 2 || i == result.length-1){
									tbl.append(tr);
									cols = -1;
								}
								cols++;

							}

						}
					})			
			}
			</script>
		</apex:define>
		<apex:define name="body">
			<div class="container" id="mainContainer">
				<div class="pageContent">
					<div id="header">
						<apex:include pageName="Header"/>
						<div id="pageTitle">
							<cms:Panel panelName="PageTitle" panelController="{!controller}" panelheight="50px" panelwidth="200px"/>
						</div>
					</div>
					<div id="pageBody">
						<p>
							<cms:Panel panelName="PageContentArea" panelController="{!controller}"  panelheight="200px" panelwidth="400px" />
						</p>
						<div class="clearfloat"></div>
					</div>

					<!-- end .content --> 
				</div>
			</div>
			<div id="footer_push"></div>
			<div id="footer">
				<apex:include pageName="Footer"/>
			</div>
		</apex:define>
	</apex:composition>
</apex:page>

Then in our page we can add an HTML content area and include

<table id="bioTable">
	<tbody></tbody>
</table>
<script>website.buildUserTable();</script>

So when that page loads it will draw that table and invoke the website.buildUserTable function. That function in turn calls the remoting method in the DetailTemplateController extension that we created. The query runs and returns the user data, which is then used to create the rows of the table that are appended to the #bioTable’s body. It’s a pretty slick approach that seems to work well for me. Your mileage may vary, but at least rest assured you can use your own version of jQuery, and you can use controller extensions, which I wasn’t sure about when I started working with it. Till next time.


Convert a string into a Salesforce Fieldname using regular Expressions

Hey all,

Another quick tip, sample code type of post here (and probably not even ‘GOOD’ sample code, but at least functional). I had a requirement recently where I needed to build a dynamic query from some other values. Those values would represent field names that needed to be queried, but might not match exactly. So if the value the user selected was ‘My Big! FUN! Account (Acme LLC)’, that wouldn’t work in a query as a field name because of all the special chars, spaces, etc. So I needed a bit of logic to clean up strings and attempt to convert them into valid Salesforce field names. I know this isn’t exactly the best approach, but this is within a limited set of values for an internal app, so it’s an understandable trade-off. Anyway, enough excuses, let’s see some code.

//regular expressions to be used for replacing
string specialCharPatterns = '[^\\w]+';
string multiUnderscorePattern = '_+';

string fieldName = 'My Str!ng Th@t doesn\'t represent a valid F!eld   name(!)';

//replace special chars with underscores, and multiple underscores with one
fieldName = fieldName.replaceAll(specialCharPatterns,'_').replaceAll(multiUnderscorePattern,'_');

//remove leading underscores
fieldName = fieldName.left(1) == '_' ? fieldName.substring(1) : fieldName;

//remove trailing underscores
fieldName = fieldName.right(1) == '_' ? fieldName.substring(0,fieldName.length()-1) : fieldName;

//append custom field suffix
fieldName = fieldName + '__c';

string queryString = 'select id, ' + fieldName + ' from Account';

list<sObject> sobjects = database.query(queryString);

if(!sobjects.isEmpty())
{
    string objectValue = (string) sobjects[0].get(fieldName);
}

So there ya go. A simple way to take a string, clean it up into a field name, plug it into a query, and as a bonus, extract the result. Have fun!


Cloudspokes Challenge jQuery Clone and Configure Records

Hey everyone,
Just wrapped up another CloudSpokes contest entry. This one was the clone and configure records (with jQuery) challenge. The idea was to allow a user to copy a record that had many child records attached, then allow the user to easily change which child records were related via a drag and drop interface. I have a bit of jQuery background so I figured I’d give this a whack. The final result I think was pretty good. I’d like to have been able to display more information about the child records, but the plugin I used was a wrapper for a select list, so the only data available was the label. Had I had more time I maybe could have hacked the plugin some to get extra data, or maybe even written my own, but drag and drop is a bit of a complicated thing (though really not too bad with jQuery) so I had to use what I could get in the time available. Anyway, you can see the entry below.

jQuery Clone and Configure Record


Fun with SQL

I recently had to write a webservice that grabs data from our time tracking system (Timeforce) and wraps it in an easily consumable manner for importing into Salesforce. The data had to be grouped by department, with any department that isn’t represented having all its hours dumped into a catch-all category. Each department charges their own rate, so I had to do some math on the fly to find the totals for each department. I also had to make sure there were no nulls; they had to be replaced with zeros. It involves 2 subqueries (the top selects from a select that selects from another select…). The query below is what I finally ended up with through trial, error, and reverse engineering the Timeforce database. They really aren’t very helpful when it comes to trying to understand their data model, so I am pretty proud that I managed to get this deep an understanding of it just through poking around. This is mostly a reminder post for me, so if in the future I’m like ‘how the hell did I do that?’ I can find it. It might help others who work with Timeforce, or just want to gawk at an absurd query. There is a little bit of ColdFusion at the end to apply an optional filter.

SELECT 
    ISNULL(Min(Case when DEPARTMENTNAME = 'Account Managers' then AMOUNT  end), 0.00) as 'Cost_Account_Managers__c',
    ISNULL(Min(Case when DEPARTMENTNAME = 'Data Managers' then AMOUNT  end), 0.00) as 'Cost_Data_Managers__c',
    ISNULL(Min(Case when DEPARTMENTNAME = 'Field Managers' then AMOUNT  end), 0.00) as 'Cost_Field_Managers__c',
    ISNULL(Min(Case when DEPARTMENTNAME = 'Information Technology' then AMOUNT  end), 0.00) as 'Cost_Administration_Information_Techno__c',   
    ISNULL(Min(Case when DEPARTMENTNAME = 'Operations-Office' then AMOUNT  end), 0.00) as 'Cost_Operations_Office__c',
    ISNULL(Min(Case when DEPARTMENTNAME = 'Project Mgmt' then AMOUNT  end), 0.00) as 'Cost_Operations_Office__c',
    ISNULL(Min(Case when DEPARTMENTNAME = 'Recruiting-Coordinators' then AMOUNT  end), 0.00) as 'Cost_Recruiting_Coordinators__c',
    ISNULL(Min(Case when DEPARTMENTNAME = 'Recruiting-Callers' then AMOUNT  end), 0.00) as 'Cost_Recruiting_Callers__c',
    ISNULL(Min(Case when DEPARTMENTNAME = 'Relationship Manager' then AMOUNT end), 0.00) as 'Cost_Relationship_Manager__c',
    ISNULL(Min(Case when DEPARTMENTNAME = 'Sales' then AMOUNT  end), 0.00) as 'Cost_Sales__c',
    ISNULL(Min(Case when DEPARTMENTNAME = 'Finance' then AMOUNT  end), 0.00) as 'Cost_Administration_Finance__c',
    ISNULL(Min(Case when DEPARTMENTNAME = 'Operations' then AMOUNT  end), 0.00) as 'CostOperations_Site__c',
    ISNULL(Min(Case when DEPARTMENTNAME NOT IN ('Operations','Account Managers','Data Managers','Field Managers','Information Technology','Operations-Office','Project Mgmt','Recruiting-Coordinators','Recruiting-Callers','Relationship Manager','Sales','Finance') then AMOUNT end), 0.00) as 'CostOperations_Site__c',
    JOBNAME AS TimeForce_Job__c
    
    FROM (SELECT SUM(Amount) as Amount, 
                 DepartmentName, 
                 JobName
                    FROM (SELECT 
                                Job.jobname, 
                                Job.jobNumber,
                                SUM(total_hr) as TotalHours,  
                                Amount = ROUND(SUM(total_hr) *  Task.BillRate,2),
                                DepartmentName,
                                Task.billrate    
                                            
                                FROM timeCard 
                                INNER JOIN Job 
                                    ON timecard.job_id = Job.job_id  
                                INNER JOIN tblDepartment
                                    ON timeCard.DEPARTMENT_ID = tblDepartment.DEPARTMENT_ID    
                                LEFT OUTER JOIN Task
                                    ON timeCard.task_id = Task.task_id  
                                INNER JOIN empMain 
                                    ON timecard.employee_id = empMain.employee_id
                                
                                Where Job.complete_yn = 0 and 
                                      Task.BillRate > 0 and
                                      Job.jobname != 'None'
                                      
                                       
                                GROUP BY DepartmentName, 
                                         Job.jobNumber,
                                         Job.jobname,
                                         Task.billrate) AS Sub
            <cfif isdefined("arguments.jobList")>
                where jobName in (<cfqueryparam list="yes" value="#arguments.joblist#" cfsqltype="cf_sql_varchar">)
            </cfif>    
            group by JobName,
                     DepartmentName) AS Totals
    group by jobName

Apex – Sorting a map

So this is going to be one of the more Apex-heavy posts. This is a challenge I think many developers have come across, and while what I propose is by no means the most elegant thing ever, it does do the job until (hopefully) Salesforce implements a native map sorting method. So here is the basic approach:

1) Populate the map with your information.
2) Create another map whose keys are the values you want to sort by, and whose values are the keys of the first map.
3) Create a list whose values are the keyset of the map created in step 2.
4) Sort that list.
5) Iterate over the sorted list; each element is a key into the map from step 2, which in turn gives you the key into your original map.

Sounds pretty complicated eh? It’s not SO bad once you kinda get the hang of it, but there is one gotcha that kind of sucks; we’ll cover that in a minute.

Here is a sample using a simple object called contestEntry. We’ll create a bunch of them in random order, then sort them and loop over the sorted result. You should be able to run this in any org so you can see the principles in action.

        public class contestEntry
        {
            public decimal rank{get;set;}
            public string name{get;set;}
        }
        
        map<string,contestEntry> entries = new map<string,contestEntry>();
        
        contestEntry entry1 = new contestEntry();
        entry1.rank = 5;
        entry1.name = 'Frank';
        entries.put(entry1.name,entry1);
        
        contestEntry entry2 = new contestEntry();
        entry2.rank = 3;
        entry2.name = 'Bob';
        entries.put(entry2.name,entry2);
        
        contestEntry entry3 = new contestEntry();
        entry3.rank = 1;
        entry3.name = 'Jones';
        entries.put(entry3.name,entry3);
        
        contestEntry entry4 = new contestEntry();
        entry4.rank = 4;
        entry4.name = 'Sandy';
        entries.put(entry4.name,entry4);
        
        contestEntry entry5 = new contestEntry();
        entry5.rank = 2;
        entry5.name = 'Felix';
        entries.put(entry5.name,entry5);
        
        //oh no, these entries are all out of order. 
        system.debug(entries) ;
        
        //lets get sorting these guys. First we'll need a map keyed by rank, holding the name
        //of the contestEntry that rank belongs to
        map<decimal,string> rankToNameMap = new map<decimal,string>();
        for(contestEntry entry : entries.values())
        {
            rankToNameMap.put(entry.rank,entry.name);
        }
        //now lets put those ranks in a list
        list<decimal> ranksList = new list<decimal>();
        ranksList.addAll(rankToNameMap.keySet());
    
        //now sort them
        ranksList.sort();
        
        //ok, so now we have the ranks in order, we need to figure out who had that rank
        for(decimal rank : ranksList)
        {
            String thisEntryName = rankToNameMap.get(rank);    
            contestEntry thisEntry = entries.get(thisEntryName);
            system.debug(thisEntry);
        }

Walking through it, first we just make a sample object to use here. Normally this would be whatever you are actually trying to sort, but for the sake of simplicity I just created an object called contestEntry. It just holds a name and a rank.

So then I make a map of those things, keyed by the person’s name and containing the contestEntry object. You might in real life have a map of sObjects keyed by their Id and containing the sObject itself. So then I make a bunch of those and add them to the map in random order so my sorting actually has some work to do 😛

The next thing is creating a map keyed by the value I want to sort the list by. The value is the key of the first map. So in real life this might be a dollar amount on an opportunity, and then the opportunity Id, if the original map was a map of opportunities keyed by their Id.

We loop over all the objects in the original map and add them to our temporary sorting map, again keyed by the value to sort by, with the key from the original map as the value.

Then we create a list of the type of the key of the temporary sorting map. Your key was a decimal? Then your list is of decimals as well, etc. Then add all the keys from the sorting map to the list you just made.

Sort the list.

Iterate over the sorted list; each entry in this list will be a key you can use to get the entry from the sorting map, which will contain the key to the original map. You now have a reference to your original map value by whatever value you sorted on.
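
To make that concrete, here is the same pattern applied to sObjects: a map of Opportunities keyed by Id, sorted by their Amount. This is just a sketch using standard objects and fields (and note it has the same duplicate-value gotcha I’m about to describe):

        map<Id,Opportunity> opps = new map<Id,Opportunity>([select Id, Name, Amount from Opportunity where Amount != null limit 100]);

        //temporary sorting map: key is the value to sort by, value is the key of the original map
        map<decimal,Id> amountToIdMap = new map<decimal,Id>();
        for(Opportunity opp : opps.values())
        {
            amountToIdMap.put(opp.Amount, opp.Id);
        }

        //dump the sort-by values into a list and sort it
        list<decimal> amounts = new list<decimal>();
        amounts.addAll(amountToIdMap.keySet());
        amounts.sort();

        //walk the sorted list and pull each opportunity back out of the original map
        for(decimal amount : amounts)
        {
            Opportunity thisOpp = opps.get(amountToIdMap.get(amount));
            system.debug(thisOpp.Name + ' - ' + thisOpp.Amount);
        }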

There is however one gotcha with this approach. If you have duplicates in the value you are sorting by, you are going to end up with collisions in your sorting map, which means values overwriting each other and ultimately values missing from your final iteration. This happens because, say in my example, I have two people with rank 1. The first person comes through, their rank is 1 and their name is Sandy, so the sorting map has 1=Sandy. Then another person comes through; they also have rank 1 and their name is Jones. Now the map has 1=Jones. Sandy just fell out of the list. How do you deal with this? The best hack-ish fix I could come up with is to see if the key you are attempting to write to already exists, and if so, write to a slightly higher value key instead. Basically replace

        for(contestEntry entry : entries.values())
        {
            rankToNameMap.put(entry.rank,entry.name);
        }

with

        for(contestEntry entry : entries.values())
        {
                decimal rank = entry.rank;
                while(rankToNameMap.containsKey(rank))
                {
                    system.debug('------ INCREMENTING Rank TOTAL FOR ' + entry.name);
                    rank += 0.001;
                }

                rankToNameMap.put(rank,entry.name);
        }

This won’t affect the value displayed when you retrieve and display the object; it just changes the sort order. I haven’t tested this thoroughly though, so don’t rely on it extensively.

Anyway, there you have it Apex fans. A method to reliably (if not quickly or efficiently) sort Apex maps by just about anything. Once you understand the concepts it’s fairly easy to expand the sort to your heart’s desire.


Organizing Your Salesforce Apex Codebase

Hey all,
So this is a topic that is constantly on my mind. I am a little bit OCD and I am always looking for the cleanest, fastest, overall best way to store and organize things. It’s borderline obsessive.

When I first started coding in Apex, I had no idea how you were supposed to structure things. I wrote a trigger and a class for every single task I wanted to do (if I wrote a class at all; often I would just jam everything in a trigger). It was an awful mess, but I was also a total n00b so I didn’t know any better. I couldn’t really find any suggestions on how to organize your code base, or what best practices were for triggers, classes and the ever popular unit tests.

I eventually started to realize you can in fact have multiple actions in one trigger, have multiple methods in your classes, reference classes from each other, and some of those other things that seem so very obvious now. So after about 3 or 4 years, here are some suggestions for you newbies, or those who still don’t really know how they want to set things up. This is just my approach and it’s totally ‘home brewed’. I have no idea if these ideas are best practice or not, but they seem to work really well for me.

1) One trigger file for every object type that needs Apex logic

When you first start, you may think it is good to be super modular and have a separate trigger file for every action (I thought that was a pretty slick idea at first too). And honestly, functionally it’s fine. However, it does lead to a very cluttered code base, it’s hard to turn individual triggers on and off, and it’s hard to see all the logic attached to an object. That is why I suggest making ONE trigger that has some if statements controlling which logic gets fired when. Here, I’ll show you what I mean. We have an object in our org called a respondent (they basically power our whole org, and there is a hell of a lot of logic attached to them).

trigger RespondentTriggers on Respondent__c (after delete, after insert, after undelete, 
after update, before delete, before insert, before update) 
{
    
    Respondent__c[] respondents = Trigger.new;
    Respondent__c[] oldRespondents = Trigger.old;
    //This set of triggers is responsible for all actions related to respondent__c objects.
    
    //Before execution Triggers
    if(Trigger.isBefore)
    {
        if(Trigger.isInsert)
        {
            respondentClasses.populateRespondentData(respondents);
            respondentClasses.checkDupeHouseholdContacts(respondents,respondents);
            
        }
        else if(Trigger.isUpdate)
        {
            respondentClasses.populateRespondentData(respondents);
            respondentClasses.checkDupeHouseholdContacts(respondents,oldRespondents);
            
        }
    
        else if(Trigger.isDelete)
        {
            
        }
    
        else if(Trigger.isUnDelete)
        {
            
        }    
    }

    //After execution Triggers
    else if(Trigger.isAfter)
    {
        if(Trigger.isInsert)
        {
            respondentClasses.checkChildCampaignSpotsAvailable(respondents);
            respondentClasses.createPayments(respondents);
            respondentClasses.updateCounters(respondents);
            respondentClasses.flagRecentlyTested(respondents);
            respondentClasses.updateCampaignMembers(respondents);
            respondentClasses.setFirstRecruitDate(respondents);
            
        }
        else if(Trigger.isUpdate)
        {
            respondentClasses.updateCounters(respondents);
            respondentClasses.updateCampaignMembers(respondents);
            respondentClasses.flagRecentlyTested(respondents);
            respondentClasses.setFirstRecruitDate(respondents);
            respondentClasses.updateRelatedPayment(respondents);
            respondentClasses.clearPastParticipationOnCancel(respondents);
            
        }
    
        else if(Trigger.isDelete)
        {
            respondentClasses.updateCounters(oldRespondents);
        }
    
        else if(Trigger.isUnDelete)
        {
            respondentClasses.updateCounters(respondents);
        }    
    }            
}

Here you can easily see ALL the logic attached to my object (thanks in part to descriptive method names). You can also see it is very easy to control the order of execution of my triggers (which I have no idea how to do if you have separate trigger files), and you can easily turn off chunks by commenting out one line. It makes for a very clean setup in my opinion.

2) Group all logic for objects in one class if possible

Just like how I want all my trigger logic for each object in its own file, it is nice to keep all my methods for an object in its own class file. I know sometimes methods will be shared between objects (in that case I recommend trying to abstract them and put them in their own utilities class), but in general you can keep things pretty straightforward if you group similar methods into one class. Some methods will touch multiple types of objects; in that case I generally put the method in the class for the object which triggers the method to fire. For example, if I have a method that updates accounts and contacts, you might not be sure where to put it, but if you are updating all the contacts as a result of an account update, put that method in your account methods class.

3) Use standardized naming

As programmers we all know the importance of descriptive variable and method names. However beyond just description I think it is important to be standardized. I want to look at a method or class and be able to know what it does, and what I can expect it to contain by looking at the name. For me, my classes fall into two categories generally.

A) Classes that contain logic for triggers
B) Classes that contain logic for custom visualforce pages

I have taken to using the simple naming convention of either [object]Classes or [pageName]Controller. For example, my class with methods for my account triggers is called accountClasses, while my class with methods for my online scheduling software is called schedulingController. It’s maybe not perfect, but it’s helpful and it looks nice when I look at the list of files in Eclipse 😛

4) Create a utilities class

There are some methods that are handy to have all over the place. Don’t recreate them in every class; simply create a single utilities class and reference its methods when you need them. For example, my utilities file has methods for reading an XML attribute from an XML string, breaking a serialized string (sent from an HTML form, serialized with jQuery) into a map, generating a random string of a given length, converting an sObject into JSON (though this won’t be needed after Winter ’12), and doing simple encryption/decryption. These methods don’t really belong to any particular object; they are just helpers that are handy to have.
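
As a taste, here is roughly what one of those helpers might look like. This is a from-memory sketch of the random string generator, not the exact code from my file, so treat the names and approach as illustrative:

public class utilities
{
    //returns a random alphanumeric string of the requested length
    public static string generateRandomString(integer length)
    {
        string chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
        string result = '';
        while(result.length() < length)
        {
            //math.random() is always >= 0 and < 1, so the index stays in bounds
            integer index = (integer) math.floor(math.random() * chars.length());
            result += chars.substring(index, index + 1);
        }
        return result;
    }
}

Then anywhere I need one it’s just a one-liner, like string token = utilities.generateRandomString(10);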

5) Put your unit tests in the class they are testing

Unit tests have been a sticking point for me since day one. At first I didn’t understand them, then I didn’t find them useful, then I couldn’t figure out how I wanted to organize them, and now I just… hate them. Anyway, I went through a lot of refactoring before I came up with a way of doing unit tests I liked. Like most people I started out putting the unit tests directly in the class, querying for the data I needed, or maybe creating it right there, whatever. Then I realized, wait, I am creating a ton of the same testing data in every stupid class. What if I made one class, broke each class I am testing into its own method, and just shared the data among the tests? Then I don’t need to make so much testing data and all my tests are compacted into one file. Genius! Or so I thought. Turns out this is an awful idea. It is the opposite of modular and can make it very hard to deploy code if your sandbox and prod get too badly out of sync (too many schema or code changes). Don’t do this.

Instead, do what I am in the process of migrating to. Create a class that only creates test data. Make a method in it for creating some of each kind of object you might want. Then put your testing code in each class, and simply call out to your data generating class to make the data you need for testing.

For example, let’s say I need an account to manipulate for my unit test. In my testDataGenerator class I have this method.

    
global class testDataGenerator 
{
    private static Account testAccount; 

    public static Account createTestAccount()   
    {
        //only build and insert the account the first time it is asked for
        if(testAccount == null)
        {
            testAccount = new Account(name='My Test Account', Tax_ID__c='99-9999999');
            insert testAccount;
        }
        //every call after the first just returns the same cached account
        return testAccount;
    }
}

It will make account on it’s first invocation, and simply return the existing account on any subsequent invocations, making it faster. Now if I need an account in my unit test I can just say

Account thisTestAccount = testDataGenerator.createTestAccount();

Boom, now I have an account I can use without having to write a ton of code, and it keeps my actual unit test clean. Of course this also means that since all my unit tests are using this class I only have to update the testDataGenerator if a new field becomes required on the account object, or some other such change is needed.
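
So a test method using this pattern ends up looking something like this (just a sketch; the class under test and the method names here are made up):

@isTest
private class accountClassesTests
{
    static testMethod void testSomeAccountLogic()
    {
        //grab a ready-made account instead of building one by hand
        Account thisTestAccount = testDataGenerator.createTestAccount();

        //...invoke the logic under test against thisTestAccount, then assert on the results
        system.assertNotEquals(null, thisTestAccount.Id);
    }
}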

Anyway, those are just some things I learned from the school of hard knocks. I hope this helps some of you devs out there get your code organized. Till next time!


You know, I think I might hate technology

Something has been brewing recently. Some kind of change in my consciousness, and I think I finally know what it is. I hate technology, or more, I hate what it has become, and for one overarching reason. Everything is too damn complicated.

I’ve been working with computers since I could reach the keyboard. I’ve modified games, done 3D modeling, programmed websites, hacked servers, got my A+ at 16 and my MCSA at 18. I worked as a programmer when others had jobs flipping burgers in high school. I’m number one on the CloudSpokes leaderboards, am sought after for technology consulting, and have even owned my own technical support company. Point being, I’m not an idiot (at least when it comes to computers). Just trying to build some cred here for when you inevitably want to call me dumb in about a minute.

So, as I was saying, recently I’ve felt a little frustrated, like I’m just not getting things. oAuth, federation, apps, Heroku, half the vendors at Dreamforce, a dozen different prominent languages; the list goes on and on, and the same nagging question always hangs in the back of my mind: Why? Why does it seem like half of the things I hear about are answers to questions nobody asked? Solutions to problems that shouldn’t even be problems? Why are there so many steps to everything, many of which are obtuse and complicated enough to make me want to give up? Here, for example, let’s check out a recent coding contest I’m looking into.

http://www.cloudspokes.com/challenge_detail.html?contestID=295

Basic premise being: watch a user’s google calendar, and if there is a meeting on it, set their status to busy. Now this, in the ‘old’ world, would be an easy proposition. Write a small tray application that lives on the user’s system, polls their calendar every 5 minutes or so (using some API), and then updates their status (using some other API). This is something you could probably bang out in an hour or so (not counting reading a few API docs). Now let’s compare to the ‘new better cloud based’ style of development.

1) Download Python
2) Install google app engine launcher
3) Register for a google apps business account
4) validate your account using a phone number
5) Create your application in google
6) Attempt to write actual application
7) Include oAuth
8) Include single sign on so app can be in marketplace
9) Create domain name
10) Deploy app to domain name
11) Pray you don’t violate some quota or rack up a huge bill
*Steps 3 and 4 done using an intensely cumbersome interface that took me an hour or two to actually accomplish anything with.

Notice anything odd here? That’s right, only one step of the 11 here is actually, you know… writing an application. It’s also arguably one of the easiest. The rest ranges from tedious to herculean. I understand the benefits of cloud development, I really do, but there has to be a better way. Seriously, the above process is bad enough that I am likely going to abandon the cause, not because the coding is too technically difficult but because of all the crap around trying to make the application deployable.

It’s not just google either (though they do seem to be one of the worst offenders). There is too much technology out there that exists just because it was a cool idea. Us nerds love to develop stuff, it’s what we do, but much of it seems intent on creating as many problems as it solves, or replacing one complex process with another. The tools aren’t making life easier, they are just making it more abstract. It’s to the point that half the time I don’t even understand what it is I am technically doing when I follow a guide, I’m just following the steps and praying that it works. This can all be summarized really in one statement.

“It used to be when I heard about a new technology or service I was excited. Now, I am just depressed.”

Do you know why? Because it means that it’s another temporary solution to a minor problem that I am going to be expected to learn or be considered outdated. It means that my life likely just got one layer more complicated as I try to grasp what it is this new technology even does, and then struggle to implement it because it’s the newest fad. I’m 23 and I’m completely exhausted. Technology used to be fun, now it’s a chore.

Another example, let’s talk mobile. So it used to be you could develop one website that had a fairly flexible style and some reliable javascript and you’d be fine on all the major browsers (well of course excluding IE). You’d write up a site in your favorite languages (for me it was always ColdFusion and the standard front end languages) and rent some server space somewhere and you were done. Now you have to

1) Find a cloud provider
2) Learn whatever language it is they support (it’s always some language I don’t know too, which sucks for me)
3) Learn about their quotas and rates, what calls are and are not allowed.
4) Develop the website once for regular browsers.
5) Develop the website again for mobile
– Figure out how to access mobile device features
– Make sure it works on every size screen imaginable
6) integrate with every fucking site on the planet (if you aren’t linked to twitter and facebook and Flickr and every other tool on the planet you might as well kill yourself now because it’s completely worthless)
7) oAuth, can’t have enough oAuth. oAuth all the things!
8) Make an app for your website, make a tile, put it in the app store, put it in the market put it everywhere.

It’s just too much. I can’t keep up. By the time I get all that stuff working, those fads will be long gone and it will be the next thing. Seriously, the amount of stuff you seem to have to know and keep track of to be a developer worth anything borders on insanity. I mean, I work with this stuff 8 hours a day, generally read about it at home, and am fairly immersed in it, and I just can’t keep up. You are made to feel dumb if you don’t know everything about everything, but it’s just so overwhelming. What is most aggravating is that most of this shit doesn’t even really add any real value. Most of those extra steps don’t give any extra features to your application, or make the user experience better. It’s all just for cost and reliability, which as a developer I hardly give a shit about. It feels like I’m taking on finance’s job of cutting costs and the sysadmin’s job of developing reliable systems on top of my regular job of writing software. Sure those guys are happy now, but I got the butt end of the stick.

I’m tired of feeling behind. I’m tired of feeling like everyone else ‘gets it’ and I don’t. I’m tired of fad technology. I’m tired of simplicity being a cuss word. I’m tired of having to feel like all my software is just glue holding things together.

I could go on and on about useless gadgets, phone systems, firewalls, VPNs, etc but really… I just want to go outside.


Woot, happy work write up

I just got a bit of recognition from work. I’m pretty happy about it.

This week, we are going outside the Recruiting Department to give a special MVP award to Dan Llewellyn. Dan is responsible for so many of the programs and processes we rely on, and he has a lot of pressure on him from all departments to make sure these processes and programs run efficiently and properly. Since there is only one Dan, we sometimes have to wait for things we know are important, realizing that he cannot do everything at once.

Yesterday, our project manager went to Dan with some issues callers have reported, regarding the call lists. Dan set aside other projects he was working on, and spent four hours working with her to make these lists function better for the callers. As a result, we have several new features that will make the callers’ processes more efficient. These new call list features include:

• Inline editing – this allows callers to edit contact information without having to open another window.
• Progress bar – allows callers to see what is happening when they load and save
• An account feature – callers can click on the account name right from their list and it pulls up the account detail information right in their list. They can scroll down to see all the contacts in the household. This will be very helpful when working kid’s studies. Callers will not have to open up another screen to see this data.

Dan went well beyond what we were asking for, to provide value-added services for our calling team. We know his services and skills are in high demand throughout the company, and we appreciate him for taking the time to resolve some of our issues and add some nice features for us.

Thanks Dan!


ಠ_ಠ

Here’s looking at you.


Building the Check In Application Part 1

So I am undertaking my largest Visualforce/Apex project ever. I need to migrate a very complicated piece of software into the cloud. At its core, it is an application that acts as a sign-in sheet. We schedule people to come to events we organize, they RSVP beforehand, and this application tracks the actual response rate, and in turn creates payment data. It is powered by one type of object, but creates another, which is one of the reasons it is a bit tricky. At its core though, it’s really just a big pageBlockTable with some counters and things. The other challenges I see are as follows.

1) Dynamic column names
It isn’t the same application every time. There are 10 columns that MUST be renamed based on other conditions (the selected event). So when a user selects a new event, the names of the columns must change as well. Never had to deal with that before.

2) Filter as you type system
I need to allow my users to quickly locate an incoming person in the large list of people. A filter as you type system makes the most sense for this. Problem is, I don’t know exactly how to make one. I’ve done it in other languages, but never Apex. The hardest part is I don’t know how to filter against a query that is already in the heap, as opposed to just running a new SOQL statement based on the filter criteria (which would be insanely inefficient in a filter as you type system). I take a stab at one idea for this in the sketch after this list.

3) Running Tallies
As my users use the application, there are various counters that must be continually updated. If a person represents a certain demographic and checks in, that counter needs to be updated. Our events are only allowed to have so many of certain categories of people, so these tallies need to be accurate and fast. They also don’t exist anywhere besides in memory, so as soon as a record is changed they need to be recalculated. Probably not super hard, but again, it’s something I haven’t done on this platform before.

4) Column Sorting
The application needs to allow the users to sort the data in the table however they see fit. Of course this functionality does not exist natively. I know there are articles on how to add it, but they look a bit tricky. I’ll figure it out I’m sure, but it’s just one more bridge to cross.

5) Cross object creation/preventing dupes
This is maybe one of the hardest things to deal with. The application is powered by an object called respondents. These are people who are SUPPOSED to show up to the event (they told us they would). We need to track which of these people actually show up. For every one that does, we need to create a different kind of record called a payment. One payment per person, no exceptions. Even no-shows get a payment (just a placeholder one though). Since I am taking one kind of data and creating another, not just saving the same data back, the risk of duplicates is high. I’m going to need to be careful to ensure I don’t mess that up. My first thought is, as payments are created, have a lookup field on the respondent object that is a lookup to a payment. When the payment is created, populate that field on the associated respondent. Then in the future I can just see if that field is null or populated. If it is populated, I update the associated payment; if it is null, I create a new one. At least in theory I think that should fly (there’s a rough sketch of this after the list).

6) Usability
Of course, as always, trying to get the users all the features they need and not having it turn into a steaming pile of buttons and links is challenging. Going to have to work diligently to create a user interface that doesn’t bite, and since I’m a programmer, well that is going to be hard.
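
Since I already have half-baked ideas for a couple of these (numbers 2 and 5 above), here are some quick sketches. Every name in them is a placeholder I made up for illustration, nothing final. First, for the filter as you type problem, my current thinking is to query once, cache the full list in the controller, and filter it in memory instead of re-querying:

public with sharing class checkInController
{
    //the full result set, queried once when the page loads and cached
    public list<Respondent__c> allRespondents {get; set;}
    //the subset currently shown in the pageBlockTable
    public list<Respondent__c> visibleRespondents {get; set;}
    //bound to the filter text box on the page
    public string filterText {get; set;}

    //called (via an actionFunction, probably) as the user types; no SOQL involved
    public void applyFilter()
    {
        visibleRespondents = new list<Respondent__c>();
        for(Respondent__c resp : allRespondents)
        {
            if(filterText == null || filterText == '' || resp.Name.toLowerCase().contains(filterText.toLowerCase()))
            {
                visibleRespondents.add(resp);
            }
        }
    }
}

And for the duplicate payments problem, a rough sketch of the lookup field idea. Here Payment__c, its Respondent__c field, and the Payment__c lookup on the respondent are all guesses at schema I haven’t actually built yet:

//placeholder query standing in for whatever records are being processed
list<Respondent__c> checkedInRespondents = [select Id, Payment__c from Respondent__c limit 50];

list<Payment__c> newPayments = new list<Payment__c>();
for(Respondent__c resp : checkedInRespondents)
{
    //only respondents whose payment lookup is empty get a new payment;
    //everyone else would have their existing payment updated instead
    if(resp.Payment__c == null)
    {
        newPayments.add(new Payment__c(Respondent__c = resp.Id));
    }
}
insert newPayments;

//stamp the new payment ids back onto their respondents so the link exists next time
map<Id,Id> respondentToPayment = new map<Id,Id>();
for(Payment__c pay : newPayments)
{
    respondentToPayment.put(pay.Respondent__c, pay.Id);
}
list<Respondent__c> respondentsToUpdate = new list<Respondent__c>();
for(Respondent__c resp : checkedInRespondents)
{
    if(respondentToPayment.containsKey(resp.Id))
    {
        resp.Payment__c = respondentToPayment.get(resp.Id);
        respondentsToUpdate.add(resp);
    }
}
update respondentsToUpdate;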

So anyway, I just wanted to start posting about this project. As I tackle each of these challenges I’ll update and talk about how it went, and what I did to resolve it (if I did). If you have any tips or tricks for me, feel free to drop me a line in the comments.


Simple List Editor VisualForce Page

Hey everyone. So I just finished a real simple Visualforce application. Basically it’s a list view editor: it gets a large list of data that can be edited inline, and it saves as you work. It can also be easily modified to refine the query that provides the data based on URL parameters. By itself it isn’t super useful, but it’s a good building block for other projects. This particular instance was for our home callers. They needed an easy way to see everyone they were supposed to call, and needed a way to track the outcomes of those calls. A report wouldn’t do the trick since you can’t modify data, and clicking from record to record was too slow. Campaign members get assigned a caller, so I just have the query refine the results based on caller (and campaign if desired) and display the results for editing. So without further ado, here ya go.

Visualforce Page

<apex:page controller="callListController" action="{!getData}" sidebar="false" >
    <apex:sectionHeader title="My Call List"></apex:sectionHeader>
    <apex:form >
        <apex:pageBlock title="">

          <!-- To show page level messages -->
          <apex:pageMessages ></apex:pageMessages>        
            <apex:actionFunction action="{!UpdateRecords}" name="updateRecords" rerender="pageBlock" status="status"></apex:actionFunction>
            
            <apex:pageBlockTable value="{!CampaignMembers}" var="cm">
                <apex:column headerValue="PID">
                    <apex:outputLink value="https://na2.salesforce.com/{!cm.contactid}" target="_blank">{!cm.PID__c}</apex:outputLink>
                </apex:column>

               <apex:column value="{!cm.Contact.Account.name}"/>
               
               <apex:column value="{!cm.Contact.name}"/>
               <apex:column value="{!cm.Contact.phone}"/>
                <apex:column headerValue="Notes">
                    <apex:inputField value="{!cm.Notes__c}"  onchange="updateRecords();" />
                </apex:column>    
                
                <apex:column headerValue="Status">
                    <apex:inputField value="{!cm.Status}"  onchange="updateRecords();" />
                </apex:column>
                <apex:column headerValue="Assign To Study">
                    <apex:outputText escape="false" value="{!cm.Add_To_Study__c}">
                    </apex:outputText>
                </apex:column>    
                
                <apex:column headerValue="Study">
                    <apex:outputLink value="https://na2.salesforce.com/{!cm.campaignid}" target="_blank">{!cm.Campaign.name}</apex:outputLink>
                </apex:column>                  
            </apex:pageBlockTable>
        </apex:pageBlock>
        <apex:actionStatus startText="Saving..." id="status"/>
    </apex:form>    
</apex:page>

Controller

public class callListController 
{
    private List<CampaignMember> CampaignMembers;

    public List<CampaignMember> getCampaignMembers() 
    {
       return CampaignMembers;
       
    }        
    public void getData()
    {
            Map<string,string> params = ApexPages.currentPage().getParameters();
            Id campaignId = params.get('campaignId');
            Id userId = params.get('userid');
            
            String query = 'Select Status, Add_To_Study__c, PID__c, Notes__c, Contact.Phone, Contact.Account.Name, Contact.Name, Campaign.name, Caller__c From CampaignMember where campaign.Status in (\'Recruiting\',\'Caller Recruit\') and campaign.isActive = true';
            
            if(userId != null)
            {
                query += ' and caller__c = \'' + userId + '\'';
            }
            
            if(campaignId != null)
            {
                query += ' and campaignID = \'' + campaignId + '\'';
            }
            query += ' Order By Contact.Account.name LIMIT 1000';
            
            CampaignMembers = Database.query(query);    
          
    }

    public PageReference UpdateRecords()
    {
        update CampaignMembers;
        return null;
    }    
}

As usual, any questions, hit me up in the comments. Hope this helps!


Nope, still a total failure

So here I was getting all excited after my training, thinking I knew shit about anything. I was wrong, very wrong. This week’s failure du jour was trying to replicate an existing object as a Visualforce page so it could be hosted in Salesforce Sites and we could avoid buying more costly licenses. Sounds easy enough, except for the laundry list of problems.

1) All transactions would be run through the same guest account.
2) You can only do portal based authentication, so we need more licenses anyway.
3) The portal user cannot have read/modify all permissions on standard objects.
4) The object I am trying to replicate has a master-detail relationship with a standard object, so it also cannot have read/modify all permissions.
5) Because the users are unauthenticated, lookup relationship dialog boxes don’t work.
6) The save button leads to the wrong url, and apparently can only be fixed by creating a custom button and overriding the save action. That means writing more code, and more god damned testing classes.
7) An attempted improvement turned out to be a massive failure when I wanted to make real-time recalculation of formula fields work. Essentially, when you are selecting the master-detail relationship, there are some other fields on the object that depend on it. I wanted those to refresh as you chose different values for that relationship, but nope. I have no idea how the actionRegion or actionSupport tags are supposed to work, but obviously not how I figured they would.
8) Working around most of these issues requires more code. That means more controllers, extending functionality, and worst of all writing testing code. I can’t put into words how much I hate test classes.

I’ve been working on this project less than 3 hours, and I have already found 8 crippling failures. Of course the docs are too advanced/obtuse for me to make sense of, and the forums provide next to no help. So once again I’m completely stalled. Everything I try fails and I don’t know how to progress any further. I have nobody to ask, and nowhere to go. Man isn’t programming fun? I’m pretty sure I just suck.


Salesforce going overboard on governor limits

This is more just a venting post than anything truly useful, so if you are looking for some helpful information, tips or tricks, you are just as out of luck as I am.

First let me say, I understand governor limits, and I think they are a decent idea. They force you to code efficiently, and make sure the platform remains stable for everyone. It’s a good practice and I think with some more refining it would be an awesome tool. There is however, a problem.

Salesforce has gone completely over the deep end with their governor limits. It’s to the point where even very efficient code that makes as few database hits as possible still cannot run because of arbitrary restrictions. If you are writing your code as lean as possible, governor limits shouldn’t even really need to be considered. They should be there to stop insane loops, or out of control code. More and more I am finding that even well written, bulk-safe code is butting up against these limits. There are some projects I have simply had to scrap because there was no way to make them run within the rules.

For the amount of money we pay Salesforce, it seems they could scale their architecture to handle some more advanced queries and larger data sets. Hell, Google can stream video to every last internet connection on earth free to the end user, yet Salesforce, backed by Oracle and paid handsomely by subscribers, can’t get users the data they own in a reliable, easy fashion. I’m not talking about moving gigs at a time here either. 100,000 rows shouldn’t be anything. A million rows should still be within reason. I’m just tired of fighting all the time to get anything done. Some companies deal with large volumes of data. It happens. Deal with it. We pay you to deal with it. If it was just me having problems I’d say fine, I suck, I’m a moron, and maybe I am. At the same time, I don’t think I’m the only one tired of fighting, sneaking, tricking and compromising to get work done. Programming is hard enough, ya know? For being an enterprise application, they sure seem to have problems handling enterprise levels of data.

Anyway, that’s my rant. I’m done. Please remember, I do understand governor limits and I am okay with them; I just think they need to be loosened up a little. Maybe let the user set the limits. Say okay, this trigger should never pull more than 500,000 records; if it does, then you can throw an error. Am I crazy? Am I the only one having these issues? If so, I’ll suck it up and admit I suck. But I really don’t think that is the case, this time.


Non-selective query against large object type

So recently one of our triggers started throwing this error.

caused by: System.QueryException: Non-selective query against large object type (more than 100000 rows). Consider an indexed filter or contact salesforce.com about custom indexing.
Even if a field is indexed a filter might still not be selective when:
1. The filter value includes null (for instance binding with a list that contains null) 2. Data skew exists whereby the number of matching rows is very large (for instance, filtering for a particular foreign key value that occurs many times)

Trigger.PaymentDuplicatePreventer: line 23, column 2

It was coming from a trigger that had been running flawlessly for many months, so originally I was confused as to why this was happening. It then became obvious that the type of object I was querying (Payment__c) now had over 100,000 rows, and the query was just too damn big. It’s kind of odd because my actual query would normally only return a few rows, but the dataset it was reading from is too large, or some nonsense like that. After some reading I found the following ideas for fixes.

1) Mark the field as an external identifier; this would force indexing to be enabled for the field. I couldn’t do this though, because it was a formula field.

2) Ask Salesforce to enable custom indexing on the field. I asked them, they said they couldn’t because it was a formula field.

3) Break the large query into smaller queries. No, just no. I’m not doing that. That’s dumb and would cause governor limit problems.

4) Add more where statements to your query. At first this seemed insane as well, as my query was already pretty tight.

Below you can see the code for the trigger I’m talking about. It’s basically straight out of the Salesforce cookbook.

trigger PaymentDuplicatePreventer on Payments__c(before insert)
{
    //Create a map to hold all the payments we have to query against
    Map<String, Payments__c> payMap = new Map<String, Payments__c>();
    
    //Loop over all passed in payments
    for (Payments__c payment : System.Trigger.new)
    {
    
        // As long as this payment has a payment code, and either it's an insert or the code has changed on update
        if ((payment.UniquePaymentCode__c != null) && (System.Trigger.isInsert || (payment.UniquePaymentCode__c != System.Trigger.oldMap.get(payment.Id).UniquePaymentCode__c)))
        {
        
            // Make sure another new payment isn't also a duplicate. If it is, flag it, if not, add it
            if (payMap.containsKey(payment.UniquePaymentCode__c))
            {
                payment.UniquePaymentCode__c.addError('Another new payment has the same unique identifier.');
            }
            
            else
            {
                payMap.put(payment.UniquePaymentCode__c, payment);
            }
        }
    }
    
    /* Using a single database query, find all the payments in
    the database that have the same uniquepaymentcode as any
    of the payments being inserted or updated. */
    if(payMap.size() > 0)
    {
        for (Payments__c payment : [SELECT Id,UniquePaymentCode__c FROM Payments__c WHERE UniquePaymentCode__c IN :payMap.KeySet()])
        {
            try
            {
                Payments__c newPay = payMap.get(payment.UniquePaymentCode__c);
                if(newPay != null)
                {
                    newPay.UniquePaymentCode__c.addError('A payment for this person in this study already exists.');
                }
            }
            catch ( System.DmlException e)
            {
                payment.adderror('Payment' + payment.UniquePaymentCode__c + 'Error ' + e);
            }
        }    
    }
} 

The problem is the query
SELECT Id,UniquePaymentCode__c FROM Payments__c WHERE UniquePaymentCode__c IN :payMap.KeySet()

How could I refine that any further and still find all my duplicates? Well thankfully, in this case the only way duplicates can really happen is within a short time span (someone getting multiple checks for the same event). So really I only need to look at the last few weeks to find any duplicates; anything older isn’t really a duplicate. So I changed my query to

SELECT Id,UniquePaymentCode__c FROM Payments__c WHERE UniquePaymentCode__c IN :payMap.KeySet() and CreatedDate = LAST_90_DAYS

And that did the trick. So the moral of the story, if you have a formula field that you are using as a unique key to prevent duplicates, eventually you are going to hit the problem described above. The only fix I have found is to try and have a condition that reduces the number of records the query has to look at. While it may not always be feasible to use the date, perhaps there are other ways you can narrow the result set while still finding any duplicates. Think outside the box a little, and you should be able to come up with a way to do what you need to do.


jQuery, Apex, and Visualforce – 3 Continuing the madness

So we are back, to continue the calendar project. I know it’s been a while since I wrote, and to be honest my memory is a bit foggy on how exactly I did everything, so please excuse me if some things seem a bit disjointed. Please feel free to ask any questions in the comments.

If I recall, the last thing we were working on was getting the index page set up, then writing the data fetching components. One thing you may notice about my code is that I include a fair amount of the CSS and JavaScript in the document instead of linking to it like you would normally see a well developed page do. The reasons for this are simple: if I wanted them to be external, they would probably have to be a static resource. To modify static resources you need to download them, modify them, and upload them again. That is kind of a pain, and in this project I don't see any of this code being terribly re-usable, so I simplify my life by including more stuff in the index page than you might normally. It also cuts down on HTTP requests, theoretically increasing page load speed. In the end though, you can structure your code in whatever way makes sense to you.

If you downloaded the calendar project from the first post (which I hope you did) and you are looking at my index file, you are probably seeing a lot of JavaScript functions and saying "what the hell is this crap?". I'm not going to explain in great detail, cause you probably don't really care, but here is a basic breakdown.

The first part, in the onReady function, basically says: when the page is loaded, create the calendar. Most of the code there is pretty much straight from the FullCalendar website. You will notice this line as well

events: "http://fpitesters.force.com/FPIPortal/opscalendarjsonresponder",

That one is important. That line tells fullCalendar where to get the JSON data that will populate the calendar itself. This could be a text file, or a dynamic page. Of course since we want it dynamic and I am awesome, that is a dynamic component that generates JSON on the fly. I'll show ya how that is built a little later. Next you run into a block of code for handling the loading. That basically says, "if the data is loading, use the lightbox plugin to make a fancy little loading modal window". Next comes the event renderer. One of the cool features I added to the calendar is that when you click an event, a neat little draggable popup is created that has info about the event. To do this, I create a hidden div for every event and the data contained in it. This function says, for every event you render (all events in the month), create a new div and make it draggable (anything mentioning drs in my code is for the draggable windows). It assigns the draggable window class, creates the toolbar, and appends it to the body.

Next is the event click function. That just says, when a person clicks an event, launch the showMessage function. Lastly there is a call to the updateVisible function. Since our grid supports filtering, I quickly call updateVisible to make sure only things that should be visible on load (controlled by checkboxes) are in fact shown. After that are the showMessage and hideMessage functions. Those are simply the functions that control the showing and hiding of the extra info divs that appear when you click an event. That is where you can customize the content of those windows and make them say whatever you want. When the showMessage function is called, it gets a full copy of the event object, so all event info is accessible using event.datapoint. So anything that was included in the JSON that created the event in the first place is available through the event object.
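To make that wiring a little more concrete, here is a stripped down sketch of the onReady setup using the FullCalendar 1.x jQuery API the project was built on. The events URL, showMessage, and updateVisible come from the actual project; the loading-box helpers and element names are just placeholders I made up for illustration.

$(document).ready(function() {
    $('#calendar').fullCalendar({
        // where fullCalendar fetches its JSON event feed
        events: "http://fpitesters.force.com/FPIPortal/opscalendarjsonresponder",

        // pop a lightbox style loading window while data is being fetched
        loading: function(isLoading) {
            if (isLoading) { showLoadingBox(); } else { hideLoadingBox(); } // placeholder helpers
        },

        // build a hidden draggable info div for each event that gets rendered
        eventRender: function(event, element) {
            $('<div class="drsElement"></div>')
                .attr('id', 'info' + event.id)
                .hide()
                .appendTo('body');
        },

        // clicking an event opens its info window
        eventClick: function(event) {
            showMessage(event);
        }
    });

    // only show what the filter checkboxes allow on first load
    updateVisible();
});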

After that is a rollover image function. I can’t remember why that is there. It might be useless, a failed and forgotten experiment. I don’t know. It looks like it changes the given image to another given image. Just a simple rollover image changer.

The next function is kind of a biggie: it is the filtering engine. Here is where you can probably really see that I suck at programming. I am sure this is fairly inefficient, violates several programming laws, and may in fact have insulted your mother at some point. It does, however, at least work. Based on the values below the calendar and the status checkboxes, it decides what events should be shown. Any events that should not be shown are then hidden using jQuery animations, and matching events are shown the same way.
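Just to illustrate the general idea, here's a simplified sketch of what that boils down to; this is not the actual function from the project, and the .statusFilter checkboxes and data-status attribute are assumptions of mine:

function updateVisible() {
    // collect the statuses whose checkboxes are ticked
    var allowed = {};
    $('.statusFilter:checked').each(function() {
        allowed[$(this).val()] = true;
    });

    // fade matching events in and everything else out
    // (assumes eventRender stamped each element with a data-status attribute)
    $('.fc-event').each(function() {
        if (allowed[$(this).data('status')]) {
            $(this).fadeIn();
        } else {
            $(this).fadeOut();
        }
    });
}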

The last JavaScript function is the jump feature. It lets you enter any date and jumps the calendar straight to it using the fullCalendar API. Nothing fancy here.
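Something along these lines, where #jumpDate is an assumed input holding whatever date the user typed:

function jumpToDate() {
    var target = new Date($('#jumpDate').val());
    if (!isNaN(target.getTime())) {
        // gotoDate is part of the fullCalendar 1.x API
        $('#calendar').fullCalendar('gotoDate', target);
    }
}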

Now we get into the body code for this page. The first part is the loading div, the little lightbox modal window thing that pops up when data is loading. It has a fancy little ajax loading animation and just adds some polish to the app. After that is a noticeDiv; you can see it has some code for the draggable windows. I am not entirely sure why that thing is there, again it may be leftover code. I should probably try removing it. Then there is the div where the calendar actually loads. Below that are my form controls for data filtering, refreshing, and jumping to a different date. And that's it, that is all the code for the main display page. The only other things are the components for fetching and returning the data to the calendar.

Now let's talk about getting the data into the calendar. FullCalendar uses JSON (JavaScript Object Notation). If you aren't familiar with it, take XML, make it a lot simpler, and that's JSON. Just a data interchange format. So the question is, how do we take Salesforce data and turn it into JSON for the calendar? Sounds kinda crazy right? Well actually it isn't too bad. The method I have devised is fairly straightforward: have a web service that actually creates the JSON. It queries the requested Salesforce objects, pulls the requested fields, and converts the data into JSON. Then you have another Visualforce page that invokes that web service, passing the required info, and simply prints the generated JSON on the screen.
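For reference, the feed the calendar expects is just an array of event objects, something along these lines (the values here are made up purely for illustration):

[
    { "id": "a0C000000000001", "title": "Site visit", "start": "2012-03-01", "allDay": true },
    { "id": "a0C000000000002", "title": "Training", "start": "2012-03-07T09:00:00", "end": "2012-03-07T10:30:00", "allDay": false }
]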

So let's start working on our back end component that will actually create the JSON for us. I'll preface this by saying there is in fact a full blown JSON component that you can download and use to translate SF data into JSON. However, in my experience it didn't work quite right. It did not generate valid JSON, didn't know how to handle boolean values, and in general just didn't agree with the way FullCalendar wanted things done. So I use it to do the bulk of the work, but then do some post formatting; you'll see what I mean.

Make a new class, call it getJsonEvents. Copy the code from the download and put it in there. I won't post it here cause it's kinda long and crummy to look at. Again, I'll just kinda walk you through my code roughly and explain how it is working.

First off we import the JSON object and have this weird {get;set;} thing in there; that's an Apex property declaration, and it's what lets the Visualforce page read the result, so it's important. Next create the getEvents method. It doesn't take any parameters directly because all the info it needs will be passed in the URL of the calling page. So we take the URL parameters (the start date and end date) and save them into a map. Oh yeah, forgot to mention: when fullCalendar requests data, it only requests the data for the view you are currently seeing. So it contacts the page you specify in the "events:" attribute and passes two parameters, start and end. Those are the starting date and ending date in unix epoch time format (number of seconds since 1970 or something like that). So you click March in fullCalendar, it contacts your JSON responder page and passes the two timestamps. The receiving page takes those timestamps and passes them into this function we are writing now. Make sense? So anyway, the next few lines are some conversions to get that unix timestamp format turned into a valid Salesforce date we can filter against. Again, my code for that probably isn't terribly efficient (3 lines per conversion with an intermediary variable) but it works.
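In case it helps, here's roughly what that parameter grabbing and conversion looks like; the variable names are mine, not necessarily the ones in the downloadable class:

// grab the unix timestamps fullCalendar appended to the URL
Map<String, String> params = ApexPages.currentPage().getParameters();
Long startEpoch = Long.valueOf(params.get('start'));
Long endEpoch = Long.valueOf(params.get('end'));

// Datetime.newInstance() wants milliseconds, so multiply the seconds by 1000
Datetime startStamp = Datetime.newInstance(startEpoch * 1000);
Datetime endStamp = Datetime.newInstance(endEpoch * 1000);

// finally, dates we can filter a SOQL query against
Date startDate = startStamp.date();
Date endDate = endStamp.date();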

Moving on, we run my records query. This query fetches the data we want from Salesforce, comparing the start and end dates we just created to the start and end dates on the objects. Then we create a new list that we will store the JSON values in, loop over the result set, and begin populating our JSON string. Every line you see that looks like this

cjson.putOpt('"allDay"', new JSONObject.value(c.All_Day__c));

is just adding another key-value pair to the JSON string. Don't let it confuse you, it's just taking a value from the query and sticking it into the JSON string. c is the reference to the row in the query that we are looping over, and JSONObject.value() is just some magical function that puts the data in the string. I dunno, it's a little weird and seems over engineered to me, but I didn't write that component (I did write my own query to JSON parser which I'll probably post later; I like it better personally). So anyway, we populate our JSON string and pack it all up. There are some debugging lines for… helping to debug; you can remove them if you want. There is also some handling for when no records are returned for a given query (no events for the provided timespan), nothing fancy there. And some error handling in case stuff blows up (ugly hackish error handling, but it does the job).
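As an aside, if you're doing this today you can skip the JSONObject component entirely, since Apex now has a native JSON class. Here's a minimal sketch of the same query-to-JSON idea; the Event__c object and its fields are made up for illustration:

public static String getEventsAsJson(Date startDate, Date endDate)
{
    // one map per event; JSON.serialize turns the whole list into the feed
    List<Map<String, Object>> events = new List<Map<String, Object>>();
    for (Event__c c : [SELECT Id, Name, Start_Date__c, End_Date__c, All_Day__c
                       FROM Event__c
                       WHERE Start_Date__c >= :startDate AND End_Date__c <= :endDate])
    {
        events.add(new Map<String, Object>{
            'id'     => c.Id,
            'title'  => c.Name,
            'start'  => c.Start_Date__c,
            'end'    => c.End_Date__c,
            'allDay' => c.All_Day__c
        });
    }
    // quoting, escaping, and booleans are all handled natively
    return JSON.serialize(events);
}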

Then we break into the last function, which does some of the extra parsing I was talking about and actually returns the data. I don't really know why the class is composed of two methods, to be honest. Seems to me it could be just one, but I tried consolidating them before and things went to hell. While I did create this process, I built on a framework I found elsewhere, and the two method style is the way they had it working, so I've stuck with it. Anywho, you'll see some replace statements that make the JSON valid (at least for fullCalendar) and then it simply returns it. That's it, that is the whole back end component. Now you just make another Apex page, call it whatever you want (make sure it is the same name as the events: attribute in your fullCalendar config) and put this code in it.

<apex:page controller="getJsonEvents"  action="{!getEvents}"
contentType="application/x-JavaScript; charset=utf-8" showHeader="false" standardStylesheets="false" sidebar="false">
{!result}
</apex:page>

Now we just have to set up security for these two new things. Under the listing of Apex pages, click security next to your new page and add all the profiles to the enabled profiles area. Probably overkill, but I don't need to hide this app from anyone, so it doesn't bother me; you can refine your security as required. Then go to Sites, find your Salesforce site, and hit Public Access Settings. Make sure your class and Visualforce pages are listed under "enabled Apex class access" and "enabled Visualforce page access" respectively. Also make sure the account your site is using has access to events and any related data. With all that adjusted, you should now be able to access your JSON responder page. Your calendar may even be working at this point as well. Now we just need some extra polish and finishing touches.

So to recap: you now have an Apex page that can be called with a start and end time in unix epoch format, and it will return event data as a JSON string. If it still won't load, it's most likely a security issue; next time we'll dig deeper into Salesforce site security and how to adjust it.

EDIT: New download link http://xerointeractive.com/calendar.zip