Oh my god. It's full of code!

Author Archive

Merge Salesforce Package.xml Files

So I’ve recently been asked to play a larger role in our version control process, which is not my strongest suit, so it’s definitely been a learning experience. Our process at the moment involves creating a feature branch in Bitbucket for each new user story. Folks do whatever work they need to do and then create a change set. From that they generate a zip file that contains all the stuff they worked on and a package.xml file (using the super cool ORGanizer Chrome plugin I just found out about). Once they have that they send it over to me, I get their changes set up as a feature branch, and then create a pull request to get it back into the master branch.

Now I’m not sure if this is the best way to do things or not, but in each new branch we need to append all the contents of the new package.xml into the existing package.xml. Much to my surprise I couldn’t find a quick, clean, easy way to merge two XML files. I tried a few online tools and nothing really seemed to work right, so, me being me, I decided to write something to do it for me. I wasn’t quite sure how to approach this, but then in an instant I realized that, from my post a couple weeks ago, I can convert XML into a JavaScript object easily. Once I do that I can simply merge the objects in memory and build a new file. One small snag I found is that the native JavaScript methods for merging objects (like Object.assign) overwrite any properties of the same name; they don’t smash them together like I was hoping. So with a little bit of elbow grease I managed to write some utility methods for smashing all the data together.

To use this, simply throw your XML files in the packages directory and run ‘runMerge.bat’ (this does require you to have Node.js installed). It will spit out a new package.xml in the root directory that is a merge of all your package.xml files. Either way, hope this helps someone.
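
The gist of it is small enough to sketch inline. This is not the actual tool (grab that from the download link below), just a minimal illustration of the approach, assuming xml2js for the parse/build steps (the same library my address settings script further down uses) and hard-coded file names as placeholders:

const fs = require('fs');
const xml2js = require('xml2js');

// Parse a package.xml into a plain JS object. xml2js represents every
// repeated element as an array, e.g. result.Package.types[0].members[0].
function parsePackage(path, callback) {
    xml2js.parseString(fs.readFileSync(path, 'utf8'), callback);
}

// Union the <types> entries of two parsed packages instead of letting one
// overwrite the other (which is what Object.assign would do).
function mergePackages(pkgA, pkgB) {
    const membersByType = {};
    [pkgA, pkgB].forEach(function (pkg) {
        (pkg.Package.types || []).forEach(function (type) {
            const name = type.name[0];
            membersByType[name] = membersByType[name] || {};
            (type.members || []).forEach(function (m) { membersByType[name][m] = true; });
        });
    });
    return {
        Package: {
            $: { xmlns: 'http://soap.sforce.com/2006/04/metadata' },
            types: Object.keys(membersByType).sort().map(function (name) {
                return { members: Object.keys(membersByType[name]).sort(), name: [name] };
            }),
            version: pkgA.Package.version // keep the first file's API version
        }
    };
}

// Hard-coded file names for illustration; the real tool scans the packages directory.
parsePackage('packages/package1.xml', function (errA, pkgA) {
    parsePackage('packages/package2.xml', function (errB, pkgB) {
        if (errA || errB) throw errA || errB;
        const xml = new xml2js.Builder().buildObject(mergePackages(pkgA, pkgB));
        fs.writeFileSync('package.xml', xml);
    });
});

The actual download handles everything in the packages directory, including sub-directories, sorting, and version forcing per the updates below.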

UPDATE (5/19): Okay after squashing a few bugs I now proudly release a version of package merge that actually like…. works (I hope). Famous last words I know.
UPDATE (5/20): Now supports automatic sorting of the package members, having XML files in sub-directories in the packages folder, forcing a package version, and merging all data into a master branch package file for continual cumulative add ons.
Download Package Merge Here!


Mass Updating Salesforce Country and State Picklist Integration Values

So it’s Friday afternoon, about 4:00 pm, and I’m getting ready to wrap it up for the day. Just as I’m about to get up I hear the dreaded ping of my work instant messenger indicating I’ve been tagged. So of course I see what’s up: it’s a coworker wondering if there is any way I might be able to help with what otherwise will be an insanely laborious chore. They needed to change the ‘integration value’ on all the states in the United States from the full state name to just the state code (e.g. Minnesota -> MN) in the State and Country Picklist. Doing this manually would take forever, and moreover it had to be done in 4 different orgs. I told him I’d see what I could do over the weekend.

So my first thought was of course to see if I could do it in Apex: just find the table that contains the data, make a quick script, and boom, done. Of course, it’s Salesforce, so it’s never that easy. The state and country codes are stored in the metadata, and there isn’t really a great way to modify that directly in Apex (that I know of, without using that metadata wrapper class, but I didn’t want to have to install a package and learn a whole new API for this one simple task). I fooled around with a few different ideas in Apex but after a while it just didn’t seem doable. I couldn’t find any way to update the metadata even though I could fetch it. After digging around a bit I decided the best way was probably to simply download the metadata, modify it, and push it back. So first I had to actually get the metadata file. At first I was stuck because AddressSettings didn’t appear in the list of metadata objects in VS Code (I have a package.xml builder that lets me just select whatever I want from a list and it builds the file for me) and I didn’t know how to build a package.xml file that would get it. I found a handy Stack Overflow post that gave me the command

sfdx force:source:retrieve -m Settings:Address

That command worked to pull the data. The same post also showed the package.xml file that can be used to either pull or push that metadata (with this you don’t even need the above command; you can just pull it directly by using ‘retrieve source in manifest from org’ in VS Code).

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <version>46.0</version>
    <types>
        <members>Address</members>
        <name>Settings</name>
    </types>
</Package>

Now that I had the data, the only issue was that there wasn’t an easy way to just do a find and replace or something to update the file. The value for each state (only in the United States) had to be copied from the state code field into the integration value field. So I decided to whip up a quick Node.js project to do it. You can download it here (it comes with the original and fixed Address.settings-meta.xml files as well if you just want to grab those). It’s a pretty simple script, but it does require xml2js because parsing XML is a pain otherwise.

const fs = require('fs');
var xml2js = require('xml2js');
var parseString = xml2js.parseString;

try 
{
    const data = fs.readFileSync('Address.settings-meta.xml', 'utf8')

    parseString(data, function (err, result) {
        if (err) throw err;

        //the country entries live under AddressSettings > countriesAndStates > countries
        var root = result.AddressSettings.countriesAndStates[0].countries;
        //console.log(root);

        for(var i = 0; i < root.length; i++)
        {
            
            var countryName = root[i].integrationValue[0];
            if(countryName == 'United States')
            {
                console.log('Found US!');
        
                for(var j = 0; j < root[i].states.length; j++)
                {
                    console.log('Changing ' + root[i].states[j].integrationValue[0] + ' to ' + root[i].states[j].isoCode[0]);
                    root[i].states[j].integrationValue[0] = root[i].states[j].isoCode[0];
                }
            }
        }
        
        var builder = new xml2js.Builder();
        var xml = builder.buildObject(result);
    
        fs.writeFile("Address.settings-meta-fixed.xml", xml, function(err) {
            if(err) {
                return console.log(err);
            }
            console.log("The file was saved!");
        });     
    });
    
} 
catch (err) 
{
    console.error(err)
}

Output of my script. Always satisfying when stuff works.

With my fixed address settings file, the last step was “simply” to push it back into Salesforce. I’ll be honest, I haven’t used SFDX much, and this last step actually took longer than it should have. I couldn’t decide if I should be using force:source:deploy or force:mdapi:deploy. Seeing as I had to do this in a production org, originally I thought I had to use mdapi, but a new update made that no longer the case. mdapi wanted me to build a zip file or something and I got frustrated trying to figure it out. I’m just trying to push one damn file; why should I need to be building manifests and making zip files and whatever?! So after some trial and error with force:source:deploy I found that it could indeed push to prod and would take just a package.xml as its input. Initially it complained about not running any tests, so I told it to only run local tests. That also failed because some other code in the org is throwing errors. As a workaround I simply provided it a specific test to run (ChangePasswordController, which is in like every org) and that worked. The final command being

sfdx force:source:deploy -x manifest/package.xml -w 10 -l RunSpecifiedTests --runtests ChangePasswordController


Hooray it finally worked!

And voila! The fixed metadata was pushed into the org and I spared my coworker days of horrific manual data entry. I know in the end this all ended up being a fairly simple process, but it took me much longer than I initially figured, mostly due to not knowing the processes involved or how to reference the data I wanted, so I figured maybe this would save someone some time. Till next time.


Apex list all fields on page layout

Hey everyone, I know it’s been a while but I am in fact still alive. Anyway, I’ve got something new for ya. I’ve been asked to identify which fields are actually being used by evaluating page layouts. Fields that are actually in use will need to have their values exported from a legacy SF instance and imported into a new one. So instead of manually going through every layout, logging every field, and finding its data type, which would be crazy slow and error prone, I wrote a nifty little script. You simply feed it page layout names and it will get all the fields on them, describe them, create suggested mappings and transformations, then email you the results with each layout as a separate CSV attachment so you can use it as a starting point for an Excel mapping document. It can also describe the picklist values for every field on any object described. Hopefully your sandbox is the same as your production so you can just save this in there and run it without having to deploy to prod. Remember to turn on email deliverability when running from a sandbox! This is still pretty new, as in my first time using it was immediately after building it, so if you find errors with its output or have any suggestions I’m definitely open to hearing about them in the comments.

UPDATE: After adding some more features it became too large to be an execute anonymous script, so it’s now been converted to a class. Save this into a new Apex class, then from execute anonymous call LayoutDescriber.sendLayoutInfo() to run with default settings, or pass in a list of page layout names and whether you want to get picklist values or not. If you want to run it as a script you can remove the picklist value builder (lines 156-210) and the check for valid page layout names (lines 26-41). That should get it small enough to run.

public class LayoutDescriber
{
    /**
    *@Description gets all the fields for the provided page layouts and emails the current user a csv document for each. It
                  also gets related field data and provides suggested mapping configuration for import. Optionally can get picklist values for objects.
    *@Param pageLayoutNames a list of page layout names. Format is [objectName]-[namespace]__[Page layout name]. 
            Omit namespace and underscores if layout is not part of a managed package.
            EX: Account-SkienceFinSln__Address 
            OR
            EX: Account-Account Layout
    *@Param getPicklistValues flag that controls whether picklist values for described objects should be included.
    **/
    public static void sendLayoutInfo(list<string> pageLayoutNames, boolean getPicklistValues)
    { 
        List<Metadata.Metadata> layouts = Metadata.Operations.retrieve(Metadata.MetadataType.Layout, pageLayoutNames);
        
        for(string layOutName : pageLayoutNames)
        {
            boolean layoutFound = false;
            for(integer i = 0; i < layouts.size(); i++)
            {
                Metadata.Layout layoutMd = (Metadata.Layout) layouts.get(i);
                if(layoutMd.fullName == layOutName)
                {
                    layoutFound = true;
                }
            }
            if(layoutFound == false)
            {
                throw new applicationException('No layout with name ' + layoutName + ' could be found. Please check and make sure the namespace is included if needed');
            }
        }
        map<string,map<string,list<string>>> objectPicklistValuesMap = new map<string,map<string,list<string>>>();
        
        map<string,list<string>> objectFieldsMap = new map<string,list<string>>();
        
        for(integer i = 0; i < layouts.size(); i++)
        {
            Metadata.Layout layoutMd = (Metadata.Layout) layouts.get(i);
        
            list<string> objectFields = new list<string>();
            
            for (Metadata.LayoutSection section : layoutMd.layoutSections) 
            {        
                for (Metadata.LayoutColumn column : section.layoutColumns) 
                {
                    if (column.layoutItems != null) 
                    {
                        for (Metadata.LayoutItem item : column.layoutItems) 
                        {
                            if(item.field == null) continue;
                            objectFields.add(item.field);
                        }
                    }
                }
            }
            objectFields.sort();
            objectFieldsMap.put(pageLayoutNames[i].split('-')[0],objectFields);
        }
        
        system.debug(objectFieldsMap);
        
        Map<String, Schema.SObjectType> globalDescribe = Schema.getGlobalDescribe();
        
        Map<String, Map<String, Schema.SObjectField>> objectDescribeCache = new Map<String, Map<String, Schema.SObjectField>>();
        
        String userName = UserInfo.getUserName();
        User activeUser = [Select Email From User where Username = : userName limit 1];
        String userEmail = activeUser.Email;
        
        Messaging.SingleEmailMessage message = new Messaging.SingleEmailMessage();
        message.toAddresses = new String[] { userEmail };
        message.subject = 'Describe of fields on page layouts';
        message.plainTextBody = 'Save the attachments and open in Excel. Field names and types should be properly formatted.';
        Messaging.SingleEmailMessage[] messages =   new List<Messaging.SingleEmailMessage> {message};
        list<Messaging.EmailFileAttachment> attachments = new list<Messaging.EmailFileAttachment>();
        
        integer counter = 0;    
        for(string thisObjectType : objectFieldsMap.keySet())
        {
            list<string> fields = objectFieldsMap.get(thisObjectType);
            
            Map<String, Schema.SObjectField> objectDescribeData;
            if(objectDescribeCache.containsKey(thisObjectType))
            {
                objectDescribeData = objectDescribeCache.get(thisObjectType);
            }
            else
            {
                objectDescribeData = globalDescribe.get(thisObjectType).getDescribe().fields.getMap();
                objectDescribeCache.put(thisObjectType,objectDescribeData);
            }
        
        
            string valueString = 'Source Field Name, Source Field Label, Source Field Type, Source Required, Source Size, Is Custom, Controlling Field, Target Field Name, Target Field Type, Target Required, Transformation \r\n';
            for(string thisField : fields)
            {
                if(thisField == null || !objectDescribeData.containsKey(thisField))
                {
                    system.debug('\n\n\n--- Missing field! ' + thisField);
                    if(thisField != null) valueString+= thisField + ', Field Data Not Found \r\n';
                    continue;
                }
                
                Schema.DescribeFieldResult dfr = objectDescribeData.get(thisField).getDescribe();
                
                if( (dfr.getType() == Schema.DisplayType.picklist || dfr.getType() == Schema.DisplayType.MultiPicklist) && getPicklistValues)
                {
                    List<String> pickListValuesList= new List<String>();
                    List<Schema.PicklistEntry> ple = dfr.getPicklistValues();
                    for( Schema.PicklistEntry pickListVal : ple)
                    {
                        pickListValuesList.add(pickListVal.getLabel());
                    }     
        
                    map<string,list<string>> objectFields = objectPicklistValuesMap.containsKey(thisObjectType) ? objectPicklistValuesMap.get(thisObjectType) : new map<string,list<string>>();
                    objectFields.put(thisField,pickListValuesList);
                    objectPicklistValuesMap.put(thisObjectType,objectFields);
                }
                boolean isRequired = !dfr.isNillable() && string.valueOf(dfr.getType()) != 'boolean' ? true : false;
                string targetFieldName = dfr.isCustom() ? '' : thisField;
                string targetFieldType = dfr.isCustom() ? '' : dfr.getType().Name();
                string defaultTransform = '';
                
                if(dfr.getType() == Schema.DisplayType.Reference)
                {
                    defaultTransform = 'Update with Id of related: ';
                    for(Schema.sObjectType thisType : dfr.getReferenceTo())
                    {
                        defaultTransform+= string.valueOf(thisType) + '/';
                    }
                    //removeEnd returns a new string; the result must be assigned back
                    defaultTransform = defaultTransform.removeEnd('/');
                }    
                if(thisField == 'LastModifiedById') defaultTransform = 'Do not import';
                valueString+= thisField +',' + dfr.getLabel() + ',' +  dfr.getType() + ',' + isRequired + ',' +dfr.getLength()+ ',' +dfr.isCustom()+ ',' +dfr.getController() + ','+ 
                              targetFieldName + ',' + targetFieldType +',' + isRequired + ',' + defaultTransform +'\r\n';
            }
        
            Messaging.EmailFileAttachment efa = new Messaging.EmailFileAttachment();
            efa.setFileName(pageLayoutNames[counter]+'.csv');
            efa.setBody(Blob.valueOf(valueString));
            attachments.add(efa);
            
            counter++;
        }
        //if we are getting picklist values we will now build a document for each object. One column per picklist, with its rows being the values of the picklist
        if(getPicklistValues)
        {
            //loop over the object types
            for(string objectType : objectPicklistValuesMap.keySet())
            {
                //get all picklist fields for this object
                map<string,list<string>> objectFields = objectPicklistValuesMap.get(objectType);
                
                //each row of data will be stored as a string element in this list
                list<string> dataLines = new list<string>();
                integer rowIndex = 0;
                
                //string to contains the header row (field names)
                string headerString = '';
                
                //due to how the data is structured (column by column) but needs to be built (row by row) we need to find the column with the maximum amount of values
                //so our other columns can insert a correct number of empty space placeholders if they don't have values for that row.
                integer numRows = 0;
                for(string fieldName : objectFields.keySet())
                {
                    if(objectFields.get(fieldName).size() > numRows) numRows = objectFields.get(fieldName).size();
                }
                
                //loop over every field now. This is going to get tricky because the data is structured as a field with all its values contained but we need to build
                //our spreadsheet row by row. So we will loop over the values and create one entry in the dataLines list for each value. Each additional field will then add to the string
                //as required. Once we have constructed all the rows of data we can append them together into one big text blob and that will be our CSV file.
                for(string fieldName : objectFields.keySet())
                {
                    headerString += fieldName +',';
                    rowIndex = 0;
                    list<string> picklistVals = objectFields.get(fieldName);
                    for(integer i = 0; i<numRows; i++ )
                    {
                        string thisVal = i >= picklistVals.size() ? ' ' : picklistVals[i]; 
                        if(dataLines.size() <= rowIndex) dataLines.add('');
                        dataLines[rowIndex] += thisVal + ', ';
                        rowIndex++;        
                    }
                }
                headerString += '\r\n';
                
                //now that our rows are constructed, add newline chars to the end of each
                string valueString = headerString;
                for(string thisRow : dataLines)
                {            
                    thisRow += '\r\n';
                    valueString += thisRow;
                }
                
                Messaging.EmailFileAttachment efa = new Messaging.EmailFileAttachment();
                efa.setFileName('Picklist values for ' + objectType +'.csv');
                efa.setBody(Blob.valueOf(valueString));
                attachments.add(efa);        
            }
        }
        
        
        message.setFileAttachments( attachments );
        
        Messaging.SendEmailResult[] results = Messaging.sendEmail(messages);
         
        if (results[0].success) 
        {
            System.debug('The email was sent successfully.');
        } 
        else 
        {
            System.debug('The email failed to send: ' + results[0].errors[0].message);
        }
    }
    public class applicationException extends Exception {}
    
    public static void sendLayoutInfo()
    {
        list<string> pageLayoutNames = new List<String>();
        pageLayoutNames.add('Account-Account Layout');
        pageLayoutNames.add('Contact-Contact Layout');
        pageLayoutNames.add('Opportunity-Opportunity Layout');
        pageLayoutNames.add('Lead-Lead Layout');
        pageLayoutNames.add('Task-Task Layout');
        pageLayoutNames.add('Event-Event Layout');
        pageLayoutNames.add('Campaign-Campaign Layout');
        pageLayoutNames.add('CampaignMember-Campaign Member Page Layout');
        sendLayoutInfo(pageLayoutNames, true);
    }
}


The result is an email with a bunch of attachments: one for each page layout and one for each object’s picklist fields (if enabled).

mmmmm attachments

For example this is what is produced for the lead object.

Nicely formatted table of lead fields and suggested mappings.

And here is what it built for the picklist values

Sweet sweet picklist values. God I love properly formatted data.

Anyway, I hope this might help some of y’all out there who are given the painful task of finding what fields are actually being used on page layouts. Till next time.


Salesforce development is broken (and so am I)

Before I begin this is mostly a humor and venting post. Don’t take it too seriously.

So I’m doing development on a package that needs to work for both person accounts and regular accounts. Scratch orgs didn’t exist when this project was started, so we originally had a developer org, then a packaging org which contained the namespace for the package (this ended up being a terrible idea, because all kinds of weird bugs start to show up when you do your dev without a namespace and then try to add one: any dynamic code pretty much breaks, and you have to remove the namespace from any data returned by Apex controllers that provide data to field inputs in Lightning; field set names, object names, etc. all get messed up).

Still after adding some work arounds we got that working. However since the developer org doesn’t have person accounts we need another org that does to add in the extra bits of logic where needed. We wanted to keep the original dev org without person accounts as it’s sort of an auxiliary feature and didn’t want it causing any problems with the core package.

Development of the core package goes on for about a year. Now it’s time to tackle adding the extra logic for person accounts, which in themselves are awful. I don’t know who thought it was a good idea to basically have two different schemas, with the second being a half broken, poorly defined bastardization of the original good version. Seriously, they are sometimes account-like, sometimes contact-like; the account has the contact fields, but a separate contact object kind of exists, though you cannot get to it without directly entering the Id in the URL. The whole thing barely makes any sense. Interacting with them from Apex is an absolute nightmare. In this case account and contact data are integrated with a separate system, which also has concepts of accounts and contacts. So normally we create an account, then tie contacts to it. In the case of person accounts we have to create some kind of weird hybrid of the data, creating both an account and contact from one object, but not all the data is directly on the account. For example, we need to get the mailing address off the contact portion along with a few other custom fields that the package adds. So we have to like smash the two objects together and send it. It’s just bizarre. Anyway, at this point scratch orgs exist, but we cannot create one from our developer org for some reason; the dev hub option just doesn’t exist. The help page says dev hub/scratch orgs are available in developer orgs, but apparently not in this specific one, for no discernible reason.

We cannot enable them in our packaging org either, as you cannot enable dev hub from an org with namespaces. So my coworker instead enables dev hub from his own personal dev org and creates me a scratch org, into which I install the unmanaged version of the package to easily get all the code and such. Then I just manually roll my changes from that org into dev, and from dev into packaging. That works fine until the scratch org expires, which apparently it just did. Now I cannot log into it, and my dev is suddenly halted. I received no warning emails (maybe he did, but didn’t tell me) and there is no way to re-enable the org. It’s just not accessible anymore. Thank goodness I have local copies of my code (we haven’t really gotten version control integrated into our workflow yet) or else I’d have lost all that work.

I now have to set out to get a new org set up (when I’m already late for a deadline on some fixes). Fine, so I attempt to create a scratch org from my own personal dev org (which itself is halfway broken; it still has the theme from before ‘classic’, and enabling Lightning gives me a weird hybrid version which looks utterly ridiculous).

I enable dev hub and set out to create my scratch org from VS Code (I’ve never done this, so I’m following a tutorial). So I create my project, authorize my org, then lo and behold, an error occurs while trying to create my scratch org: “ERROR running force:org:create: Must pass a username and/or OAuth options when creating an AuthInfo instance.” I can’t find any information on how to fix this; I tried recreating the project, reauthorizing, and still nothing. Not wanting to waste any more time, I say fine, I’ll just create a regular old developer org, install the unmanaged package, and enable person accounts.

I create my new dev org (after some mild annoyance at not being able to end my username with a number) and get it linked to my IDE. So now I need to enable person accounts, but wait, you cannot do that yourself. You have to contact support to enable that, and guess what, Salesforce no longer allows you to create cases from a developer org. Because this package is being developed as an ISV type package, I don’t have a prod org to log in to and create a case from. So now I’m mostly stuck. I’ve asked a co-worker who has access to a production org to log a case and given them my org ID; I’m hoping support will be willing to accept a feature request for an org other than the one the case is coming from. Otherwise I don’t know what I’ll do.


I’m sure once things mature more it’ll get better, and a good chunk of these problems are probably my own fault somehow but still, this is nuts.



Salesforce Lightning DataTable Query Flattener

So I was doing some playing around with the Salesforce Lightning Datatable component, and while it does make displaying query data very easy, it isn’t super robust when it comes to handling parent and child records. Just to make life easier in the future I thought it might be nice to make a function which could take a query result returned by a controller and ‘flatten’ it so that all the data was available to the datatable, since it cannot access nested arrays or objects. Of course the table itself doesn’t have a way to iterate over nested rows, so the child array flattening is not quite as useful (unless, say, you wanted to show a contact’s most recent case or something). Anyway, hopefully this will save you some time from having to write wrapper classes or having to skip using the datatable if you have parent or child nested data.

Apex Controller

public with sharing class ManageContactsController {

    @AuraEnabled
    public static list<Contact> getContacts()
    {
        return [select firstname, name, lastname, email, phone, Owner.name, Owner.Profile.Name, (select id, subject from cases order by createdDate desc limit 1) from contact];
    }
}

Lightning Controller

({
   init: function (component, event, helper) {
        component.set('v.mycolumns', [
                {label: 'Contact Name', fieldName: 'Name', type: 'text'},
                {label: 'Phone', fieldName: 'Phone', type: 'phone'},
                {label: 'Email', fieldName: 'Email', type: 'email'},
            	{label: 'Owner', fieldName: 'Owner_Name', type: 'text'},
            	{label: 'Most Recent Case', fieldName: 'Cases_0_Subject', type: 'text'}
            ]);
        helper.getData(component, event, 'getContacts', 'mydata');
    }
})

Helper

({
    flattenObject : function(propName, obj)
    {
        //accumulate the flattened key/value pairs in a plain object
        var flatObject = {};
        
        for(var prop in obj)
        {
            //prefix child keys with the parent property name (e.g. Owner -> Owner_Name),
            //skipping the prefix when the parent name is just a bare array index
            var preAppend = isNaN(propName) ? propName+'_' : '';
            if(typeof obj[prop] == 'object' && obj[prop] != null)
            {
                //recurse into nested objects/arrays and merge their flattened keys in
                Object.assign(flatObject, this.flattenObject(preAppend+prop, obj[prop]));
            }    
            else
            {
                flatObject[preAppend+prop] = obj[prop];
            }
        }
        return flatObject;
    },
    
	flattenQueryResult : function(listOfObjects) {
        //typeof never returns 'Array', so use Array.isArray to detect a single record and wrap it in a list
        if(!Array.isArray(listOfObjects)) 
        {
        	listOfObjects = [listOfObjects];
        }
        
        console.log('List of Objects is now....');
        console.log(listOfObjects);
        for(var i = 0; i < listOfObjects.length; i++)
        {
            var obj = listOfObjects[i];
            for(var prop in obj)
            {      
                if(!obj.hasOwnProperty(prop)) continue;
                //flattenObject handles both nested objects and nested arrays (arrays flatten
                //by index, e.g. Cases -> Cases_0_Subject), so one branch covers both cases
                if(typeof obj[prop] == 'object' && obj[prop] != null)
                {
					obj = Object.assign(obj, this.flattenObject(prop,obj[prop]));
                }
        	}
        }
        return listOfObjects;
    },
    getData : function(component, event, methodName, targetAttribute) {
        var action = component.get('c.'+methodName);
        action.setCallback(this, $A.getCallback(function (response) {
            var state = response.getState();
            if (state === "SUCCESS") {
                console.log('Got Raw Response for ' + methodName + ' ' + targetAttribute);
                console.log(response.getReturnValue());
                
                var flattenedObject = this.flattenQueryResult(response.getReturnValue());
                
                component.set('v.'+targetAttribute, flattenedObject);
                
                console.log(flattenedObject);
            } else if (state === "ERROR") {
                var errors = response.getError();
                console.error(errors);
            }
        }));
        $A.enqueueAction(action);
    }
})
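
To make the key naming concrete, here’s roughly what flattenQueryResult does to a single contact returned by the controller above. The record is hand-written for illustration (not captured from an org), but the flattened keys line up with the fieldName values in the column definitions:

// one record, roughly the shape the Apex controller returns
var record = {
    Name: 'Jane Doe',
    Phone: '555-1234',
    Email: 'jane@example.com',
    Owner: { Name: 'Dan', Profile: { Name: 'System Administrator' } },
    Cases: [ { Subject: 'Password reset' } ]
};

var flat = helper.flattenQueryResult(record)[0];

// nested values are now reachable by the underscore-joined keys that the
// column definitions in the controller reference:
// flat.Owner_Name          -> 'Dan'
// flat.Owner_Profile_Name  -> 'System Administrator'
// flat.Cases_0_Subject     -> 'Password reset'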

Component (Sorry my code highlighter didn’t like trying to parse this)

<aura:component controller="ManageContactsController" implements="forceCommunity:availableForAllPageTypes" access="global">
    <aura:attribute name="mydata" type="Object"/>
    <aura:attribute name="mycolumns" type="List"/>
    <aura:handler name="init" value="{! this }" action="{! c.init }"/>
    <h3>Contacts (With Sharing Applied)</h3>
    <lightning:datatable data="{! v.mydata }"
                         columns="{! v.mycolumns }"
                         keyField="Id"
                         hideCheckboxColumn="true"/>
</aura:component>

Result

Hope this helps!


Lightning Update List of Records Workaround (Quick Fix)

I’ve been doing some work with Salesforce Lightning, and so far it is certainly proving… challenging. I ran into an issue the other day to which I could find no obvious solution. I was attempting to pass a set of records from my JavaScript controller to the Apex controller for upsert. However, it was throwing an error about ‘upsert not allowed on generic sObject list’ or something of that nature, even though the list of sObjects was in fact defined as a specific type. After messing around with various attempts at casting the list and modifying objects in the JavaScript controller before passing them to Apex so they’d have types, I couldn’t find an elegant solution. Instead I found a workaround: simply create a new list of the proper object type and add the passed-in records to it. I feel like there is probably a ‘proper’ way to make this work, but it works for me, so I figured I’d share.

//***************** Helper *************************//
	saveMappingFields : function(component,fieldObjects,callback)
	{

        var action = component.get("c.saveMappingFields");
        action.setParams({
            fieldObjects: fieldObjects
        });        
        action.setCallback(this, function(actionResult){
         
            if (typeof callback === "function") {
            	callback(actionResult);
            }
        });  
        
        $A.enqueueAction(action);             
	}
	
//**************** Apex Controller **********************//
//FAILS
@AuraEnabled
global static string saveMappingFields(list<Mapping_Field__c> fieldObjects)
{
	list<database.upsertResult> saveFieldResults = database.upsert(fieldObjects,false);	
	return JSON.serialize(saveFieldResults);
}

//WORKS
@AuraEnabled
global static string saveMappingFields(list<Mapping_Field__c> fieldObjects)
{
	//re-wrap the passed-in records in a concretely typed list so the upsert sees a list<Mapping_Field__c> instead of a generic list<sObject>
	list<Mapping_Field__c> fixedMappingFields = new list<Mapping_Field__c>(fieldObjects);
	
	list<database.upsertResult> saveFieldResults = database.upsert(fixedMappingFields,false);	
	return JSON.serialize(saveFieldResults);
}

Simplification

Hey all,

So I wanted to just throw this out there, I’ve moved from Minnesota to VERY rural Montana. I traded in my 3 bedroom rambler for a studio cabin on some ranch near the Canadian border. As such my access to technology is somewhat reduced and I don’t know if I’ll be posting as much interesting stuff on this blog for a while. Odds are I’ll have some cool Salesforce stuff from time to time since I am maintaining my employment remotely but I won’t be doing as much at home hacking. If you are curious how things are going, why this happened or just like my writing style I’ve started a new blog detailing my journey. You can check it out here:

Montana Dan Blog

Anyway, I’ll still post what I can, but I figured I should at least inform the community why I might not be around quite as much. Till next time.

-Kenji


Dynamic Apex Invocation/Callbacks

So I’ve been working on that DeepClone class, and it occurred to me that whatever invokes that class might like to know when the process is done (so maybe it can do something with those created records). Seeing as DeepClone is by its very nature asynchronous, that presents a problem, since the caller cannot sit and wait for the process to complete. You know what other language has to deal with async issues a lot? Javascript. In Javascript we often solve this problem with a ‘callback’ function (I know callbacks are old and busted and promises are the new hotness, but bear with me here), wherein you call your asynchronous function and tell it what to call when it’s done. Most often that is done by passing in the actual function code instead of just the name, but both are viable. Here is an example of what both might look like.

var someData = 'data to give to async function';

//first type of invocation passes in an actual function as the callback. 
asyncThing(someData,function(result){
	console.log('I passed in a function directly!' + result);
});

//second type of invocation passes in the name of a function to call instead
asyncThing(someData,'onCompleteHandler');

function onCompleteHandler(result)
{
	console.log('I passed in the name of a function to call and that happened' + result);
}

function asyncThing(data,callback)
{
	//async code here, maybe a callout or something.
	var data = 'probably  a status code or the fetched data would go here';
	
	//if our callback is a function, then just straight up invoke it
	if(typeof callback == 'function')
	{
		callback(data);
	}
	//if our callback is a string, then dynamically invoke it
	else if(typeof callback == 'string')
	{
		window[callback](data);
	}
}

So yeah, javascript is cool, it has callbacks. What does this have to do with Apex? Apex is strongly typed, you can’t just go around passing around functions as arguments, and you sure as hell can’t do dynamic invocation… or can you? Behold, by abusing the tooling api, I give you a basic implementation of a dynamic Apex callback!

public HttpResponse invokeCallback(string callback, string dataString)
{
	HttpResponse res = new HttpResponse();
	try
	{
		string functionCall = callback+'(\''+dataString+'\');';
		HttpRequest req = new HttpRequest();
		req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionID());
		req.setHeader('Content-Type', 'application/json');
		string instanceURL = System.URL.getSalesforceBaseUrl().getHost().remove('-api' ).toLowerCase();
		String toolingendpoint = 'https://'+instanceURL+'/services/data/v28.0/tooling/executeAnonymous/?anonymousBody='+encodingUtil.urlEncode(functionCall,'utf-8');
		req.setEndpoint(toolingendpoint);
		req.setMethod('GET');
		
		Http h = new Http();
		res = h.send(req);
	}
	catch(exception e)
	{
		system.debug('\n\n\n\n--------------------- Error attempting callback!');
		system.debug(e);
		system.debug(res);
	}
	return res;
} 

What’s going on here? The Tooling API allows us to execute anonymous code. Normally the Tooling API is for external tools/languages to access Salesforce metadata and perform operations. However, by accessing it via REST and passing in both the name of a class and method, and properly encoding any data you’d like to pass (strings only, no complex object types), you can provide a dynamic callback specified at runtime. We simply create a GET request against the Tooling API REST endpoint and invoke the execute anonymous method. Into that we pass the desired callback function name. So now when DeepClone, for example, is instantiated, the caller can set a class level property with the class and method it would like called when DeepClone is done doing its thing. It can pass back all the Ids of the records created so any additional work can be performed. Of course the class provided has to be public, and the method called must be static. Additionally you have to add your own org’s URL to the allowed remote sites under security->remote site settings. Anyway, I thought this was a pretty nice way of letting your @future methods and your queueable methods pass information back to a class so you aren’t totally left in the dark about what the results were. Enjoy!


Deep Clone (Round 2)

So a day or two ago I posted my first draft of a deep clone, which would allow easy cloning of an entire data hierarchy. It was a semi proof of concept thing with some limitations (it could only handle somewhat smaller data sets, and didn’t let you configure all or nothing inserts, or specify if you wanted to copy standard objects as well as custom or not). I was doing some thinking and I remembered hearing about the queueable interface, which allows for asynchronous processing and bigger governor limits. I started thinking about chaining queueable jobs together to allow for copying much larger data sets. Each invocation gets its own governor limits and could theoretically go on as long as it takes, since you can chain jobs infinitely. I had attempted to use queueable to solve this before, but I made the mistake of trying to kick off multiple jobs per invocation (one for each related object type). This obviously didn’t work due to limits imposed on queueable. Once I thought of a way to only need one invocation per call (basically just rolling all the records that need to get cloned into one object and iterating over it) I figured I might have a shot at making this work. I took what I had written before, added a few options, and I think I’ve done it: an asynchronous deep clone that operates in distinct batches with all or nothing handling, and cleanup in case of error. This is some hot off the presses code, so there are likely some lingering bugs, but I was too excited not to share this. Feast your eyes!

public class deepClone implements Queueable {

    //global describe to hold object describe data for query building and relationship iteration
    public map<String, Schema.SObjectType> globalDescribeMap = Schema.getGlobalDescribe();
    
    //holds the data to be cloned. Keyed by object type. Contains cloneData which contains the object to clone, and some data needed for queries
    public map<string,cloneData> thisInvocationCloneMap = new map<string,cloneData>();
    
    //should the clone process be all or nothing?
    public boolean allOrNothing = false;
    
    //each iteration adds the records it creates to this property so in the event of an error we can roll it all back
    public list<id> allCreatedObjects = new list<id>();
    
    //only clone custom objects. Helps to avoid trying to clone system objects like chatter posts and such.
    public boolean onlyCloneCustomObjects = true;
    
    public static id clone(id sObjectId, boolean onlyCustomObjects, boolean allOrNothing)
    {
        
        deepClone startClone= new deepClone();
        startClone.onlyCloneCustomObjects  = onlyCustomObjects;
        startClone.allOrNothing = allOrNothing;
        
        sObject thisObject = sObjectId.getSobjectType().newSobject(sObjectId);
        cloneData thisClone = new cloneData(new list<sObject>{thisObject}, new map<id,id>());
        map<string,cloneData> cloneStartMap = new map<string,cloneData>();
        
        cloneStartMap.put(sObjectId.getSobjectType().getDescribe().getName(),thisClone);
        
        startClone.thisInvocationCloneMap = cloneStartMap;
        return System.enqueueJob(startClone);      
    }
    
    public void execute(QueueableContext context) {
        deepCloneBatched();
    }
        
    /**
    * @description Clones an object and the entire related data hierarchy. Currently only clones custom objects, but enabling standard objects is easy. It is disabled because it increases risk of hitting governor limits
    * @param sObject objectToClone the root object to be cloned. All descendant custom objects will be cloned as well
    * @return list<sobject> all of the objects that were created during the clone.
    **/
    public list<id> deepCloneBatched()
    {
        map<string,cloneData> nextInvocationCloneMap = new map<string,cloneData>();
        
        //iterate over every object type in the public map
        for(string relatedObjectType : thisInvocationCloneMap.keySet())
        { 
            list<sobject> objectsToClone = thisInvocationCloneMap.get(relatedObjectType).objectsToClone;
            map<id,id> previousSourceToCloneMap = thisInvocationCloneMap.get(relatedObjectType).previousSourceToCloneMap;
            
            system.debug('\n\n\n--------------------  Cloning ' + objectsToClone.size() + ' records');
            list<id> objectIds = new list<id>();
            list<sobject> clones = new list<sobject>();
            map<id,id> sourceToCloneMap = new map<id,id>();
            list<database.saveresult> cloneInsertResult;
                       
            //if this function has been called recursively, then the previous batch of cloned records
            //have not been inserted yet, so now they must be before we can continue. Also, in that case
            //because these are already clones, we do not need to clone them again, so we can skip that part
            if(objectsToClone[0].Id == null)
            {
                //if they don't have an id that means these records are already clones. So just insert them with no need to clone beforehand.
                cloneInsertResult = database.insert(objectsToClone,allOrNothing);

                clones.addAll(objectsToClone);
                
                for(sObject thisClone : clones)
                {
                    sourceToCloneMap.put(thisClone.getCloneSourceId(),thisClone.Id);
                }
                            
                objectIds.addAll(new list<id>(previousSourceToCloneMap.keySet()));
                //get the ids of all these objects.                    
            }
            else
            {
                //get the ids of all these objects.
                for(sObject thisObj :objectsToClone)
                {
                    objectIds.add(thisObj.Id);
                }
    
                //create a select all query to get all the data for these objects since if we only got passed a basic sObject without data 
                //then the clone will be empty
                string objectDataQuery = buildSelectAllStatment(relatedObjectType);
                
                //add a where condition
                objectDataQuery += ' where id in :objectIds';
                
                //get the details of this object
                list<sObject> objectToCloneWithData = database.query(objectDataQuery);
    
                for(sObject thisObj : objectToCloneWithData)
                {              
                    sObject clonedObject = thisObj.clone(false,true,false,false);
                    clones.add(clonedObject);               
                }    
                
                //insert the clones
                cloneInsertResult = database.insert(clones,allOrNothing);
                
                for(sObject thisClone : clones)
                {
                    sourceToCloneMap.put(thisClone.getCloneSourceId(),thisClone.Id);
                }
            }        
            
            for(database.saveResult saveResult :  cloneInsertResult)
            {
                if(saveResult.success)
                {
                    allCreatedObjects.add(saveResult.getId());
                }
                else if(allOrNothing)
                {
                    cleanUpError();
                    return allCreatedObjects;
                }
            }
              
            //Describes this object type so we can deduce it's child relationships
            Schema.DescribeSObjectResult objectDescribe = globalDescribeMap.get(relatedObjectType).getDescribe();
                        
            //get this objects child relationship types
            List<Schema.ChildRelationship> childRelationships = objectDescribe.getChildRelationships();
    
            system.debug('\n\n\n-------------------- ' + objectDescribe.getName() + ' has ' + childRelationships.size() + ' child relationships');
            
            //then have to iterate over every child relationship type, and every record of that type and clone them as well. 
            for(Schema.ChildRelationship thisRelationship : childRelationships)
            { 
                          
                Schema.DescribeSObjectResult childObjectDescribe = thisRelationship.getChildSObject().getDescribe();
                string relationshipField = thisRelationship.getField().getDescribe().getName();
                
                try
                {
                    system.debug('\n\n\n-------------------- Looking at ' + childObjectDescribe.getName() + ' which is a child object of ' + objectDescribe.getName());
                    
                    if(!childObjectDescribe.isCreateable() || !childObjectDescribe.isQueryable())
                    {
                        system.debug('-------------------- Object is not one of the following: queryable, creatable. Skipping attempting to clone this object');
                        continue;
                    }
                    if(onlyCloneCustomObjects && !childObjectDescribe.isCustom())
                    {
                        system.debug('-------------------- Object is not custom and custom object only clone is on. Skipping this object.');
                        continue;                   
                    }
                    if(Limits.getQueries() >= Limits.getLimitQueries())
                    {
                        system.debug('\n\n\n-------------------- Governor limits hit. Must abort.');
                        
                        //if we hit an error, and this is an all or nothing job, we have to delete what we created and abort
                        if(allOrNothing)
                        {
                            cleanUpError();
                        }
                        return allCreatedObjects;
                    }
                    //create a select all query from the child object type
                    string childDataQuery = buildSelectAllStatment(childObjectDescribe.getName());
                    
                    //add a where condition that will only find records that are related to this record. The field which the relationship is defined is stored in the maps value
                    childDataQuery+= ' where '+relationshipField+ ' in :objectIds';
                    
                    //get the details of this object
                    list<sObject> childObjectsWithData = database.query(childDataQuery);
                    
                    system.debug('\n\n\n-------------------- Object queried. Found ' + childObjectsWithData.size() + ' records to clone');
                    
                    if(!childObjectsWithData.isEmpty())
                    {               
                        map<id,id> childRecordSourceToClone = new map<id,id>();
                        //collect this relationship's clones in their own list so each entry in the next-invocation map only contains records of its own object type
                        list<sObject> newClones = new list<sObject>();
                        
                        for(sObject thisChildObject : childObjectsWithData)
                        {
                            childRecordSourceToClone.put(thisChildObject.Id,null);
                            
                            //clone the object
                            sObject newClone = thisChildObject.clone();
                            
                            //since the record we cloned still has the original parent id, we now need to update the clone with the id of it's cloned parent.
                            //to do that we reference the map we created above and use it to get the new cloned parent.                        
                            system.debug('\n\n\n----------- Attempting to change parent of clone....');
                            id newParentId = sourceToCloneMap.get((id) thisChildObject.get(relationshipField));
                            
                            system.debug('Old Parent: ' + thisChildObject.get(relationshipField) + ' new parent ' + newParentId);
                            
                            //write the new parent value into the record
                            newClone.put(thisRelationship.getField().getDescribe().getName(),newParentId );
                            
                            //add this new clone to the list. It will be inserted once the deepClone function is called again. I know it's a little odd to not just insert them now,
                            //but it saves on redundant logic in the long run.
                            newClones.add(newClone);             
                        }  
                        cloneData thisCloneData = new cloneData(newClones,childRecordSourceToClone);
                        nextInvocationCloneMap.put(childObjectDescribe.getName(),thisCloneData);                             
                    }                                       
                       
                }
                catch(exception e)
                {
                    system.debug('\n\n\n---------------------- Error attempting to clone child records of type: ' + childObjectDescribe.getName());
                    system.debug(e); 
                }            
            }          
        }
        
        system.debug('\n\n\n-------------------- Done iterating cloneable objects.');
        
        system.debug('\n\n\n-------------------- Clone Map below');
        system.debug(nextInvocationCloneMap);
        
        system.debug('\n\n\n-------------------- All created object ids thus far across this invocation');
        system.debug(allCreatedObjects);
        
        //if our map is not empty that means we have more records to clone. So queue up the next job.
        if(!nextInvocationCloneMap.isEmpty())
        {
            system.debug('\n\n\n-------------------- Clone map is not empty. Sending objects to be cloned to another job');
            
            deepClone nextIteration = new deepClone();
            nextIteration.thisInvocationCloneMap = nextInvocationCloneMap;
            nextIteration.allCreatedObjects = allCreatedObjects;
            nextIteration.onlyCloneCustomObjects  = onlyCloneCustomObjects;
            nextIteration.allOrNothing = allOrNothing;
            id  jobId = System.enqueueJob(nextIteration);       
            
            system.debug('\n\n\n-------------------- Next queable job scheduled. Id is: ' + jobId);  
        }
        
        system.debug('\n\n\n-------------------- Cloning Done!');
        
        return allCreatedObjects;
    }
     
    /**
    * @description create a string which is a select statement for the given object type that will select all fields. Equivalent to SELECT * FROM objectName in SQL
    * @param objectName the API name of the object which to build a query string for
    * @return string a string containing the SELECT keyword, all the fields on the specified object and the FROM clause to specify that object type. You may add your own where statements after.
    **/
    public string buildSelectAllStatment(string objectName){ return buildSelectAllStatment(objectName, new list<string>());}
    public string buildSelectAllStatment(string objectName, list<string> extraFields)
    {       
        // Initialize setup variables
        String query = 'SELECT ';
        String objectFields = String.Join(new list<string>(globalDescribeMap.get(objectName).getDescribe().fields.getMap().keySet()),',');
        if(extraFields != null)
        {
            objectFields += ','+String.Join(extraFields,',');
        }
        
        objectFields = objectFields.removeEnd(',');
        
        query += objectFields;
    
        // Add FROM statement
        query += ' FROM ' + objectName;
                 
        return query;   
    }    
    
    public void cleanUpError()
    {
        database.delete(allCreatedObjects);
    }
    
    public class cloneData
    {
        public list<sObject> objectsToClone = new list<sObject>();        
        public map<id,id> previousSourceToCloneMap = new map<id,id>();  
        
        public cloneData(list<sObject> objects, map<id,id> previousDataMap)
        {
            this.objectsToClone = objects;
            this.previousSourceToCloneMap = previousDataMap;
        }   
    }    
}    

It’ll clone your record, your record’s children, your record’s children’s children, and yes, even your record’s children’s children’s children (you get the point)! Simply invoke the deepClone.clone() method with the id of the object to start the clone process at, whether you want to only copy custom objects, and whether you want to use all or nothing processing. Deep Clone takes care of the rest, automatically figuring out relationships, cloning, re-parenting, and generally being awesome. As always I’m happy to get feedback or suggestions! Enjoy!

-Kenji


Salesforce True Deep Clone, the (Im)Possible Dream

So getting back to work work (sorry alexa/amazon/echo, I’ve gotta pay for more smart devices somehow), I’ve been working on a project where there is a fairly in-depth hierarchy of records. We will call them surveys. These surveys have records related to them; those records have other records related to them, and so on. It’s a semi-complicated “tree” that goes about 5 levels deep with different kinds of objects in each “branch”. Of course with such a complicated structure, but a common need to copy and modify it for a new project, the request for a better clone came floating across my desk. Now Salesforce does have a nice clone tool built in, but it doesn’t have the ability to copy an entire hierarchy, and some preliminary searches didn’t turn up anything great either. The reason why? It’s pretty damn tricky, and governor limits can initially make it seem impossible. What I have here is an initial attempt at a ‘true deep clone’ function. You give it a record (or possibly a list of records, but I wouldn’t push your luck) to clone. It will do that, then clone the children and re-parent them to your new clone. It will then find all of those records’ children and clone and re-parent them as well, all the way down. Without further ado, here is the code.

    //clones a batch of records. Must all be of the same type.
    //very experimental. Small jobs only!
    //static because the deepCloneBatched methods below are static and reference it
    public static Map<String, Schema.SObjectType> globalDescribeMap = Schema.getGlobalDescribe();    
    public static list<sObject> deepCloneBatched(list<sObject> objectsToClone) { return deepCloneBatched(objectsToClone,new map<id,id>());}
    public static list<sObject> deepCloneBatched(list<sObject> objectsToClone, map<id,id> previousSourceToCloneMap)
    {
        system.debug('\n\n\n--------------------  Cloning batch of ' + objectsToClone.size() + ' records');
        list<id> objectIds = new list<id>();
        list<sobject> clones = new list<sobject>();
        list<sObject> newClones = new list<sObject>();
        map<id,id> sourceToCloneMap = new map<id,id>();
        
        
        if(objectsToClone.isEmpty())
        {
            system.debug('\n\n\n-------------------- No records in set to clone. Aborting');
            return clones;
        }
                
        //if this function has been called recursively, then the previous batch of cloned records
        //have not been inserted yet, so now they must be before we can continue. Also, in that case
        //because these are already clones, we do not need to clone them again, so we can skip that part
        if(objectsToClone[0].Id == null)
        {
            //if they don't have an id that means these records are already clones. So just insert them with no need to clone beforehand.
            insert objectsToClone;
            clones.addAll(objectsToClone);
            
            for(sObject thisClone : clones)
            {
                sourceToCloneMap.put(thisClone.getCloneSourceId(),thisClone.Id);
            }
                        
            objectIds.addAll(new list<id>(previousSourceToCloneMap.keySet()));
            //get the ids of all these objects.                    
        }
        else
        {
            //get the ids of all these objects.
            for(sObject thisObj :objectsToClone)
            {
                objectIds.add(thisObj.Id);
            }
            
            for(sObject thisObj : objectsToClone)
            {
                sObject clonedObject = thisObj.clone(false,true,false,false);
                clones.add(clonedObject);               
            }    
            
            //insert the clones
            insert clones;
            
            for(sObject thisClone : clones)
            {
                sourceToCloneMap.put(thisClone.getCloneSourceId(),thisClone.Id);
            }
        }        

        //figure out what kind of object we are dealing with
        string relatedObjectType = objectsToClone[0].Id.getSobjectType().getDescribe().getName();
        
        //Describes this object type so we can deduce its child relationships
        Schema.DescribeSObjectResult objectDescribe = globalDescribeMap.get(relatedObjectType).getDescribe();
                    
        //get this objects child relationship types
        List<Schema.ChildRelationship> childRelationships = objectDescribe.getChildRelationships();

        system.debug('\n\n\n-------------------- ' + objectDescribe.getName() + ' has ' + childRelationships.size() + ' child relationships');
        
        //then have to iterate over every child relationship type, and every record of that type and clone them as well. 
        for(Schema.ChildRelationship thisRelationship : childRelationships)
        { 
                      
            Schema.DescribeSObjectResult childObjectDescribe = thisRelationship.getChildSObject().getDescribe();
            string relationshipField = thisRelationship.getField().getDescribe().getName();
            
            try
            {
                system.debug('\n\n\n-------------------- Looking at ' + childObjectDescribe.getName() + ' which is a child object of ' + objectDescribe.getName());
                
                if(!childObjectDescribe.isCreateable() || !childObjectDescribe.isQueryable() || !childObjectDescribe.isCustom())
                {
                    system.debug('-------------------- Object is not one of the following: queryable, creatable, or custom. Skipping attempting to clone this object');
                    continue;
                }
                if(Limits.getQueries() >= Limits.getLimitQueries())
                {
                    system.debug('\n\n\n-------------------- Governor limits hit. Must abort.');
                    return clones;
                }
                //create a select all query from the child object type
                string childDataQuery = buildSelectAllStatment(childObjectDescribe.getName());
                
                //add a where condition that will only find records that are related to this record. The field which the relationship is defined is stored in the maps value
                childDataQuery+= ' where '+relationshipField+ ' in :objectIds';
                
                //get the details of this object
                list<sObject> childObjectsWithData = database.query(childDataQuery);
                
                if(!childObjectsWithData.isEmpty())
                {               
                    map<id,id> childRecordSourceToClone = new map<id,id>();
                    
                    for(sObject thisChildObject : childObjectsWithData)
                    {
                        childRecordSourceToClone.put(thisChildObject.Id,null);
                        
                        //clone the object
                        sObject newClone = thisChildObject.clone();
                        
                        //since the record we cloned still has the original parent id, we now need to update the clone with the id of its cloned parent.
                        //to do that we reference the map we created above and use it to get the new cloned parent.                        
                        system.debug('\n\n\n----------- Attempting to change parent of clone....');
                        id newParentId = sourceToCloneMap.get((id) thisChildObject.get(relationshipField));
                        
                        system.debug('Old Parent: ' + thisChildObject.get(relationshipField) + ' new parent ' + newParentId);
                        
                        //write the new parent value into the record
                        newClone.put(thisRelationship.getField().getDescribe().getName(),newParentId );
                        
                        //add this new clone to the list. It will be inserted once the deepClone function is called again. I know it's a little odd to not just insert them now
                        //but it saves on redundant logic in the long run.
                        newClones.add(newClone);             
                    }  
                    //now we need to call this function again, passing in the newly cloned records, so they can be inserted, as well as passing in the ids of the original records
                    //that spawned them so the next time the query can find the records that currently exist that are related to the kind of records we just cloned.                
                    clones.addAll(deepCloneBatched(newClones,childRecordSourceToClone));
                    
                    //the recursive call above inserts the new clones, so reset the list before the
                    //next child relationship type adds its own records. Otherwise already inserted
                    //clones would get passed (and inserted) a second time.
                    newClones = new list<sObject>();
                }                    
            }
            catch(exception e)
            {
                system.debug('\n\n\n---------------------- Error attempting to clone child records of type: ' + childObjectDescribe.getName());
                system.debug(e); 
            }            
        }
        
        return clones;
    }
     
    /**
    * @description create a string which is a select statement for the given object type that will select all fields. Equivalent to Select * from objectName in SQL
    * @param objectName the API name of the object which to build a query string for
    * @return string a string containing the SELECT keyword, all the fields on the specified object and the FROM clause to specify that object type. You may add your own where statements after.
    **/
    public static string buildSelectAllStatment(string objectName){ return buildSelectAllStatment(objectName, new list<string>());}
    public static string buildSelectAllStatment(string objectName, list<string> extraFields)
    {       
        // Initialize setup variables
        String query = 'SELECT ';
        String objectFields = String.Join(new list<string>(Schema.getGlobalDescribe().get(objectName).getDescribe().fields.getMap().keySet()),',');
        if(extraFields != null)
        {
            objectFields += ','+String.Join(extraFields,',');
        }
        
        objectFields = objectFields.removeEnd(',');
        
        query += objectFields;
    
        // Add FROM statement
        query += ' FROM ' + objectName;
                 
        return query;   
    }

You should be able to just copy and paste that into a class, invoke the deepCloneBatched method with the record you want to clone, and it should take care of the rest, cloning every related record that it can. It skips non-custom objects for now (because I didn’t need them) but you can adjust that by removing the if condition at line 81 that says

|| !childObjectDescribe.isCustom()

And then it will also clone all the standard objects it can. Again this is kind of a ‘rough draft’ but it does seem to be working. Even cloning 111 records of several different types, I was still well under all governor limits. I’d explain more about how it works, but the comments are there, it’s 3:00 in the morning, and I’m content to summarize the workings by shouting “It’s magic. Don’t question it”, and walking off stage. Let me know if you have any clever ways to make it more efficient, which I have no doubt there are. Anyway, enjoy. I hope it helps someone out there.
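For reference, with that clause removed the guard condition would then read:

                if(!childObjectDescribe.isCreateable() || !childObjectDescribe.isQueryable())
                {
                    system.debug('-------------------- Object is not one of the following: queryable or creatable. Skipping attempting to clone this object');
                    continue;
                }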


Create an Alexa skill in Node.Js and host it on Heroku

Ok, here we go.

So for the last couple weeks I’ve been totally enamored with working with Alexa. It’s really fun to have a programming project that can actually ‘interact’ with the real world. Until now I’ve just been using my hacked together web server with If This Then That commands to trigger events. After posting some of my work on Reddit, I got some encouragement to try and develop an actual Alexa skill, instead of having to piggyback off IFTTT. With some sample code, an awesome guy willing to help a bit along the way, and more than a little caffeine, I decided to give it a shot.

The Approach: Since I already have a decent handle on Node.Js and I was informed there are libraries for working with Alexa, I decided Node would be my language of choice for accomplishing this. I’ll be using alexa-app-server and alexa-app Node.Js libraries. I’ll be using 2 github repos, and hosting the end result on Heroku.

How it’s done: I’ll be honest, this is a reasonably complex project with about a million steps, so I’ll do my best to outline them all, but forgive me if I gloss over a few things. We will be hosting our server code on GitHub, creating a repo for the server, making a submodule for the skill, and deploying it all to Heroku. Let’s get started. First off, go and get yourself a GitHub account and a Heroku account. If you haven’t used git yet, get that installed. Also install the Heroku toolbelt (which may come with git, I can’t quite remember). Of course you’ll also need node.js and Node Package Manager (NPM), which odds are you already have.

If you don’t want to create all the code and such from scratch and want to just start with a functioning app, feel free to clone my test app https://github.com/Kenji776/AlexaServer1.git

Create a new directory for your project on your local machine. Then head on over to GitHub and create yourself a new project/repo. Call it AlexaServer or something similar. Go ahead and make it a public repo. Do not initialize it with a readme at this time. This is the repo where the core server code will live. It’s important to think of the server code as a separate component from each individual skill; they are distinct things. You should see this screen.

[screenshot: repo1]

Open a command prompt and change to the directory you created for your project. Enter the commands GitHub shows in the first section for creating a new repository; they look something like the block below. Once those commands are entered, if you refresh the page you should see your readme file there with its contents shown like this.
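These are GitHub’s boilerplate first-push commands from that era; the repo URL is a placeholder for your own:

echo "# AlexaServer" >> README.md
git init
git add README.md
git commit -m "first commit"
git remote add origin https://github.com/YourName/AlexaServer.git
git push -u origin master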

[screenshot: github1]

Okay, now we are ready to get the Alexa-Server app; https://www.npmjs.com/package/alexa-app-server is what you are looking for. In your project directory type in

"npm install alexa-app-server"

This will take a few moments but should complete without any problems. In your project folder you’ll want to create a folder called apps. This is where each individual skill will live. We will cover that later. Now you’ll need to create your actual server file. Create a file called server.js. Put this in there.

'use strict';

var AlexaAppServer = require( 'alexa-app-server' );

var server = new AlexaAppServer( {
	httpsEnabled: false,
	port: process.env.PORT || 80
} );

server.start();

Pretty simple code overall. The weird bit of code in the port section is for Heroku (they give your app a port to use when hosted). If not on Heroku then it will default to using port 80. Now you need to create your Procfile. This is going to tell Heroku what to do when it tries to run your program. It should be a file named Procfile in the same directory with no file extension. The contents of which are simply

"web: node server.js"

without quotes. We will also want to create a package.json file. So again in your project directory run the command

npm init

This will run a script and it will ask you a few questions. Answer them and your package.json file should get generated. Go ahead and push this all into Github using the following command sequence.

git add .
git commit -m "added server.js and Procfile, along with alexa-app-server dependency"
git push origin master

If you view your Github repo online you should see all your files there. It should look something like this.

[screenshot: first repo push]

You can see all of our files got pushed in. Now with the server set up, it’s time to create our skill. Create another GitHub repo. Call this whatever you like, hopefully something descriptive of the skill you are making. In your command prompt get into the apps directory within your main project. In there create another folder with a name same or similar to your new GitHub repo. Follow the same steps as before to initialize the repo and do the initial commit/push. Now we are going to indicate to Git that the apps/test-skill folder is actually a submodule, so any dependencies and such will be maintained within itself and not within the project root. To do this navigate to the root project folder and enter.

git submodule add https://github.com/Kenji776/AlexaTestSkill.git apps/test-skill

Replacing the GitHub project url with the one for your skill, and the apps/test-skill with apps/whatever-your-skill-is-named. Now Git knows that this folder is a submodule, but NPM doesn’t know that. If you try and install anything using NPM for this skill it’s going to toss it into the root directory of the project. So we generate a package.json file for this skill, and then NPM knows that this skill is a stand-alone thing. So run

npm init

again and go through all the questions. This should generate a package.json file for your skill. Now we are ready to install the actual alexa-app package. So run…

npm install alexa-app --save

and you should see that the skill now has its own node_modules folder, in which is contained the alexa-app dependency. After this you’ll have to regenerate your package.json file again because you’ve now added a new dependency. Now it’s time to make our skill DO something. In your skill folder create a file called index.js. Just to get started as a ‘hello world’ kind of app, plug this into that file.

module.change_code = 1;
'use strict';

var alexa = require( 'alexa-app' );
var app = new alexa.app( 'test-skill' );


app.launch( function( request, response ) {
	response.say( 'Welcome to your test skill' ).reprompt( 'Way to go. You got it to run. Bad ass.' ).shouldEndSession( false );
} );


app.error = function( exception, request, response ) {
	console.log(exception)
	console.log(request);
	console.log(response);	
	response.say( 'Sorry an error occurred ' + exception.message);
};

app.intent('sayNumber',
  {
    "slots":{"number":"NUMBER"}
	,"utterances":[ 
		"say the number {1-100|number}",
		"give me the number {1-100|number}",
		"tell me the number {1-100|number}",
		"I want to hear you say the number {1-100|number}"]
  },
  function(request,response) {
    var number = request.slot('number');
    response.say("You asked for the number "+number);
  }
);

module.exports = app;

Now, if you aren’t familiar with how Alexa works, I’ll try and break it down for you real quick to explain what’s happening with this code. Every action a skill can perform is called an intent. It is called this because ideally there are many ways a person might trigger that function. They might say “say the number {number}” or they might say “give me the number {number}” or many other variations, but they all have the same intent. So hence the name. Your intent should account for most of the common ways a user might try to invoke your functions. You do this by creating utterances. Each utterance represents something a user might say to make that function run. Slots are variables. You define the potential variables using slots, then use them in your utterances. There are different types of data a slot can contain, or you can create your own. Check out https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-interaction-model-reference for more information on slot types. I’m still figuring them out myself a bit. So the intent is triggered by saying one of the utterances. The slot is populated with the number the person says, and after reading the variable from the request, it is read back to the user.

So now, believe it or not, your skill is usable. You can test it by starting your server. Again in your command line shell, within the root project directory, type

node server.js

A console window should show up that looks like this.

[screenshot: skill running]

Now in your browser you should be able to bring up an emulator/debugger page by heading to

http://yourserver/alexa/your-skill-name       (EX: http://localhost/alexa/test-skill)

You should get a page that looks like this.

[screenshot: emulator]

Holy cow, we did it! We have a functioning skill. The only problem now is that there is no way for Alexa to ‘get to it’. As in, Alexa doesn’t know about this skill, so there is no way for Echo to route requests to it (also, I find it mildly creepy that when referring to Alexa, I keep accidentally wanting to type ‘her’. Ugh). So now we have to put this skill somewhere where it’s accessible. That is where Heroku is going to come in. Sure we could host this on our local machine, set up port forwarding, handle the SSL certs ourselves, but who wants to have their skill always running on their local box? I’ll do a bit on how to do that later, since it requires creating a self-signed SSL certificate using openSSL and such, but for now I’m going to focus on getting this sucker into a cloud host.

Oh by the way, once everything is working smoothly, you should commit and push it all into github. Remember, you’ll have to do a separate commit and push for your server, and for the skill submodule since they are officially different things as far as github is concerned, even though the skill is a sub directory of your server. Also, I’m still learning exactly how submodules work, but if for some reason it doesn’t seem like your submodule is updating properly you can navigate to the project root folder and try this series of commands.

git submodule foreach git pull origin master
git add .
git commit -m "updated submodule"
git push origin master

With some luck that should do it. You’ll know it’s all working right when you open up your server repo in GitHub, click the apps folder, and see a greyish folder icon with the name of your skill. Clicking it should show you the skill with all the files in there. Like this.

[screenshot: submodule]

Now, on to Heroku. This part is pretty easy comparatively. Head on over to Heroku and create a new app. As usual, name it whatever you like, preferably something descriptive. Next you’ll be in the deploy area. You’ll be able to specify where the code comes from. Since we are smart and we used GitHub, you can actually just link this Heroku project straight to your GitHub project.

[screenshot: heroku connect]

Just like that! However, this automatic integration does not seem to include submodules, so we will have to add our skill submodule from the command line. So again navigate to your project folder from the shell, and run this command to link your directory with the Heroku project.

heroku git:remote -a alexatestapp

Of course replacing alexatestapp with whatever the name of your Heroku app is. You should get back a response saying that the Heroku remote has been added. Now we need to push it all into Heroku (which in retrospect may have made the above linking from the website unneeded, but whatever, it doesn’t seem to hurt anything). So now run

git push heroku master

You should be treated to a whole bunch of console output about compressing, creating runtimes, resolving dependencies, blah blah blah. Now with some luck your app should work via your hosted Heroku app. Just like when hosted locally we can access the emulator/debugger by going to

https://alexatestapp.herokuapp.com/alexa/test-skill

Replacing the domain with your heroku app, and the skill with whatever your skill is named. If all has gone well it should operate just like it did on your local machine. We are now finally ready to let Amazon and Echo know about our new skill. So head on over to https://developer.amazon.com/edw/home.html#/skills/list (you’ll need to sign up for a developer account if you do not have one) and click the add new skill button.

[screenshot: app create 1]

On the next page it’s going to ask you for your intent schema. This is just a JSON representation of your intents and their slots. Thankfully that handy debug page generates that for you. So it’s just copy paste the intent schema from the debug page into that box. It will ask you about custom slot types, and odds are if you are making a more complicated app you are going to want to make your own, but really all that is is creating a name for your data type, common values for it (no it doesn’t have to be every value) and saving it. Spend some time reading the docs on custom slot types if you need to. It’s also going to want sample utterances, which again that debug page generates for you, so again, just copy paste. You might get a few odd errors here about formatting, or wrong slot types. Just do your best to work through them.
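For the sayNumber skill above, the intent schema that gets generated should look roughly like this (sketched by hand from the code above, using the old-style schema format of that era, so treat it as an approximation rather than the exact output):

{
  "intents": [
    {
      "intent": "sayNumber",
      "slots": [
        {
          "name": "number",
          "type": "NUMBER"
        }
      ]
    }
  ]
}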

[screenshot: model]

Next up is setting up the HTTPS. Thankfully Heroku handles that for us, so we don’t have to worry about it! Just select

“My development endpoint is a subdomain of a domain that has a wildcard certificate from a certificate authority”

Next up is account linking. I haven’t figured that one out yet, still working on it, so for now we just have to say no. Next up is the test screen. If everything has gone well up to this point, it should work just peachy for ya.

[screenshot: testsay]

Next you’ll be asked to input some metadata about your app. Description, keywords, logo, icon, etc. Just fill it out and hit next. Finally you’ll be asked about privacy info. For now you can just say no to everything, and come back to update that when you’ve got your million dollar app ready to go. Hit save and you should be just about ready to test through your Echo. If you open your Alexa app on your phone and look for your skill by name, you should find it and it should be automatically enabled. You should now be able to go to your Echo, use the wake word, speak the invocation name, and ask for a number. If all has gone well you’ll hear the wonderful sound of Alexa reading back your number to you. For example

“Alexa Ask Test Get My Number”

Alexa should respond with what you defined as the app start message in your skill. You can now say

“Say the number 10”

Which should then be repeated back to you. Congrats. You just created a cloud-hosted Node.Js Alexa skill that is now ready for you to make it do something awesome. Get out there and blow my mind. Good luck!


Making Amazon Echo/Alexa order me a pizza

UPDATE! I’ve started a github project for my server software. Still very alpha but you can check it out here: https://github.com/Kenji776/AlexaHomeHub

If you follow my blog, you might have caught my post yesterday about how I bought an Amazon Echo device, and have begun creating my own custom actions for it. I started small, simply making it call out to my web server by using If This Then That (IFTTT – a website that allows you to connect and integrate different services). I got it to connect to my door lock and unlock service I had written, and even got it to chromecast specific pre-setup videos to one of my TV’s using a command line tool. Feeling somewhat confident, I decided it was time to take on something a little more in depth, but oh so worth it. I was going to make Alexa order me a pizza.

If you are an Echo/Alexa user you might know that there is already support for ordering a pizza, but only from Dominos. It uses some system of having a saved order, then tweeting a specific bit of info at the Dominos twitter account that is tied to your order, which then places it. This has a few drawbacks, primarily being that it orders you a Dominos pizza (sorry guys, in all fairness Dominos has gotten a lot better in recent years). Also it requires twitter integration, and as far as I know it only supports one saved order (I could be wrong). Using a saved order was a good idea though, as it streamlines and simplifies the ordering process quite nicely. I wanted to do something like this, but instead I wanted Sarpinos pizza, and I wanted to be able to pick from several different pre-created orders. Using my knowledge of browser automation that I picked up from my door lock/unlock project and my existing web server, I got to work.

First off, I had to figure out all the things that needed to happen. Off the bat I knew I’d be using their online ordering interface. They don’t have an API, so I knew I’d have to be automating browser interactions. Next I had to break down the process of ordering the pizza online step by step, noting all the HTML elements involved and how to interact with them. Then I would be able to automate those interactions using the Selenium library. So I went through the process like a normal person, and at each step I inspected the HTML elements involved and recorded them so I could figure out how to identify them and interact with them later. I created an order and saved it as a favorite, so next time I came back in I would be prompted if I’d like to order that again. From there I was able to create the following list of things I knew needed to get done.

Order Steps:

1) Invoke: https://order.gosarpinos.com/Login/

2) Wait For Load

3) On load populate credential fields:
	- <input class="text-box single-line valid" data-val="true" data-val-required="Email is required" id="Email" name="Email" type="email" value="" >
	- <input class="text-box single-line password valid" data-val="true" data-val-required="Password is required" id="Password" name="Password" type="password" value="" >

4) Click Login button
	- <input type="submit" value="Login" class="ui-button ui-widget ui-state-default ui-corner-all" role="button" aria-disabled="false">
	
5) Wait For Page Load

6) Find button with provided favourite id (428388)
	- <button class="wcFavoriteSelectFavoriteButton ui-button ui-widget ui-state-default ui-corner-all ui-button-text-only" data-favorite-id="428388" id="wiSelectFavorite428388" type="button" role="button" aria-disabled="false">

7) Wait for Delivery Popup to load
	- 

A fair amount of steps, but none of them super complicated. I knew I’d have to learn a bit more about Selenium, as this interaction was definitely more complicated than the door lock code, and that one already seemed overcomplicated. Thankfully I did, and found out that in my previous attempt I had been unknowingly mixing synchronous and async methods, hence the perceived complexity (I thought driver.wait() was an async method and you put everything that depended on it inside. Turns out it’s synchronous, and once the condition inside is true the program continues. No wonder it was acting a little funny). I also knew, since I was going to be passing in a fair amount of data (username, password, order id, credit card info, etc), that I should probably define an object which would have all the required properties, then just pass JSON into my web service that mirrored that object. This is what I came up with.

{
"action":"pizza",
"username":"xxxxxxxx@xxxx.com",
"password":"xxxxx",
"orderId":"428388",
"ccNumber":"xxxxxxxxxxxxxxxxxx",
"ccExpMonth":"03",
"ccExpYear":"2018",
"ccv":"xxxxx",
"tip":"5.00",
"ccZip":"xxxxx"
}

 

Obviously the sensitive values are blacked out, but you get the gist. My web server is already primed to look for post requests that have a JSON payload. The ‘router’ code looks at the ‘action’ attribute to figure out what function to send the payload to. I created a new ‘pizza’ action and related function. Here is that function.

function orderPizza(orderObject,callback)
{
	var orderResult = new Object();
	
	//create instance of selenium web driver
	var driver = new webdriver.Builder().
	withCapabilities(webdriver.Capabilities.chrome()).
	build();

		
	//request the login page with the locks page as the return url
	driver.get('https://order.gosarpinos.com/Login');
	
	driver.findElement(webdriver.By.name('Email')).sendKeys(orderObject.username);
	driver.findElement(webdriver.By.name('Password')).sendKeys(orderObject.password);
	
	//find and click submit button
	driver.findElement(webdriver.By.css("input[type='submit']")).submit();

	console.log('Logged in as: ' + orderObject.username);

	//wait for order page to load
	driver.wait(function() {
		return driver.getTitle();
	},5000);
	
	console.log('Attempting To Choose Favorite Order With Id: ' + orderObject.orderId);
		
	//have to wait until the proper order button appears since it's in a dialog. If it isn't found after 5 seconds, fail. Otherwise click the corresponding order button
	//the favorite order button has an attribute 'type' of 'button' and a 'data-favorite-id' attribute with the id of that order
	
	driver.wait(function () {
		return driver.findElement(webdriver.By.css("div[aria-describedby='wiFavoriteListDialog']")).isDisplayed();
	}, 5000);
	
	
	driver.wait(function () {
		return driver.findElement(webdriver.By.css("button[data-favorite-id='"+orderObject.orderId+"']")).isDisplayed();
	}, 5000);
	
	driver.findElement(webdriver.By.css("button[data-favorite-id='"+orderObject.orderId+"']")).click();
			
	//after the button above is clicked, that dialog closes and another one opens. This one asks the user to select delivery or pickup. We want delivery
	//the delivery button has an attribute with a 'data-type' of 'WBD' and an attribute 'role' of 'button'
	driver.wait(function () {
		return driver.findElement(webdriver.By.css("button[data-type='WBD'][role='button']")).isDisplayed();
	}, 5000);
	
	driver.findElement(webdriver.By.css("button[data-type='WBD'][role='button']")).click();
	
	//ensure the checkout button is visible
	driver.wait(function () {
		return driver.findElement(webdriver.By.css("button[id='wiLayoutColumnGuestcheckCheckoutBottom'][role='button']")).isDisplayed();
	}, 5000);
	
	//hacky method to ensure that the modal dialog should now be gone and we can click the checkout button
	driver.sleep(2000);
	
	driver.findElement(webdriver.By.css("button[id='wiLayoutColumnGuestcheckCheckoutBottom'][role='button']")).click(); 
	
	//then the browser will move to the order screen. Once it loads we have to populate the order field data

	//wait for payment page to load
	driver.wait(function() {
		return driver.getTitle();
	},5000);	

	//wait until the pay by credit card button shows up.
	driver.wait(function () {
		return driver.findElement(webdriver.By.id("wiCheckoutPaymentCreditCard")).isDisplayed();
	}, 5000);

	//check the pay by credit card radio button
	driver.findElement(webdriver.By.id("wiCheckoutPaymentCreditCard")).click(); 
	
	//wait until the credit card number box shows up
	driver.wait(function () {
		return driver.findElement(webdriver.By.id("Payment_CCNumber")).isDisplayed();
	}, 5000);
	
	//populate the form fields
	driver.findElement(webdriver.By.name('Payment_CCNumber')).sendKeys(orderObject.ccNumber);

	//stupid jQuery ui selects are impossible to set with normal selenium since the original select is hidden. So use an execute script to set em.
	driver.executeScript("$('#Payment_ExpMonth').val("+orderObject.ccExpMonth+");");

	driver.executeScript("$('#Payment_ExpYear').val("+orderObject.ccExpYear+");");
		
	driver.findElement(webdriver.By.name('Payment_CVV')).sendKeys(orderObject.ccv);
	//driver.findElement(webdriver.By.name('Payment_CCTip')).sendKeys(parseFloat(orderObject.tip));
	driver.findElement(webdriver.By.name('Payment_AVSZip')).sendKeys(orderObject.ccZip);
	
	driver.executeScript("$('#Payment_CCTip').val("+parseFloat(orderObject.tip)+");");
	
	driver.findElement(webdriver.By.id('wiCheckoutButtonNext')).click();

	//wait for confirmation page to load.
	driver.wait(function() {
		return driver.getTitle();
	},5000);

	
	driver.findElement(webdriver.By.id("wiPlaceOrderNow")).click();	

	driver.wait(function() {
		return driver.getTitle();
	},5000).then(function(){
		console.log('Ordering complete!');
		orderResult.success = true;
		orderResult.message = 'Order Placed Successfully';
		callback(orderResult);			
	});
	

}

 

Now if that code seems a little dense or confusing, don’t feel bad. It took me several hours of trial and error to figure it out, especially when it came to setting the select list values, and getting the script to wait while various elements were created and destroyed by the page. Selenium has this super fun behavior where if you ever try to reference an element that doesn’t exist, the whole script goes down in flames. In response to that I made my code very ‘defensive’, frequently checking that required elements exist before attempting to interact with them.
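To give you an idea of what I mean by defensive, here is a hypothetical little helper in the same promise-manager-era selenium-webdriver style as the code above (not part of the original script), which waits for an element to exist and be visible before anything touches it:

//hypothetical helper: wait up to timeoutMs for an element matching the css
//selector to exist and be displayed. The wait fails cleanly after timeoutMs
//instead of the whole script exploding on a missing element lookup.
function waitForVisible(driver, webdriver, css, timeoutMs) {
	return driver.wait(function () {
		return driver.isElementPresent(webdriver.By.css(css)).then(function (present) {
			return present ? driver.findElement(webdriver.By.css(css)).isDisplayed() : false;
		});
	}, timeoutMs);
}

//usage: waitForVisible(driver, webdriver, "button[id='wiPlaceOrderNow']", 5000);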

With the script created and integrated into my web server ‘router’, I was ready to get IFTTT to invoke it. Once again it was as simple as creating a new recipe with Alexa as the ‘if’ and the Maker ‘make a web request’ feature as the ‘do’.

[screenshots: pizza 1, pizza 2]

You can see that with the combination of specific phrases and the fact that you can have multiple saved orders, it would be easy to set up many different possibilities. My roommate is even going to create his own IFTTT account and link it to my Alexa. Then he can create his own orders, specify his own credit card information in the JSON payload, and order whenever he wants using the same device, but have his own information. The next step I think is to encrypt the JSON payloads which contain the credit card info and then decrypt them when they arrive at my server. That way I’m not storing my CC info in plain text anywhere, which right now is a bit of a concern. This was mostly just proof of concept stuff last night, but I was too excited not to share this as soon as I could, so some of the ‘polish’ features are missing, but overall I think it’s a damn good start. Now if you’ll excuse me, I’m going to get myself a pizza.

Update: Adding encryption was pretty easy. First I got some encrypt and decrypt functions set up. Like this.

// Nodejs encryption with CTR
var crypto = require('crypto'),
    algorithm = 'aes-256-ctr',
    password = 'xxxxxxxxxxxxxxxxxxxxxxxxxx';

function encrypt(text){
  var cipher = crypto.createCipher(algorithm,password)
  var crypted = cipher.update(text,'utf8','hex')
  crypted += cipher.final('hex');
  return crypted;
}
 
function decrypt(text){
  var decipher = crypto.createDecipher(algorithm,password)
  var dec = decipher.update(text,'hex','utf8')
  dec += decipher.final('utf8');
  return dec;
}

Then I updated my incoming data object so that the encrypted data was in its own property, so I could still tell what kind of request it was without having to decrypt the payload first (since my web server supports other, unencrypted calls).

"action":"pizza",
"data":"f613f8f479bad299bdfedf [rest of encrypted string omitted]",
"encrypted":true

 

Then I just had to change my ‘router’ to decrypt the incoming data if encryption was detected.

else if(action == 'pizza')
{
	//pizza request contains encrypted info. Decrypt and send to function
	var pizzaRequestData = new Object();
	
	if(parsedContent.encrypted)
	{
		console.log('Encrypted Payload Detected. Decrypting Contained Data');
		
		pizzaRequestData = JSON.parse(decrypt(parsedContent.data));
		
		console.log('Decryption complete');

		responseObject.message = 'Ordering Pizza!';
		
		console.log(responseObject.message);
		//because order pizza is async the result data comes in a callback
		orderPizza(pizzaRequestData,function(data){
			responseObject.pizzaRequest = data;
			console.log(data);
			sendResponse(response,responseObject);
			return;
		});
	}
	else
	{
		console.log('Un-encrypted pizza order detected. Skipping');
	}
}

After that I just had to use the encrypt function to generate an encrypted version of my pizza request data, update the IFTTT recipe with the new request and that’s it! Now my CC information is safely encrypted and I don’t really have to worry about it getting intercepted. Yay security.


Amazon Alexa is going to run/ruin my life

It was my birthday recently, just turned 28. As a gift to myself I finally decided to order an Amazon Echo, cause I’ve wanted one since I heard about it a few months ago. If you aren’t familiar, it’s basically like a ‘siri’ or ‘cortana’ thing: a stand-alone personal assistant device that lives in your home. It’s always on and responds to voice commands from surprisingly far away. It can tell you the weather, check your calendar, manage your shopping list and all that kind of nifty stuff. However, it can do more, much more. Thanks to the ability to develop custom ‘skills’ (their name for apps) and out of the box If This Then That (IFTTT) integration, you can quickly start making Alexa do just about anything. I’ve owned it only a day now and I’ve already taught it two new tricks.

Also, if you aren’t familiar with IFTTT it’s an online service that basically allows you to create simple rules that perform actions (hence the name, if this then that). They have the ability to integrate all kinds of different services so you no longer have to be an advanced programmer to automate much of your life. It’s a cool free service and I’d highly recommend checking it out.

You may remember a while back I did that whole write-up about making automatic door locking software to lock and unlock my front door. I figured a good way to jump into making custom commands would be to see if I could teach Alexa to do it for me upon request. Turns out it was surprisingly easy. Since I already had the web service up and running to respond to HTTP post requests, I simply needed to create an IFTTT rule to send a request when Alexa heard a specific phrase. You may recall that I had some problems with IFTTT not seeming to work for me before, but it seems to now; might have been an error on my part for all I know. Here is the rule as it stands currently.

[screenshots: door 1, door 2]

Every command issued to Alexa starts with the ‘wake word’, in this case I’ve chosen Alexa (since you can only pick between Alexa, Echo, and Amazon). Second is the command to issue so it knows what service to route the request to. For this the command is ‘trigger’, so Alexa knows to send the request to IFTTT. Then you simply include the phrase to match, and what to do. I decided to make the phrase ‘lock the door’, which when heard will send a post request with the given JSON payload to my web server, which is listening for it. Boom, done.
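The payload itself is tiny; something like this (the values are placeholders), matching the action, username, and password attributes the web service reads:

{"action":"lock","username":"you@example.com","password":"xxxxx"}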

The next thing I wanted to do, and this is still just a very rough outline of a final idea is Chromecast integration. Ideally I’d like to be able to say ‘Alexa trigger play netflix [moviename]’ but as of right now triggers created from IFTTT for Alexa can’t really contain variables aside from just the whole command itself. So I could do ‘Alexa trigger netflix bojack horseman’ and create a specific request just for that show, but there is no way to create a generic template kind of request and pass on the details to the web service that is listening. That aside, what I do have currently is a start.

I found a command line tool that can interact with the Chromecast (check this guide for Command Line Chromecast), and then created an exec statement to call that from my web service. My door lock and unlock service already has logic for handling different commands, so I just created a new one called ‘play’ that plays my test video.

else if(action == 'play')
{
	console.log('Casting Requested Thing!');
	var exec = require('child_process').exec;
	var cmd = 'castnow c:\\cast\\testVideo.mp4 --device "Upstairs Living Room"';

	exec(cmd, function(error, stdout, stderr) {
	});					
}

So that turned out to be pretty easy. Small caveat being that castnow is more meant to be an application that is kept open and you interact with to control the video. Since it is being invoked via a web service call it doesn’t really get to ‘interact’ with it. I suppose you might be able to do some crazy shit like keeping open a web socket and continue to pass commands to it, but that’s for another day.

The IFTTT command is basically the same as the door lock one. Just change the command to trigger it, and change the JSON payload to have the action as “play” instead of “lock” or “unlock” and the command gets triggered. I also created a corollary rule and bit of code for stopping the casting of the current video by playing another empty video file (since there isn’t an explicit stop command in the castnow software).
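For reference, the stop handler would come out as a near clone of the play one; a sketch, where the ‘stop’ action name and the empty video path are placeholders of my choosing:

else if(action == 'stop')
{
	console.log('Stopping Cast!');
	var exec = require('child_process').exec;
	//casting a short empty video starts a new session, which interrupts
	//whatever is currently playing, since castnow has no explicit stop
	//command when invoked this way
	var cmd = 'castnow c:\\cast\\emptyVideo.mp4 --device "Upstairs Living Room"';

	exec(cmd, function(error, stdout, stderr) {
	});
}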

There you have it, with Alexa, IFTTT, and a home web server you can start to do some pretty cool customized automation stuff. I think next up is getting it to order my favorite local pizza for me 😀


Mimicking callback functions for Visualforce ActionFunctions

Hey everyone. So I’ve got a nifty ‘approach’ for you this time around. Let me give you a quick rundown on what I was doing, the problem I encountered, and how I decided to solve it using what I believe to be a somewhat novel approach. The deal is that I have been working on a fairly complicated ‘one page’ app for mobile devices. What I decided to do was have one parent Visualforce page, and a number of components that are hidden and shown depending on what ‘page’ the user is on. This allows for a global javascript scope to be shared between the components, and also for them to have their own unique namespaces as well. I may cover the pros and cons of this architecture later.

The issue I started to have is that I wanted some action functions on the main parent container page to be used by the components in the page. That’s fine, no problem there. The issue becomes the fact that since actionFunctions are asynchronous and do not allow for dynamic callback functions, anything that wants to invoke your actionFunction is stuck having the same oncomplete function as everything else that may want to invoke it. So if component A and component B both want to invoke ActionFunctionZ, they are both stuck with the same oncomplete function, and since it’s async there is no good way to know when it’s done. Or is there?

My solution to this problem doesn’t use any particularly amazing hidden features, just a bit of applied javascript knowledge. What we are going to do is create a javascript object in the global/top level scope. That object is going to have properties that match the names of action functions. The properties will contain the function to run once the action function is complete. Then that property will be deleted to clean up the scope for the next caller. That might sound a little whack. Here, let’s check an example.

    <style>
        #contentLoading
        {
            height: 100%;
            width: 100%;
            left: 0;
            top: 0;
            overflow: hidden;
            position: fixed; 
            display: table;
            background-color: rgba(9, 9, 12, 0.5);  
              
        }
        #spinnerContainer
        {
            display: table-cell;
            vertical-align: middle;        
            width:200px;
            text-align:center;
            margin-left:auto;
            margin-right:auto;
        }

        div.spinner {
          position: relative;
          width: 54px;
          height: 54px;
          display: inline-block;
        }
        
        div.spinner div {
          width: 12%;
          height: 26%;
          background: #fff;
          position: absolute;
          left: 44.5%;
          top: 37%;
          opacity: 0;
          -webkit-animation: fade 1s linear infinite;
          -webkit-border-radius: 50px;
          -webkit-box-shadow: 0 0 3px rgba(0,0,0,0.2);
        }
        
        div.spinner div.bar1 {-webkit-transform:rotate(0deg) translate(0, -142%); -webkit-animation-delay: 0s;}    
        div.spinner div.bar2 {-webkit-transform:rotate(30deg) translate(0, -142%); -webkit-animation-delay: -0.9167s;}
        div.spinner div.bar3 {-webkit-transform:rotate(60deg) translate(0, -142%); -webkit-animation-delay: -0.833s;}
        div.spinner div.bar4 {-webkit-transform:rotate(90deg) translate(0, -142%); -webkit-animation-delay: -0.75s;}
        div.spinner div.bar5 {-webkit-transform:rotate(120deg) translate(0, -142%); -webkit-animation-delay: -0.667s;}
        div.spinner div.bar6 {-webkit-transform:rotate(150deg) translate(0, -142%); -webkit-animation-delay: -0.5833s;}
        div.spinner div.bar7 {-webkit-transform:rotate(180deg) translate(0, -142%); -webkit-animation-delay: -0.5s;}
        div.spinner div.bar8 {-webkit-transform:rotate(210deg) translate(0, -142%); -webkit-animation-delay: -0.41667s;}
        div.spinner div.bar9 {-webkit-transform:rotate(240deg) translate(0, -142%); -webkit-animation-delay: -0.333s;}
        div.spinner div.bar10 {-webkit-transform:rotate(270deg) translate(0, -142%); -webkit-animation-delay: -0.25s;}
        div.spinner div.bar11 {-webkit-transform:rotate(300deg) translate(0, -142%); -webkit-animation-delay: -0.1667s;}
        div.spinner div.bar12 {-webkit-transform:rotate(330deg) translate(0, -142%); -webkit-animation-delay: -0.0833s;}
    
         @-webkit-keyframes fade {
          from {opacity: 1;}
          to {opacity: 0.25;}
        }    	
	</style>
	
	<script>
		var globalScope = new Object();
		//holds pending oncomplete callbacks, keyed by action function name.
		//must be initialized or registerActionFunctionCallback would throw.
		globalScope.actionFunctionCallbacks = new Object();
		
		function actionFunctionOnCompleteDispatcher(functionName)
		{
			console.log('Invoking callback handler for ' +functionName);
			console.log(globalScope.actionFunctionCallbacks);
			
			if(globalScope.actionFunctionCallbacks.hasOwnProperty(functionName))
			{
				console.log('Found registered function. Calling... ');
				console.log(globalScope.actionFunctionCallbacks[functionName]);
				globalScope.actionFunctionCallbacks[functionName]();
				delete globalScope.actionFunctionCallbacks[functionName];
			}
			else
			{
				console.log('No callback handler found for ' + functionName);
			}    
		}         
		
		function registerActionFunctionCallback(functionName, callback)
		{
			console.log('Registering callback function for ' + functionName + ' as ' + callback);
			globalScope.actionFunctionCallbacks[functionName] = callback;
			
			console.log(globalScope.actionFunctionCallbacks);
		} 
		
		function linkActionOne(dataValue)
		{
			registerActionFunctionCallback('doThing', function(){
				console.log('Link Action One was clicked. Then doThing action function was called. Once that was done this happened');
				alert('I was spawned from link action 1!');
			});		
			
			doThing(dataValue);
		}
		
		function linkActionTwo(dataValue)
		{
			registerActionFunctionCallback('doThing', function(){
				console.log('Link Action Two was clicked. Then doThing action function was called. Once that was done this happened');
				alert('I was spawned from link action 2!');
			});		

			doThing(dataValue);
		}

		function loading(isLoading) {
			if (isLoading) 
			{            
				$('#contentLoading').show();
			}
			else {
				$('#contentLoading').hide();
			}	
		}		
	</script>
	
	<apex:form >
		<apex:actionFunction name="doThing" action="{!DoTheThing}" reRender="whatever" oncomplete="actionFunctionOnCompleteDispatcher('doThing');">
			<apex:param name="some_data"  value="" />
		</apex:actionFunction>
		
		<apex:actionStatus id="loading" onstart="loading(true)" onstop="loading(false)" />
	
		<a href="#" onclick="linkActionOne('Link1!')">Link One!</a>
		<a href="#" onclick="linkActionTwo('Link2!')">Link Two!</a>

		
id="contentLoading" style="display:none">
id="spinnerContainer">
class="spinner">
class="bar1">
class="bar2">
class="bar3">
class="bar4">
class="bar5">
class="bar6">
class="bar7">
class="bar8">
class="bar9">
class="bar10">
class="bar11">
class="bar12">
</div> </div> </div> </apex:form>

So what the hell is going on here? Long story short, we have two links which both call the same actionFunction but have different ‘callbacks’ that happen when that actionFunction is complete. I was trying to come up with a more interesting example, but I figured I should keep it simple for sake of explanation. You click link one, the doThing action is called. Then it calls the actionFunctionOnCompleteDispatcher function with its own name. That function looks to see if any callbacks have been registered for that function name. If so, it is called. If not, it just doesn’t do anything. Pretty slick eh? You may be wondering why I included all that code with the action status, the loading animation, the overlay and all that. Not really relevant to what we are doing, right (though the animation is cool)? The answer to that (other than the fact you get a cool free loading mechanism) is that this approach as it stands will start to run into odd issues if your user clicks link2 before link1 has finished doing its work. The callback function registered by link2 would get called twice: once the call to doThing from link1 completes, it’s going to call whatever function is registered, even if that means the click from link2 overwrote what link1 set. I am thinking you could probably get around this by making the property of the global object an array instead of just a reference to a function. Each call would just push its requested callback into the array, and then as they were called they would be removed from the array, but I haven’t played with this approach yet (I’m pretty sure it would work; there’s a rough untested sketch below). In any case, putting up a blocking loading screen while the action function does its work ensures that the user cannot cause any chaos by mashing links and overwriting callbacks.
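For the curious, here is that array idea as a quick untested sketch; it would replace the two registry functions from above:

		function registerActionFunctionCallback(functionName, callback)
		{
			//queue up callbacks instead of overwriting a single reference
			if(!globalScope.actionFunctionCallbacks[functionName])
			{
				globalScope.actionFunctionCallbacks[functionName] = [];
			}
			globalScope.actionFunctionCallbacks[functionName].push(callback);
		}

		function actionFunctionOnCompleteDispatcher(functionName)
		{
			var queue = globalScope.actionFunctionCallbacks[functionName];
			if(queue && queue.length > 0)
			{
				//run the oldest registered callback and remove it from the queue
				queue.shift()();
			}
		}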

The thing that is kind of cool about this, which becomes clear pretty quick, is that you can start ‘chaining’ callbacks. So you can have a whole series of action functions that all execute in sequence instead of just running async all over the place. So you can do things like this. Also make note of the commenting. The thing about callbacks is you can quickly end up in ‘callback hell’ where it gets very difficult to track what is happening in what order. So I at least attempt to label them in an effort to stem the madness. This is just a quick copy paste from the thing I’m actually working on to give you a flavor of how the chaining can work.

//once a project has been created we need to clear out any existing temp record, then set the type of the new/current temp record as being tpe, then
//set the project Id on that temp record as the one we created. Then finally we can change page to the select accounts screen.
VSD_SelectProject.addNewTpeComplete = function()
{

	//order 2: happens after clearTempRecord is done 
	//once the existing temp record has been cleared and a new one is created, get a new temp record and set the type as tpe
	registerActionFunctionCallback('clearTempRecord', function(){
		setRequestType('tpe');
	});
				
	//order 3: happens after setRequestType is done 
	//once set request type is done (which means we should now have a temp record in the shared scope, then call set project id
	registerActionFunctionCallback('setRequestType', function(){
		setProjectId('{!lastCreatedProjectId}');
	});

	//order 4: happens after setProjectId is done 
	//once set project id is called and completed change the page to the new_pcr_start (poorly named, it should actually be called select_accounts)
	registerActionFunctionCallback('setProjectId', function(){
		setTitle('New TPE Request');
		setSubHeader('Select Accounts');
					
		changePage('new_pcr_start');
	});
	 
	//order 1: happens first. Kicks off the callback chain defined above.                                                
	clearTempRecord();                
}

Anyway, I hope this might help some folks. I know it would be easy to get around this issue in many cases by just creating many of the ‘same’ actionFunctions just with different names and callbacks, but who wants dirty repetitive code like that?

Tune in next time as I reveal the solution to an odd ‘bug’ that prevents apex:inputFields from binding to their controller values. Till next time!

-Kenji


Even more Automation with Tasker and Geofencing

So if you saw my last blog, you know I spent some time writing a small application that would log into my home security provider’s website, then lock and unlock my door on a timed schedule. If you haven’t read it, check it out at https://iwritecrappycode.wordpress.com/2015/10/21/automating-things-that-should-already-be-automated-with-selenium-and-node-js to get some context. While that was great and all, I figured I could do more with it. I mean timed schedules are great, but I do occasionally leave the house, so you know what would be really cool? If my door automatically locked when I left, and unlocked itself again when I returned home. I figured it should be a reasonably simple process to make my script respond to remote commands, and trigger my phone to send such commands when leaving and entering an area. Turns out I was mostly right. At its core it’s pretty easy, but there are a lot of moving parts and places to make mistakes, which I did make plenty of (it’s crazy how often as a programmer I type the wrong numbers in somewhere).

So obviously the first part of this challenge was going to be to setup my script to respond somehow to external requests. Being written in Node.js it made sense that it respond to web requests. I figured the requester should provide the username and password to login to the security website with, as well as a desired action of either lock or unlock. I decided to make it respond to POST requests just to reduce the possibility of some web spider hitting it and by some fluke making it do something (I also put the service on a different port, but we’ll cover that later). So the service listens for post requests with some JSON encoded data in the request body and then makes the request to alarm.com. The code for that method is as follows.

//create a server (the http module is required once near the top of the full script)
var http = require('http');

var server = http.createServer( function(request, response) {

    console.log('Request received from ' + request.connection.remoteAddress);

	var responseObject = new Object();
	responseObject.message = 'Waiting';
	responseObject.status = 'OK';

	//listen for POST requests
    if (request.method == 'POST') {

		console.log("POST Request Received");

		//when we get the data from the post request
        request.on('data', function (data)
		{
			try
			{
				console.log('Raw Data: ' + data);
				
				//parse content in the body of the request. It should be JSON
				var parsedContent = JSON.parse(data);
				
				//read some variables from the parsed json
				var action = parsedContent['action'];
				var password = parsedContent['password'];
				var username = parsedContent['username'];

				responseObject.action = action;


				if(action == 'lock')
				{
					responseObject.message = 'Sent Lock Request';
					console.log('Locking Door!');
					
					//because toggledoor is async the result data comes in a callback
					toggleDoor(username,password,true,function(data){
						responseObject.lockRequestData = data;
						sendResponse(response,responseObject);
						return;
					});

				}
				else if(action == 'unlock')
				{
					responseObject.message = 'Sent Unlock Request';
					console.log('Unlocking Door!');
					//because toggledoor is async the result data comes in a callback
					toggleDoor(username,password,false,function(data){
						responseObject.lockRequestData = data;
						sendResponse(response,responseObject);
						return;
					});

				}
				else
				{
					console.log('Invalid Post Action ' + action);
					responseObject.message = 'No method defined with name: ' + action;
					sendResponse(response,responseObject);
				}
			}
			catch(exception)
			{
				responseObject.message = 'Error: ' + exception.message;
				console.log(exception);
				sendResponse(response,responseObject);
			}
		});
    }
	else
	{
		console.log('Non post request. Ignoring.');
		responseObject.message = 'Please use post request';
		sendResponse(response,responseObject);
	}


}).listen(PORT, function(){
    //Callback triggered when server is successfully listening. Hurray!
    console.log("Server listening on: http://localhost:%s", PORT);
});
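
For reference, here’s roughly what hitting the service looks like from the client side. This is just a sketch: mydomain.com and port 8080 are placeholders for whatever your DNS record and forwarded port actually are, but the JSON keys match what the handler above parses.

var http = require('http');

//JSON body with the keys the server above expects
var body = JSON.stringify({action: 'lock', username: 'myUser', password: 'myPass'});

var req = http.request({
	host: 'mydomain.com',   //placeholder; whatever your domain points at
	port: 8080,             //placeholder; the custom forwarded port
	method: 'POST',
	path: '/',
	headers: {'Content-Type': 'application/json', 'Content-Length': Buffer.byteLength(body)}
}, function(response) {
	var data = '';
	response.on('data', function(chunk) { data += chunk; });
	response.on('end', function() { console.log('Server said: ' + data); });
});

req.write(body);
req.end();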

So now with a server that listens, I need to allow requests from the outside world to come in and actually get to my server. This is where my old days of networking came in handy. First I decided I’d like to have a domain name to reach my server instead of just an IP address (maybe I can set up dynamic DNS later so if my home IP address changes, my registrar is automatically updated with the new one and will change my zone file records). I know this is more of a programming blog, but just a basic bit of networking is required. First I headed over to GoDaddy and grabbed myself a cheap domain name. Then changed the DNS settings to point the @ record at my home IP address.

Simple DNS Config

Now going to my domain would send traffic to my home, but of course it would be stopped at my firewall. So I had to set up port forwarding in my router. This is where I was able to change from using the standard web traffic port 80 to my custom port to help obscure the fact that I am running a web server (as well as having to change the listening port in the Node script). Of course I decided to turn off DHCP on the machine hosting this and give it a static IP address. I would have used a reservation but my crappy router doesn’t have that option. Lame.

port config

After some fiddling with settings for a bit my service was now responding to outside traffic. The only bummer part is that my domain name does not work within my network because again my router is crappy and doesn’t support NAT loopback, so it doesn’t know how to route the request to the right machine internally. I could modify my hosts file or setup my own DNS server but it’s not worth it seeing as this service is only useful when I’m outside my own network anyway.

Now the trickiest part: I needed to find a way to make my phone detect when I entered or left a given geographic region, and when it detected that, send the specially crafted POST request to my server. I knew of two options off the top of my head that I had heard of but never played with. The first one I tried was a service called ‘If This Then That’, or IFTTT for short. While the website and app are very sleek and they make configuration of these rules very easy, there was one problem. It just didn’t work. No matter what I tried, no matter what recipes I configured, nothing worked. Not even when triggered manually would it send the POST request. After playing with that for a bit, I decided to give up and give the other app I had heard of a shot: Tasker.

So if IFTTT is the Mac of task automation (sleek, easy, unable to do the most basic things), then Tasker is Linux. It’s extremely powerful, flexible, a bit difficult to understand and not much to look at. It does however have all the features I needed to finish my project. I ended up buying both Tasker (it’s like 3 bucks) and a plugin for it called AutoLocation. You see, Tasker by itself is fairly powerful, but it allows for additional plugins to perform other actions and gather other kinds of data. AutoLocation allowed me to easily configure a ‘geofence’, basically a geographic boundary that can trigger actions when you pass through it. So I configured my geofence and then imported that config data into Tasker. Then it was simply a matter of creating two profiles: one for entering the area and one for leaving it.

GeoFence Tasker Profiles

I also added a simple vibrate rule so that my phone will buzz when either of the rules trigger, so I know the command was sent. Later that night when I headed off to the store, that little buzz was the sweet vibration of success. I may or may not have yelled for joy in my car and frightened other motorists. I hope perhaps this post might inspire you to create your own crazy automation service. With the combination of Selenium, Node, Tasker, and a bit of networking know-how it’s possible to create all kinds of cool things. If you’d like to download the source code for my auto lock program you can grab it below (I don’t really feel like making a git project for something so small. Also I realize it’s not the best quality code in the world, it was meant to be a simple script not a portfolio demo).

Download AutoLock Source

Till next time!
-Kenji


Automating Things That Should Already Be Automated With Selenium and Node.js

So I had invited my parents over a while back, and it came up that I don’t often lock my front door. Of course being good parents they chided me about it, saying that I should really do so. I really have no excuse because I even have an app that allows me to do it remotely (yay home automation), but to be honest I’m just forgetful when it comes to things like that. However I decided to heed their warning and do something about it. I decided if I was going to get diligent about locking my door, I couldn’t be the one in charge of actually doing it; I’d have to make a computer do it. The problem is that while my home security provider does offer a web app and a phone app for locking the door, and a very basic ‘rule’ system (arm the panel when the door is locked, and vice versa), there are no time-based controls, so I’d still have to actually do something. Totally unacceptable.

After some investigation I found, as I had figured, that my security provider does not offer any sort of API. Nor would it be easy to try and replicate the POST request that is sent from the app to trigger the lock door command, due to numerous session variables, cookies and other things unique to each login session. Nope, if I was going to automate this it seemed like I’d actually have to interact with the browser, as much as the thought displeased me (I’m all for dirty hacks, but c’mon). At first I looked at Python for a solution, but as seems to often be the case with Python, every discussion of a solution was disjointed with no clear path and generally unsatisfactory (sorry Python). Instead I turned to Node to see what potential solutions awaited me there. After a bit of looking around I found Selenium for Node. While its obvious and stated focus is automated web app testing, not general browser automation scripts, I could see no reason it wouldn’t work.

Quickly I spun up a new Node project and used NPM to grab the Selenium package (even after many uses NPM still feels like some kind of magic after manually handling javascript libraries for so long). I followed the guide to getting the Selenium web drivers to work, which at first seemed a bit odd, having to install an executable on my system for a javascript package, but it makes perfect sense in retrospect. After finding a basic Selenium tutorial I was ready to attempt to get my script to log into alarm.com’s web page. First I had to get the names of the inputs I wanted Selenium to fill. Of course Chrome makes this easy: just right click, inspect element and snag the names of the inputs.

Finding the required name property is easy.

Then simply tell Selenium to populate the boxes and click the login button.


var webdriver = require('selenium-webdriver'); //required once at the top of the script

var driver = new webdriver.Builder().
withCapabilities(webdriver.Capabilities.firefox()).
build();
   
driver.get('https://www.alarm.com/login?m=no_session&ReturnUrl=/web/Automation/Locks.aspx');
driver.findElement(webdriver.By.name('ctl00$ContentPlaceHolder1$loginform$txtUserName')).sendKeys(username);
driver.findElement(webdriver.By.name('txtPassword')).sendKeys(password);
driver.findElement(webdriver.By.name('ctl00$ContentPlaceHolder1$loginform$signInButton')).click();

Thankfully, just through dumb luck when creating this, my session had timed out and I found out the login page accepted a return URL parameter that it would direct the browser to after a successful login. So now, if the login goes smoothly, the browser should be at the screen where I can control the locks. A different button is used to lock or unlock the door and only one is visible at a time. So I wrote my function to accept a boolean ‘lock’ param (where false means unlock) and to fail if it can’t find the expected button, which is a safe way to ensure I don’t unlock the door when I mean to lock it, and vice versa.

driver.wait(function() {
	return driver.getTitle().then(function(title) {
		console.log('Toggling Door Status');
		if(lock)
		{
			driver.findElement(webdriver.By.name('ctl00$phBody$summaryRepeater$ctl00$lockButton')).click();
			lockResult.message = 'Lock request sent!';
		}
		else
		{
			driver.findElement(webdriver.By.name('ctl00$phBody$summaryRepeater$ctl00$unlockButton')).click();	
			lockResult.message = 'UnLock request sent!';					
		}
		lockResult.success = true;
		

		driver.quit();

		return lockResult;
	});
}, 1000);


I don’t know Selenium super well yet, but it seems that after the login button is clicked the driver waits until it can retrieve the title of the page (which is an easy way to tell the page has at least somewhat loaded), and when it has, it runs the inner logic for clicking the proper button. Honestly I’m not totally sure, but it works and that’s the important thing 😛
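
If I were doing this again, I’d probably lean on the until conditions that ship with newer versions of the selenium-webdriver package instead of the title check, since they make the intent explicit. Something like this sketch (untested against alarm.com, and it assumes the same lockButton name attribute used above):

//wait up to 10 seconds for the lock button to exist in the DOM, then click it.
//until.elementLocated keeps polling for the element, so we don't race the page load.
var until = webdriver.until;

driver.wait(until.elementLocated(webdriver.By.name('ctl00$phBody$summaryRepeater$ctl00$lockButton')), 10000)
	.then(function(lockButton) {
		return lockButton.click();
	});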

I was actually a bit shocked when my little function worked. Calling it with the proper username and password actually made the door lock a few moments later, much to my dog’s surprise as he sat napping in the living room (the little motor on that lock is kind of loud). Now the next piece of this puzzle was to invoke that function on a timer system: locking the door at, say, 11:00pm and unlocking at 8:00am. This turned out to require only regular everyday javascript, nothing fancy.

var lockHour = 23;
var unlockHour = 8;
var beenLockedToday = false;
var beenUnlockedToday = false;

var date = new Date();
var current_hour = date.getHours(); 
console.log('Checking current hour for lock status checks. Current hour is ' + current_hour  + ' Will automatically lock at ' + lockHour + ' and unlock at ' + unlockHour + ' listening on port ' + port);
	
function monitorLoop() {
	
	
	date = new Date();
	current_hour = date.getHours();       
	
	console.log('Checking local hour. It is ' + current_hour);
	
	if(current_hour >= lockHour && !beenLockedToday) 
	{
		console.log('Lock hour hit or passed and door has not been locked. Locking!!');
		toggleDoor(alarm_username,alarm_password,true);		
		beenLockedToday = true;
	} 
	if(current_hour == unlockHour && !beenUnlockedToday) 
	{
		console.log('Un-Lock hour hit!');
		toggleDoor(alarm_username,alarm_password,false);		
		beenUnlockedToday = true;
	}
	if(current_hour == 0)
	{
		console.log('Resetting lock status variables');
		beenLockedToday = false;
		beenUnlockedToday = false;		
	}

	setTimeout(monitorLoop,600000);

}

monitorLoop();

It’s just a function that re-schedules itself with setTimeout every ten minutes. It checks the current hour against my two predefined lock and unlock times. I used a couple variables to track if the door has been locked or unlocked already, so if I reduce the time on the event loop it’s not attempting to lock/unlock the door every few minutes and wearing out the batteries on the motor. Obviously omitted from this are my alarm_username and alarm_password variables stored higher up in the script. With this event loop and the Selenium automation function I now have one less thing to worry about. Now if I could just find a Node.js host that supported Selenium (damn you Heroku). So if anything, I’d say this is the takeaway: don’t do manually what you can automate, and browser automation with Selenium is crazy easy. So easy that when it all worked I was almost disappointed that it seemed like I didn’t do anything.

Glorious event loop in action
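
As an aside, if you’d rather not hand-roll the hour checking and flag resetting, a scheduling package could express the same thing more directly. Here’s a sketch using the node-schedule package from npm (an assumption on my part; my script doesn’t actually use it):

var schedule = require('node-schedule'); //npm install node-schedule

//cron-style rules: lock every day at 23:00, unlock every day at 08:00
schedule.scheduleJob('0 23 * * *', function() {
	toggleDoor(alarm_username, alarm_password, true);
});

schedule.scheduleJob('0 8 * * *', function() {
	toggleDoor(alarm_username, alarm_password, false);
});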

Till next time!
-Kenji

Also, be sure to check out the addendum to this project in my next blog post where I added automatic operations with a geofence. https://iwritecrappycode.wordpress.com/2015/10/21/automating-things-that-should-already-be-automated-with-selenium-and-node-js/


Export SOQL Query as CSV

Hey guys,
Long time no blog! Sorry about that, been kind of busy and honestly haven’t had too many interesting tidbits to share. However, I think I have something kind of neat to show you. I had a project recently where the user wanted to be able to create a custom SOQL query and export the results as a CSV file. I don’t know why they didn’t want to use regular reports and export (my guess is they figured the query may be too complex or something) but it sounded fun to write, so I didn’t argue.

Breaking this requirement down into its individual parts revealed the challenges I’d have to figure out solutions for:
1) Allow a user to create a custom SOQL query through the standard interface
2) Extract and iterate over the fields queried for to create the column headings
3) Properly format the query results as a CSV file
4) Provide the proper MIME type for the visualforce page to prompt the browser to download the generated file

As it turns out, most of this was pretty easy. I decided to create a custom object called ‘SOQL_Query_Export__c’ where a user could create a record then specify the object to query against, the fields to get, the where condition, order by and limit statements. This would allow for many different queries to be easily created and saved, or shared between orgs. Obviously the user would have to know how to write SOQL in the first place, but in this requirement that seemed alright. The benefit as well is that an admin could pre-write a query, then users could just run it whenever.

With my data model/object created, I set about writing the Apex controller. I’ll post it and explain it after.

public class SOQL_Export {

    public SOQL_Query_Export__c exporter     {get;set;}
    public list<sobject>        queryResults {get;set;}
    public list<string>         queryFields  {get;set;}
    public string               queryString  {get;set;}
    public string               fileName     {get;set;}
    
    public SOQL_Export(ApexPages.StandardController controller) 
    {
        //Because the fields of the exporter object are not referenced on the visualforce page we need to explicitly tell the controller
        //to include them. Instead of hard coding in the names of the fields I want to reference, I simply describe the exporter object
        //and use the keyset of the fieldMap to include all the existing fields of the exporter object.
        
        //describe object
        Map<String, Schema.SObjectField> fieldMap = Schema.SOQL_Query_Export__c.sObjectType.getDescribe().fields.getMap();
        
        //create list of fields from fields map
        list<string> fields = new list<string>(fieldMap.keySet());
        
        //add fields to controller
        if(!Test.isRunningTest())
        {
            controller.addFields(fields);
        }
        //get the controller value
        exporter = (SOQL_Query_Export__c) controller.getRecord();

        //create a filename for this exported file
        fileName = exporter.name + ' ' + string.valueOf(dateTime.now());
                
        //get the proper SOQL order direction from the order direction on the exporter object (Ascending = asc, Descending = desc)
        string orderDirection = exporter.Order_Direction__c == 'Ascending' ? 'asc' : 'desc';
        
        //create a list of fields from the comma separated list the user entered in the config object
        queryFields =  exporter.fields__c.split(',');
        
        //create the query string using string appending and some ternary logic
        queryString = 'select ' + exporter.fields__c + ' from ' + exporter.object_name__c;
        queryString += exporter.where_condition__c != null ? ' where ' + exporter.where_condition__c : '';
        queryString += exporter.Order_by__c != null ? ' order by ' + exporter.Order_by__c + ' ' + orderDirection :'';
        queryString += exporter.Limit__c != null ? ' limit ' +string.valueOf(exporter.Limit__c) : ' limit 10000';
        
        //run the query
        queryResults = database.query(queryString);
        
        
    }

    //creates and returns a newline character for the CSV export. Seems kind of hacky I know, but there does not seem to be a better
    //way to generate a newline character within visualforce itself.
    public static String getNewLine() {
      return '\n';
    }
}

Because I was going to use the SOQL_Query_Export__c object as the standard controller, my Apex class would be an extension. This meant using the controller.addFields method (fields not explicitly added by the addFields method or referenced in the visualforce page are not available on the record passed into the controller; so if I had attempted to reference SOQL_Query_Export__c.Name without putting it in my addFields call, or referencing it on the invoking page, it would not be available). Since my visualforce page was only going to be outputting CSV content, I had to manually add the fields I wanted to reference. I decided instead of hard coding that list, I’d make it dynamic. I did this by describing the SOQL_Query_Export__c object and passing the fields.getMap() keyset to the controller.addFields method. Also, just as something to know, test classes cannot use the addFields method, so wrap that part in an if statement.

Next it’s just the simple work of constructing a filename for the generated file and splitting the fields (so I can get an array I can loop over to generate the column headers for the CSV file). Then it’s just generating the actual query string. I used some ternary statements since things like order by and limit are not really required. I did include a hard limit of 10000 records if one isn’t specified, since that is the largest a read-only collection of sObjects can be. Finally we just run the query. That last method in the class is used by the visualforce page to generate proper CSV line breaks (since you can’t do it within the page itself. Weird I know).

So now with the controller, we look at the page.

<apex:page standardController="SOQL_Query_Export__c" cache="true"  extensions="SOQL_Export" readOnly="true" showHeader="false" standardStylesheets="false" sidebar="false" contentType="application/octet-stream#{!fileName}.csv">

  <apex:repeat value="{!queryFields}" var="fieldName">{!fieldName},</apex:repeat>{!newLine}
  
  <apex:repeat value="{!queryResults}" var="record"><apex:repeat value="{!queryFields}" var="fieldName">{!record[fieldName]},</apex:repeat>{!newLine}</apex:repeat>
  

</apex:page>

I know the code looks kind of run together. That is on purpose to prevent unwanted line breaks and such in the generated CSV file. Anyway, the first line sets up the page itself obviously. Removes the stylesheets, header, footer, and turns on caching. Now there are two reasonably important things here. The readOnly attribute allows a visualforce collection to be 10000 records instead of only 1000, very useful for a query exporter. The second is the contentType="application/octet-stream#{!fileName}.csv" part. That tells the browser to treat the generated content as a CSV file, which in most browsers should prompt a download. You can also see that the filename is an Apex property that was generated by the class.

With the page setup, now we just need to construct the actual CSV values. To create the headers of the file, we simply iterate over that list of fields we split in the controller, putting a comma after each one (according to CSV spec trailing commas are not a problem so I didn’t worry about them). You can see I also invoke the {!newLine} method to create a proper CSV style newline after the header row. If anyone knows of a way to generate a newline character in pure visualforce I’d love to hear it, because I couldn’t find a way.

Lastly we iterate over the query results. For each record in the query, we then iterate over each field. Using the bracket notation we can get the field from the record dynamically. Again we create a newline at the end of each record. After this, on the SOQL Export object I simply created a button that invoked this page, passing in its record ID. That newly opened window would provide the download and the user could then close it (I’m experimenting with ways to automatically close the window once the download is done, but it’s a low priority and any solution would be rather hacky).

There you have it. A simple SOQL query export tool. I have this packaged up, but I’m not 100% sure I can give that URL away right now. I’ll update this entry if it turns out I’m allowed to share it. Anyway, hope this helps someone, or if nothing else shows a couple neat techniques you might be able to use.


Entity is deleted on apex merge

Hey guys,

Just a little quick fix post here, a silly little bug that took me a bit of time to hunt down (probably just because I hadn’t had enough coffee yet). Anyway, the error happens when trying to merge two accounts together. I was getting the error ‘entity is deleted’. The only thing that made my code any different from other examples was that the account I was trying to merge was being selected by picking it from a lookup on the master. The basic code looked like this (masterAccount was being set by the constructor for the class, so it is already set up properly).

            try
            {
                Account subAccount = new Account(id=masterAccount.Merge_With__c);
                merge masterAccount subAccount;
                mergeResult = 'Merge successful';
            }
            catch(exception e)
            {
                mergeResult = e.getMessage();
            }

Can you spot the problem here? Yup, because the Merge_With__c field on the master account would now be referencing an account that doesn’t exist (the losing record in a merge gets deleted), it was throwing that error. So simple once you realize it. Of course the fix for it is pretty easy as well. Just null out the lookup field before the merge call.

            try
            {
                Account subAccount = new Account(id=masterAccount.Merge_With__c);
                masterAccount.Merge_With__c = null;
                merge masterAccount subAccount;
                mergeResult = 'Merge successful';
            }
            catch(exception e)
            {
                mergeResult = e.getMessage();
            }

There you have it. I realize this is probably kind of a ‘duh’ post but it had me stumped for a few minutes, and I’m mostly just trying to get back into the swing of blogging more regularly, so I figured I’d start with something easy. ‘Till next time!



URL Encode Object/Simple Object Reflection in Apex

Hey all,

Kind of a quick yet cool post for you today. Have you ever wanted to be able to iterate over the properties of a custom class/object? Maybe you wanted to read out all the values, or for some other reason (such as serializing the object perhaps) wanted to be able to figure out what properties an object contained, but couldn’t find a way? We all know Apex has come a long way, but it is still lacking a few core features, reflection being one of them. Recently I had a requirement where I wanted to be able to take an object and serialize it into URL format. I didn’t want to have to manually type out every property of the object since it could change, and I’m lazy like that. Without reflection this seems impossible, but it’s not!

Remembering that Apex’s JSON deserialize methods are capable of creating an iterable version of an object by casting it into a list or a map, suddenly this becomes much more viable. Check it out.


    public static string urlEncodeObject(object objectToEncode)
    {
        string urlEncodedString = ''; //start with an empty string so we don't concatenate onto null
        String serializedObject = JSON.serialize(objectToEncode);
        
        //deserializing the JSON into an untyped map gives us an iterable version of the object
        Map<String,Object> deserializedObject = (Map<String,Object>) JSON.deserializeUntyped(serializedObject);
        
        for(String key : deserializedObject.keySet())
        {
            urlEncodedString += key+'='+string.valueOf(deserializedObject.get(key))+'&';
        }
        //trim the trailing ampersand, then encode the whole assembled string
        urlEncodedString = urlEncodedString.substring(0,urlEncodedString.length()-1);
        urlEncodedString = EncodingUtil.urlEncode(urlEncodedString,'utf-8');
        return urlEncodedString;
    }       

There you have it. By simply serializing an object, then deserializing it, we can now iterate over it. Pretty slick eh? Not perfect I know, and it doesn’t work awesome for complex objects, but it’s better than nothing until Apex introduces some real reflection abilities.


Using google forms and sheets as a data source for graphs

Hey all,

Long time no post! I’ve been on vacation and in general just being kind of lazy, but today I’ve got a simple fun project for us. You see, my girlfriend is always right, well almost always. Very rarely I’ll remember something correctly, but in general she’s always correct (and not in the ‘haha men are so dumb, women know everything’ way, actually legit she remembers way more stuff than me). This phenomenon has gotten so pervasive that, just for kicks, I wanted to create a live chart running in the house displaying how often either of us was right about stuff (I know I’ll regret this eventually). So for my mini project I had a few goals

1) Have a live chart that updates automatically on a TV in my house (we have an extra TV that we generally just use as a media center/music streaming box via a Chromecast)

2) Make an easy interface to add new data to the chart

3) Make the chart slick looking

4) Keep it simple. This is basically a hobby project so I don’t want to go too nuts.

Before we get started, you can see the demo here:
http://xerointeractive-developer-edition.na9.force.com/partyForce/RightChart

Please close it when you are done though, my dev org only gets so many HTTP requests per day (note to self, add some kind of global request caching or something).

I was able to complete this project in about an hour and a half and meet all my goals. So now I’ll show you how.

Right off the bat I had a general idea of how I would do this (though the approach did morph a bit). From a previous project I knew it was possible to store and retrieve data in a google spreadsheet. You can get the raw CSV data by using a special URL, and then import that via an http request from an Apex controller. I figured this was easier than setting up a salesforce object, creating a custom interface for adding data, and hell, it’s cool to be able to utilize google forms data for something.

form

My basic form for collecting data

From there it’s just a matter of passing the data to a chart system and making it poll the sheet occasionally. So anyway, first off we are going to need a google form to collect our data. Head to google docs and create a new spreadsheet. Use the forms menu to create a new form for your page. In my case, it’s just a simple single question multiple choice (with an other option). Each time the form is submitted it puts the name and a timestamp into a sheet called ‘Form Responses 1’. This data format works pretty well. I played around with trying to create another sheet that used queryIf to sum all the times various names appeared in the sheet, but that approach had a limiting factor of only working for names I pre-coded it for. It wasn’t dynamic enough. So I decided to just let google collect the data, and I’d handle the summing and formatting in my code.

sheet1

Your form should be gathering data in a way that looks something like this

To actually get the data in a usable form for programming, we need a raw CSV version of it. Thankfully google will provide this for you (though they aren’t exactly forthcoming with it). As of this writing, to get the raw CSV of your sheet, go to File and hit Publish. Just publish the one sheet. You should be given a shareable URL with a long unique-looking ID string. Take that and put it into this URL format

https://docs.google.com/spreadsheets/d/key/export?format=csv&id=key

Just replace the word key with your document’s unique ID. You should be able to put that URL in your browser and it should automatically attempt to download your spreadsheet in CSV format. If so, you are in good shape. If not, make sure you published it, and it’s shared and all that good stuff. Once you have that working we can move to the next step.

publish

Publish your form results sheet and make note of that unique ID, you’ll need it!
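
If you’d rather script that sanity check than eyeball it in a browser, a few lines of Node will show you exactly what comes back (the key in the URL is a placeholder for your document’s ID; note that a 3xx status means the request is being redirected and you’d need to follow it):

var https = require('https');

//placeholder URL; swap 'key' for your published sheet's unique ID
var url = 'https://docs.google.com/spreadsheets/d/key/export?format=csv&id=key';

https.get(url, function(res) {
	console.log('Status: ' + res.statusCode); //a 3xx here means you need to follow the redirect
	var body = '';
	res.on('data', function(chunk) { body += chunk; });
	res.on('end', function() { console.log(body); });
});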

So now that the data exists and is accessible, we need to GET it. I decided, because it’s the easiest publishing platform I know, that I’d just use Salesforce sites. So that means Apex is going to be my back end. I’ll need an Apex call to fetch the CSV data from the google sheet, and some code to parse that CSV into some kind of logical structure. Again thankfully from past projects, I had just such a class.

//gets CSV data from a given URL and parses it into a list of lists
global class RightChartController 
{

    public String getDataSourceUrl() {
        return 'Your google document url here';
    }

   

    //gets CSV data from a given source
    @remoteAction
    global static  List<List<String>> importCSV(string url)
    {
         List<List<String>> result = new List<List<String>>(); 
        try
        {
            string responseBody;
            
            //create http request to get import data from
            HttpRequest req = new HttpRequest();
            req.setEndpoint(url);
            req.setMethod('GET');         
            Http http = new Http();
            
            //if this is not a test actually send the http request. if it is a test, hard code the returned results.
            if(!Test.isRunningTest())
            {
                HTTPResponse res = http.send(req);
                responseBody = res.getBody();
            }
            else
            {
                responseBody = 'Name,Count\ntammy,10\njoe,5\nFrank,0';
            }
            
            //the data should come back in CSV format, so hand it off to the parsing function which will make a list of lists of strings (each list is one row, each item within that sub list is one column)
            result = RightChartController.parseCSV (responseBody,true);
        }
        catch(exception e)
        {
            system.debug('\n\n\n\n----------------------------- Error importing chart data. ' + e.getMessage() + ' on line ' + e.getLineNumber());
        }
        return result;
    }
    
    //parses a csv file. Returns a list of lists. Each main list is a row, and the list contained is all the columns.
    public static List<List<String>> parseCSV(String contents,Boolean skipHeaders)
    {
        List<List<String>> allFields = new List<List<String>>();
    
        // replace instances where a double quote begins a field containing a comma
        // in this case you get a double quote followed by a doubled double quote
        // do this for beginning and end of a field
        contents = contents.replaceAll(',"""',',"DBLQT').replaceall('""",','DBLQT",');
        // now replace all remaining double quotes - we do this so that we can reconstruct
        // fields with commas inside assuming they begin and end with a double quote
        contents = contents.replaceAll('""','DBLQT');
        // we are not attempting to handle fields with a newline inside of them
        // so, split on newline to get the spreadsheet rows
        List<String> lines = new List<String>();
        try {
            lines = contents.split('\n');
        } catch (System.ListException e) {
            System.debug('Limits exceeded?' + e.getMessage());
        }
        Integer num = 0;
        for(String line : lines) {
            // check for blank CSV lines (only commas)
            if (line.replaceAll(',','').trim().length() == 0) break;
            
            List<String> fields = line.split(',');  
            List<String> cleanFields = new List<String>();
            String compositeField;
            Boolean makeCompositeField = false;
            for(String field : fields) {
                if (field.startsWith('"') && field.endsWith('"')) {
                    cleanFields.add(field.replaceAll('DBLQT','"'));
                } else if (field.startsWith('"')) {
                    makeCompositeField = true;
                    compositeField = field;
                } else if (field.endsWith('"')) {
                    compositeField += ',' + field;
                    cleanFields.add(compositeField.replaceAll('DBLQT','"'));
                    makeCompositeField = false;
                } else if (makeCompositeField) {
                    compositeField +=  ',' + field;
                } else {
                    cleanFields.add(field.replaceAll('DBLQT','"'));
                }
            }
            
            allFields.add(cleanFields);
        }
        if (skipHeaders) allFields.remove(0);
        return allFields;       
    }
}

So now we’ve got the back end code that is required to both get the data and parse it (Don’t forget to add a remote site exception in your Salesforce security controls for docs.google.com!). Now we just need an interface to use that data and display it in a nifty chart. Using highcharts this is pretty easy. Mine ended up looking something like this (You don’t have to tell me the code is kind of sloppy, this was just a quick throw together project).

<apex:page controller="RightChartController" sidebar="false" showHeader="false" standardStylesheets="false">
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
    <script src="https://code.highcharts.com/highcharts.js"></script>
    <script src="https://code.highcharts.com/highcharts-3d.js"></script>
    <script>
        //load the document source locally in case we want to let the user change it or something later
        var docSource = '{!dataSourceUrl}';
        var chart;
        
        //fetches the data from the google sheet
        function getData(docSource,callback)
        {
           Visualforce.remoting.Manager.invokeAction(
                '{!$RemoteAction.RightChartController.importCSV}', 
                docSource,
                function(result, event){
                    if (event.status) {
                        callback(result);
                    }
                }, 
                {escape: true}
            );   
     
        }
        
        //massages the data from being an array of arrays (one line per form entry) into an array of objects with totals
        //should probably be refactored to make it more efficient, but whatever.
        function translateDataToHighChartFormat(csvData)
        {
            var chartData = new Array();
            var totals = new Object();
            
            for(var i = 0; i < csvData.length; i++)
            {
                var timestamp = csvData[i][0];
                var name = csvData[i][1];
                 
                if(totals.hasOwnProperty(name))
                {
                    totals[name]++;
                }
                else
                {
                    totals[name] = 1;
                }
            }
            
            for(key in totals)
            {
                var thisPoint = new Object();
                thisPoint.name = key;
                thisPoint.y = totals[key];
                chartData.push(thisPoint);
            }
            
            return chartData;
        }
        
        //create the chart on document load
        $(function () 
        {
            chart = new Highcharts.Chart({
                chart: {
                    type: 'pie',
                    options3d: {
                        enabled: true,
                        alpha: 45,
                        beta: 0,
                    },
                    renderTo: 'container'
                },
                title: {
                    text: 'Told You So'
                },                
                plotOptions: {
                    pie: {
                        depth: 25
                    }
                },
                series: [{
                    data: []
                }]
            });
            
            //set interval timer to poll the document every 10 seconds
            setInterval(function(){
                getData(docSource,function(result){
                    chart.series[0].setData(translateDataToHighChartFormat(result));
                    
                });
            },10000);
            
            //get the data once initially so we don't have to wait for the first delay to get data
            getData(docSource,function(result){
                chart.series[0].setData(translateDataToHighChartFormat(result));
                $('#Loading').hide();
            });
        });    
    </script>
    <div id="container" style="height: 400px"></div>
    <div id="Loading" style="text-align:center; font-weight:bold; font-size: 24px">Loading Chart Data Please Wait</div>
</apex:page>

If everything has gone smoothly, you should end up with something that looks like this

chart

With our page alive, it’s a simple matter to add it to a Salesforce site. Anyone can view it, and anyone you give the form link to will be able to add data to it. As data is added the chart will automatically redraw itself every 10 seconds with the new data set. Then it was just a simple matter of having the chart open on some computer and using the chrometab app for chrome to send it to my Chromecast. Now we can be reminded of how stupid I am all the time….. what have I done?


Stripping Nulls from a JSON object in Apex

NOTE: If you don’t want to read the wall of text/synopsis/description just scroll to the bottom. The function you need is there.

I feel dirty. This is the grossest hack I have had to write in a while, but it is also too useful not to share (I think). Salesforce did us an awesome favor by introducing the JSON.serialize utility; it can take any object and serialize it into JSON, which is great! The only problem is that you have no control over the output JSON, as the method takes no params except for the source object. Normally this wouldn’t be a big deal, I mean there isn’t a lot to customize about JSON usually, it just is what it is. There is however one case where you may want to control the output, and that is in the case of nulls. You see, most of the time when you are sending JSON to a remote service, if you have a param specified as null, it will just skip over it as it should. Some of the stupider APIs try and process that null as if it were a value. This is especially annoying when the API has optional parameters and you are using a language like Apex, which being strongly typed makes it very difficult to modify an object at run time to remove a property. For example, say I am ordering a pizza via some kind of awesome pizza ordering API. The API might take a size, some toppings, and a desired delivery time (for future deliveries). Their API documentation states that delivery time is an optional param, and if not specified the pizza will be delivered as soon as possible, which is nice. So I write my little class in Apex

    public class pizzaOrder
    {
    	public string size;
    	public list<string> toppings;
    	public datetime prefferedDeliveryTime;
    
    }
    
    public static string orderPizza(string size, list<string> toppings, datetime prefferedDeliveryTime)
    {
    	pizzaOrder thisOrder = new pizzaOrder();
    	thisOrder.size = size;
    	thisOrder.toppings = toppings;
    	thisOrder.prefferedDeliveryTime	= prefferedDeliveryTime;
    	
    	string jsonOrderString = JSON.serialize(thisOrder);
    	
    	//return the serialized order so the caller can send it to the API
    	return jsonOrderString;
    }
    
    list<string> toppings = new list<string>();
    toppings.add('cheese');
    toppings.add('black olives');
    toppings.add('jalepenos');
                     
    orderPizza('large', toppings, null);

And your resulting JSON looks like

{"toppings":["cheese","black olives","jalepenos"],"size":"large","prefferedDeliveryTime":null}

Which would work beautifully, unless the pizza API is set up to treat any key present in the JSON object as an actual value, which in this case would be null. The API would freak out saying that null isn’t a valid datetime, and you are left yelling at the screen trying to figure out why the stupid API can’t figure out that if an optional param has a null value, it should just skip it instead of trying to evaluate it.

Now in this little example you could easily work around the issue by just specifying the prefferedDeliveryTime as the current date time if the user didn’t pass one in. Not a big deal. However, what if there was not a valid default value to use? In my recent problem there is an optional account number I can pass in to the API. If I pass it in, it uses that. If I don’t, it uses the account number set up in the system. So while I want to support the ability to pass in an account number, if the user doesn’t enter one my app will blow up, because when the API encounters a null value for that optional param it explodes. I can’t not have a property for the account number because I might need it, but including it as a null (when the user just wants the default, which Salesforce has no way of knowing) makes the API fail. Ok, whew, so now hopefully we all understand the problem. Now what the hell do we do about it?

While trying to solve this, I explored a few different options. At first I thought of deserializing the JSON object back into a generic object (map<string,object>), checking for nulls in any of the key/value pairs, removing them, then serializing the result. This failed due to difficulties with detecting the type of object each value was (tons of ‘unable to convert list<any> to map<string,object>’ errors that I wasn’t able to resolve). Of course you also have the recursion issue, since you’ll need to look at every element in the entire object, which could be infinitely deep/complex, so that adds another layer of complexity. Not impossible, but probably not super efficient, and I couldn’t even get it to work. Best of luck if anyone else tries.

The next solution I investigated was trying to write my own custom JSON generator that would just not put nulls in the object in the first place. This too quickly fell apart, because I needed a generic function that could take any kind of object and turn it into JSON, since this function would have to be used to strip nulls from about 15 different API calls. I didn’t look super hard at this because all the code I saw looked really messy and I just didn’t like it.

The solution I finally decided to go for, while gross, dirty, hackish and probably enough to earn me a spot in programmer hell, is also simple and efficient. Once I remembered that JSON is just a string, and can be manipulated as such, I started thinking about maybe using regex (yes, I am aware that when you solve one problem with regex, now you have two) to just strip out the nulls. Of course then you have to worry about cleaning up syntax (extra commas, commas against braces, etc) when you just rip elements out of the JSON string, but I think I’ve got a little function here that will do the job, at least until salesforce offers a ‘Don’t serialize nulls’ option in their JSON serializer.

    public static string stripJsonNulls(string JsonString)
    {

    	if(JsonString != null)   	
    	{
			JsonString = JsonString.replaceAll('\"[^\"]*\":null',''); //basic removeal of null values
			JsonString = JsonString.replaceAll(',{2,}', ','); //remove duplicate/multiple commas
			JsonString = JsonString.replace('{,', '{'); //prevent opening brace from having a comma after it
			JsonString = JsonString.replace(',}', '}'); //prevent closing brace from having a comma before it
			JsonString = JsonString.replace('[,', '['); //prevent opening bracket from having a comma after it
			JsonString = JsonString.replace(',]', ']'); //prevent closing bracket from having a comma before it
    	}
  	
	return JsonString;
    }

Which after running on our previously generated JSON we get

{"toppings":["cheese","black olives","jalepenos"],"size":"large"}

Notice, no null prefferedDeliveryTime key. It’s not null, it’s just nonexistent. So there you have it, 6 lines of find and replace to remove nulls from your JSON object. Yes, you could combine them and probably make it a tad more efficient; I went for readability here. So sue me. Anyway, hope this helps someone out there, and if you end up using this, I’m sure I’ll see you in programmer hell at some point. Also, if anyone can make my initial idea of recursively spidering the JSON object and rebuilding it as a map of <string,object> without the nulls actually work, I’d be most impressed.
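
As a bonus, since the whole trick is just string surgery, it’s easy to experiment with outside of Salesforce too. Here’s a quick javascript translation of the function above that you can paste into a browser console to test the regexes against your own JSON (a sketch that mirrors the Apex logic, nothing more):

//quick JS translation of the Apex function above, handy for poking at the regexes
function stripJsonNulls(jsonString)
{
	if(jsonString != null)
	{
		jsonString = jsonString.replace(/"[^"]*":null/g, '') //basic removal of null values
			.replace(/,{2,}/g, ',')   //collapse duplicate/multiple commas
			.replace(/{,/g, '{')      //no comma right after an opening brace
			.replace(/,}/g, '}')      //no comma right before a closing brace
			.replace(/\[,/g, '[')     //no comma right after an opening bracket
			.replace(/,\]/g, ']');    //no comma right before a closing bracket
	}
	return jsonString;
}

console.log(stripJsonNulls('{"toppings":["cheese"],"size":"large","prefferedDeliveryTime":null}'));
//prints: {"toppings":["cheese"],"size":"large"}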


Super Handy Mass Deploy Tool

So I know it has been a while. I’m not dead, I promise, just busy. Busy with trying to keep about a thousand orgs in sync, pushing code changes, layout changes, all kinds of junk from one source org to a ton of other orgs. I know you are saying ‘just use managed packages, or change sets’. Managed packages can be risky early in the dev process because you usually can’t remove components, and you get locked into a bit of a structure that you might not quite be settled on. Change sets are great, but many of these orgs are not linked; they are completely disparate, for different clients. Over the course of the last month or two it’s become apparent that just shuffling data around in Eclipse wasn’t going to do it anymore. I was going to have to break into using ANT and the Salesforce migration tool.

For those unaware, ANT is a Java-based build tool, and the Salesforce migration tool is a plugin for it; together they allow you to script deployments, which can be pretty useful. Normally though, trying to actually set up a deployment with ANT is a huge pain in the butt, because you have to be modifying XML files, setting up build files and stuff; in general it’s kind of slow to do. However, if you could write a program to generate the files needed by the deployment script, now that would be handy. That is where this tool I wrote comes in. Now don’t get me wrong, it’s nothing fancy. It just helps make generating deployments a little easier. What it does is allow you to specify a list of orgs and their credentials that you want to deploy to. In the deploy folder you place the package.xml file that contains the definitions of what you want to deploy, and the metadata itself (classes, triggers, objects, etc). Then when you run the program it will log into each org one by one, back it up, then deploy your package contents. It’s a nice set it and forget it way of deploying to numerous orgs in one go.

So here is what we are going to do. First of all, you are going to need to make sure you have a Java Runtime Environment (JRE) and the Java Development Kit (JDK) installed. Make sure to set your JAVA_HOME environment variable path to wherever the JDK library is installed (for me it was C:\Program Files\Java\jdk1.8.0_05). Then grab ANT and follow its guide for install. Then grab the Force.com migration tool and get that installed in your ANT setup. Then last, grab my SF Deploy Tool from bitbucket (https://Daniel_Llewellyn@bitbucket.org/Daniel_Llewellyn/sf-deploy-tool.git)

Now we have all the tools we need to deploy some components, but we don’t have anything to deploy, and we haven’t set up who we are going to deploy it to. So let’s use Eclipse to grab our deployable contents and generate our package.xml file (which contains the list of stuff to deploy). Fire up Eclipse and create a new project. For the project contents, select whatever you want to deploy to your target orgs. This is why using a package is useful, because it simplifies this process. Let the IDE download all the files for your project, then navigate to the project contents folder on your computer. Copy everything inside the src folder, including that package.xml file. Then paste it into the deploy folder of my SF deploy tool. This is the payload that will be pushed to your orgs.

The last step in our setup is to tell the deploy tool which orgs to push this content into. Open the orgs.txt file in the SF Deployer folder and enter the required information, one org per line. Each org requires a username, password, token, url and name attribute, separated by semicolons, with an equal sign used to denote each key/value. EX

username=xxxx;password=xxxxx;token=xxxxxxxxx;url=https://login.salesforce.com;name=TEST ORG

Now with all your credentials saved, you can run the SalesforceMultiDeploy.exe utility. It will iterate over each org one by one, back up the org, then deploy your changes. The console window will keep you informed of its progress as it goes and let you know when it’s all done. Of course this process is still subject to all the normal deploy problems you can encounter, but if everything in the target orgs is prepared to accept your deployment package, this can make life much easier. You could for example write another small script that copies the content from your source org at the end of each week, slaps it into the deploy folder, then invokes the deployment script to have an automated process that keeps your orgs in sync; something like the sketch below.
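
Here’s what that could look like as a little Node script (a sketch with made-up paths; fs-extra is an npm package I’m assuming here for the recursive copy, and you’d schedule the whole thing with Windows Task Scheduler or cron):

var fs = require('fs-extra');              //npm install fs-extra, for easy recursive copies
var child_process = require('child_process');

//hypothetical paths; point these at your Eclipse project and the deploy tool folder
var sourceDir = 'C:/workspace/sourceOrg/src';
var deployDir = 'C:/sf-deploy-tool/deploy';

//copy the latest retrieved metadata (including package.xml) into the deploy folder
fs.copySync(sourceDir, deployDir);

//kick off the multi-org deployment and pipe its output to this console
var deploy = child_process.spawn('C:/sf-deploy-tool/SalesforceMultiDeploy.exe', [], {stdio: 'inherit'});

deploy.on('close', function(code) {
	console.log('Deployment process finished with exit code ' + code);
});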

Also, I just threw this tool together quickly and would love some feedback. So either fork it and change it, or just give me ideas and I’ll do my best to implement them (one thing I really want to do is make this multi-threaded so that it can do deployments in parallel instead of serially, which would be a huge bonus for deployment speeds). Anyway, as always, I hope this is useful, and I’ll catch ya next time.

-Kenji