Oh my god. It's full of code!


Merge Salesforce Package.xml Files

So I’ve recently been asked to play a larger role in our version control process, which is not my strongest suit, so it’s definitely been a learning experience. Our process at the moment involves creating a feature branch in Bitbucket for each new user story. Folks do whatever work they need to do and then create a change set. From that they generate a zip file containing all the stuff they worked on plus a package.xml file (using the super cool ORGanizer Chrome plugin I just found out about). Once they have that they send it over to me, I get their changes set up as a feature branch, and then I create a pull request to get it back into the master branch.

Now I’m not sure if this is the best way to do things or not, but in each new branch we need to append all the contents of the new package.xml into the existing package.xml. Much to my surprise I couldn’t find a quick, clean, easy way to merge two XML files. I tried a few online tools and nothing really seemed to work right, so, me being me, I decided to write something to do it for me. I wasn’t quite sure how to approach this, but then I realized that, from my post a couple weeks ago, I can convert XML into a JavaScript object easily. Once I do that I can simply merge the objects in memory and build a new file. One small snag I found is that the native JavaScript methods for merging objects actually overwrite any properties of the same name; they don’t smash them together like I was hoping. So with a little bit of elbow grease I managed to write some utility methods for smashing all the data together.

To use this, simply throw your XML files in the packages directory and run ‘runMerge.bat’ (this does require you to have Node.js installed). It will spit out a new package.xml in the root directory that is a merge of all your package.xml files. Either way, hope this helps someone.
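The core trick, for the curious, looks roughly like this (a minimal sketch assuming xml2js-parsed objects; the mergePackages helper here is hypothetical, not the exact code in the download):

const xml2js = require('xml2js');

//merge the <types> entries of two parsed package.xml objects,
//combining by type name and de-duplicating the members
function mergePackages(pkgA, pkgB) {
    //index the first package's types by their <name> element
    const typeMap = {};
    for (const t of pkgA.Package.types) typeMap[t.name[0]] = new Set(t.members);

    //fold the second package in, creating new type entries as needed
    for (const t of pkgB.Package.types) {
        if (!typeMap[t.name[0]]) typeMap[t.name[0]] = new Set();
        t.members.forEach(m => typeMap[t.name[0]].add(m));
    }

    //rebuild the structure xml2js expects and serialize it back to XML
    pkgA.Package.types = Object.keys(typeMap).sort().map(name => ({
        members: [...typeMap[name]].sort(),
        name: [name]
    }));
    return new xml2js.Builder().buildObject(pkgA);
}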

UPDATE (5/19): Okay, after squashing a few bugs I now proudly release a version of package merge that actually, like… works (I hope). Famous last words, I know.
UPDATE (5/20): Now supports automatic sorting of the package members, XML files in sub-directories of the packages folder, forcing a package version, and merging all data into a master branch package file for continual cumulative add-ons.
Download Package Merge Here!

Mass Updating Salesforce Country and State Picklist Integration Values

So it’s Friday afternoon, about 4:00 pm, and I’m getting ready to wrap it up for the day. Just as I’m about to get up I hear the dreaded ping of my work instant messenger indicating I’ve been tagged. So of course I see what’s up: it’s a coworker wondering if there is any way I might be able to help with what otherwise will be an insanely laborious chore. They needed to change the ‘integration value’ on all the states in the United States from the full state name to just the state code (e.g. Minnesota -> MN) in the State and Country Picklist. Doing this manually would take forever, and moreover it had to be done in 4 different orgs. I told him I’d see what I could do over the weekend.

So my first thought was of course to see if I could do it in Apex: just find the table that contains the data, write a quick script, and boom, done. Of course, it’s Salesforce, so it’s never that easy. The state and country codes are stored in the metadata, and there isn’t really a great way to modify that directly in Apex (that I know of, without using that metadata wrapper class, but I didn’t want to have to install a package and learn a whole new API for this one simple task). I fooled around with a few different ideas in Apex but after a while it just didn’t seem doable. I couldn’t find any way to update the metadata even though I could fetch it.

After digging around a bit I decided the best way was probably to simply download the metadata, modify it, and push it back. So first I had to actually get the metadata file. At first I was stuck because AddressSettings didn’t appear in the list of metadata objects in VS Code (I have a package.xml builder that lets me just select whatever I want from a list and it builds the file for me) and I didn’t know how to build a package.xml file that would get it. I found a handy Stack Overflow post that gave me the command

sfdx force:source:retrieve -m Settings:Address

which worked to pull the data. The same post also showed the package.xml file that could be used to either pull or push that metadata (with this you don’t even need the above command; you can just pull it directly by using ‘retrieve source in manifest from org’ in VS Code).

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <version>46.0</version>
    <types>
        <members>Address</members>
        <name>Settings</name>
    </types>
</Package>

Now that I had the data, the only issue was that there wasn’t an easy way to just do a find and replace to update the file. The value for each state (only in the United States) had to be copied from the state code field into the integration value field. So I decided to whip up a quick Node.js project to do it. You can download it here (it comes with the original and fixed Address.settings-meta.xml files as well if you just want to grab those). It’s a pretty simple script, but it does require xml2js because parsing XML is a pain otherwise.

const fs = require('fs');
var xml2js = require('xml2js');
var parseString = xml2js.parseString;

try 
{
    const data = fs.readFileSync('Address.settings-meta.xml', 'utf8');

    parseString(data, function (err, result) {

        //the countries live under AddressSettings > countriesAndStates > countries
        var root = result.AddressSettings.countriesAndStates[0].countries;

        for(var i = 0; i < root.length; i++)
        {
            var countryName = root[i].integrationValue[0];
            if(countryName == 'United States')
            {
                console.log('Found US!');

                //copy each state's ISO code over its integration value
                for(var j = 0; j < root[i].states.length; j++)
                {
                    console.log('Changing ' + root[i].states[j].integrationValue[0] + ' to ' + root[i].states[j].isoCode[0]);
                    root[i].states[j].integrationValue[0] = root[i].states[j].isoCode[0];
                }
            }
        }

        //serialize the modified object back to XML and write it out
        var builder = new xml2js.Builder();
        var xml = builder.buildObject(result);

        fs.writeFile("Address.settings-meta-fixed.xml", xml, function(err) {
            if(err) {
                return console.log(err);
            }
            console.log("The file was saved!");
        });
    });
}
catch (err)
{
    console.error(err)
}

Output of my script. Always satisfying when stuff works.

With my fixed address settings file the last step was “simply” to push it back into Salesforce. I’ll be honest, I haven’t used SFDX much, and this last step actually took longer than it should have. I couldn’t decide if I should be using force:source:deploy or force:mdapi:deploy. Seeing as I had to do this in a production org, I originally thought I had to use mdapi, but a recent update made that no longer the case. mdapi wanted me to build a zip file or something and I got frustrated trying to figure it out. I’m just trying to push one damn file, why should I need to be building manifests and making zip files and whatever?!

After some trial and error with force:source:deploy I found that it could indeed push to prod and would take just a package.xml as its input. Initially it complained about not running any tests, so I told it to only run local tests. That also failed because some other code in the org is throwing errors. As a workaround I simply provided it a specific test to run (ChangePasswordController, which is in like every org) and that worked. The final command being:

sfdx force:source:deploy -x manifest/package.xml -w 10 -l RunSpecifiedTests --runtests ChangePasswordController


Hooray it finally worked!

And voila! The fixed metadata was pushed into the org and I spared my coworker days of horrific manual data entry. I know in the end this all turned out to be a fairly simple process, but it took me much longer than I initially figured, mostly due to not knowing the processes involved or how to reference the data I wanted, so I figured maybe this would save someone some time. Till next time.

Apex list all fields on page layout

Hey everyone, I know it’s been a while but I am in fact still alive. Anyway, I’ve got something new for ya. I’ve been asked to figure out which fields are actually being used by evaluating page layouts. Fields that are actually in use will need to have their values exported from a legacy SF instance and imported into a new one. Instead of manually going through every layout, logging every field, and finding its data type, which would be crazy slow and error prone, I wrote a nifty little script. You simply feed it page layout names and it will get all the fields on them, describe them, create suggested mappings and transformations, then email you the results with each layout as a separate CSV attachment so you can use it as a starting point for an Excel mapping document. It can also describe the picklist values for every field on any object described. Hopefully your sandbox is the same as your production org, so you can just save this in there and run it without having to deploy to prod. Remember to turn on email deliverability when running from a sandbox! This is still pretty new, as in my first time using it was immediately after building it, so if you find errors with its output or have any suggestions I’m definitely open to hearing about them in the comments.

UPDATE: After adding some more features it became too large to be an execute anonymous script, so it’s now been converted to a class. Save this into a new Apex class, then from execute anonymous call LayoutDescriber.sendLayoutInfo() to run with default settings, or pass in a list of page layout names and whether or not you want picklist values (see the usage snippet after the class). If you want to run it as a script you can remove the picklist value builder (lines 156-210) and the check for valid page layout names (lines 26-41). That should get it small enough to run.

public class LayoutDescriber
{
    /**
    *@Description gets all the fields for the provided page layouts and emails the current user a csv document for each. It
                  also gets related field data and provides suggested mapping configuration for import. Optionally can get picklist values for objects.
    *@Param pageLayoutNames a list of page layout names. Format is [objectName]-[namespace]__[Page layout name]. 
            Omit namespace and underscores if layout is not part of a managed package.
            EX: Account-SkienceFinSln__Address 
            OR
            EX: Account-Account Layout
    *@Param getPicklistValues flag that controls whether picklist values for described objects should be included.
    **/
    public static void sendLayoutInfo(list<string> pageLayoutNames, boolean getPicklistValues)
    { 
        List<Metadata.Metadata> layouts = Metadata.Operations.retrieve(Metadata.MetadataType.Layout, pageLayoutNames);
        
        for(string layOutName : pageLayoutNames)
        {
            boolean layoutFound = false;
            for(integer i = 0; i < layouts.size(); i++)
            {
                Metadata.Layout layoutMd = (Metadata.Layout) layouts.get(i);
                if(layoutMd.fullName == layOutName)
                {
                    layoutFound = true;
                }
            }
            if(layoutFound == false)
            {
                throw new applicationException('No layout with name ' + layOutName + ' could be found. Please check and make sure namespace is included if needed');
            }
        }
        map<string,map<string,list<string>>> objectPicklistValuesMap = new map<string,map<string,list<string>>>();
        
        map<string,list<string>> objectFieldsMap = new map<string,list<string>>();
        
        for(integer i = 0; i < layouts.size(); i++)
        {
            Metadata.Layout layoutMd = (Metadata.Layout) layouts.get(i);
        
            list<string> objectFields = new list<string>();
            
            for (Metadata.LayoutSection section : layoutMd.layoutSections) 
            {        
                for (Metadata.LayoutColumn column : section.layoutColumns) 
                {
                    if (column.layoutItems != null) 
                    {
                        for (Metadata.LayoutItem item : column.layoutItems) 
                        {
                            if(item.field == null) continue;
                            objectFields.add(item.field);
                        }
                    }
                }
            }
            objectFields.sort();
            objectFieldsMap.put(pageLayoutNames[i].split('-')[0],objectFields);
        }
        
        system.debug(objectFieldsMap);
        
        Map<String, Schema.SObjectType> globalDescribe = Schema.getGlobalDescribe();
        
        Map<String, Map<String, Schema.SObjectField>> objectDescribeCache = new Map<String, Map<String, Schema.SObjectField>>();
        
        String userName = UserInfo.getUserName();
        User activeUser = [Select Email From User where Username = : userName limit 1];
        String userEmail = activeUser.Email;
        
        Messaging.SingleEmailMessage message = new Messaging.SingleEmailMessage();
        message.toAddresses = new String[] { userEmail };
        message.subject = 'Describe of fields on page layouts';
        message.plainTextBody = 'Save the attachments and open in Excel. Field names and types should be properly formatted.';
        Messaging.SingleEmailMessage[] messages =   new List<Messaging.SingleEmailMessage> {message};
        list<Messaging.EmailFileAttachment> attachments = new list<Messaging.EmailFileAttachment>();
        
        integer counter = 0;    
        for(string thisObjectType : objectFieldsMap.keySet())
        {
            list<string> fields = objectFieldsMap.get(thisObjectType);
            
            Map<String, Schema.SObjectField> objectDescribeData;
            if(objectDescribeCache.containsKey(thisObjectType))
            {
                objectDescribeData = objectDescribeCache.get(thisObjectType);
            }
            else
            {
                objectDescribeData = globalDescribe.get(thisObjectType).getDescribe().fields.getMap();
                objectDescribeCache.put(thisObjectType,objectDescribeData);
            }
        
        
            string valueString = 'Source Field Name, Source Field Label, Source Field Type, Source Required, Source Size, Is Custom, Controlling Field, Target Field Name, Target Field Type, Target Required, Transformation \r\n';
            for(string thisField : fields)
            {
                if(thisField == null || !objectDescribeData.containsKey(thisField))
                {
                    system.debug('\n\n\n--- Missing field! ' + thisField);
                    if(thisField != null) valueString+= thisField + ', Field Data Not Found \r\n';
                    continue;
                }
                
                Schema.DescribeFieldResult dfr = objectDescribeData.get(thisField).getDescribe();
                
                if( (dfr.getType() == Schema.DisplayType.picklist || dfr.getType() == Schema.DisplayType.MultiPicklist) && getPicklistValues)
                {
                    List<String> pickListValuesList= new List<String>();
                    List<Schema.PicklistEntry> ple = dfr.getPicklistValues();
                    for( Schema.PicklistEntry pickListVal : ple)
                    {
                        pickListValuesList.add(pickListVal.getLabel());
                    }     
        
                    map<string,list<string>> objectFields = objectPicklistValuesMap.containsKey(thisObjectType) ? objectPicklistValuesMap.get(thisObjectType) : new map<string,list<string>>();
                    objectFields.put(thisField,pickListValuesList);
                    objectPicklistValuesMap.put(thisObjectType,objectFields);
                }
                boolean isRequired = !dfr.isNillable() && dfr.getType() != Schema.DisplayType.Boolean;
                string targetFieldName = dfr.isCustom() ? '' : thisField;
                string targetFieldType = dfr.isCustom() ? '' : dfr.getType().Name();
                string defaultTransform = '';
                
                if(dfr.getType() == Schema.DisplayType.Reference)
                {
                    defaultTransform = 'Update with Id of related: ';
                    for(Schema.sObjectType thisType : dfr.getReferenceTo())
                    {
                        defaultTransform+= string.valueOf(thisType) + '/';
                    }
                    defaultTransform = defaultTransform.removeEnd('/');
                }    
                if(thisField == 'LastModifiedById') defaultTransform = 'Do not import';
                valueString+= thisField +',' + dfr.getLabel() + ',' +  dfr.getType() + ',' + isRequired + ',' +dfr.getLength()+ ',' +dfr.isCustom()+ ',' +dfr.getController() + ','+ 
                              targetFieldName + ',' + targetFieldType +',' + isRequired + ',' + defaultTransform +'\r\n';
            }
        
            Messaging.EmailFileAttachment efa = new Messaging.EmailFileAttachment();
            efa.setFileName(pageLayoutNames[counter]+'.csv');
            efa.setBody(Blob.valueOf(valueString));
            attachments.add(efa);
            
            counter++;
        }
        //if we are getting picklist values we will now build a document for each object. One column per picklist, with its rows being the values of the picklist
        if(getPicklistValues)
        {
            //loop over the object types
            for(string objectType : objectPicklistValuesMap.keySet())
            {
                //get all picklist fields for this object
                map<string,list<string>> objectFields = objectPicklistValuesMap.get(objectType);
                
                //each row of data will be stored as a string element in this list
                list<string> dataLines = new list<string>();
                integer rowIndex = 0;
                
                //string to contains the header row (field names)
                string headerString = '';
                
                //due to how the data is structured (column by column) but needs to be built (row by row) we need to find the column with the maximum amount of values
                //so our other columns can insert a correct number of empty space placeholders if they don't have values for that row.
                integer numRows = 0;
                for(string fieldName : objectFields.keySet())
                {
                    if(objectFields.get(fieldName).size() > numRows) numRows = objectFields.get(fieldName).size();
                }
                
                //loop over every field now. This is going to get tricky because the data is structured as a field with all its values contained but we need to build
                //our spreadsheet row by row. So we will loop over the values and create one entry in the dataLines list for each value. Each additional field will then add to the string
                //as required. Once we have constructed all the rows of data we can append them together into one big text blob and that will be our CSV file.
                for(string fieldName : objectFields.keySet())
                {
                    headerString += fieldName +',';
                    rowIndex = 0;
                    list<string> picklistVals = objectFields.get(fieldName);
                    for(integer i = 0; i<numRows; i++ )
                    {
                        string thisVal = i >= picklistVals.size() ? ' ' : picklistVals[i]; 
                        if(dataLines.size() <= rowIndex) dataLines.add('');
                        dataLines[rowIndex] += thisVal + ', ';
                        rowIndex++;        
                    }
                }
                headerString += '\r\n';
                
                //now that our rows are constructed, add newline chars to the end of each
                string valueString = headerString;
                for(string thisRow : dataLines)
                {            
                    thisRow += '\r\n';
                    valueString += thisRow;
                }
                
                Messaging.EmailFileAttachment efa = new Messaging.EmailFileAttachment();
                efa.setFileName('Picklist values for ' + objectType +'.csv');
                efa.setBody(Blob.valueOf(valueString));
                attachments.add(efa);        
            }
        }
        
        
        message.setFileAttachments( attachments );
        
        Messaging.SendEmailResult[] results = Messaging.sendEmail(messages);
         
        if (results[0].success) 
        {
            System.debug('The email was sent successfully.');
        } 
        else 
        {
            System.debug('The email failed to send: ' + results[0].errors[0].message);
        }
    }
    public class applicationException extends Exception {}
    
    public static void sendLayoutInfo()
    {
        list<string> pageLayoutNames = new List<String>();
        pageLayoutNames.add('Account-Account Layout');
        pageLayoutNames.add('Contact-Contact Layout');
        pageLayoutNames.add('Opportunity-Opportunity Layout');
        pageLayoutNames.add('Lead-Lead Layout');
        pageLayoutNames.add('Task-Task Layout');
        pageLayoutNames.add('Event-Event Layout');
        pageLayoutNames.add('Campaign-Campaign Layout');
        pageLayoutNames.add('CampaignMember-Campaign Member Page Layout');
        sendLayoutInfo(pageLayoutNames, true);
    }
}
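For reference, invoking it from execute anonymous looks like this (the layout names are just examples pulled from the defaults; swap in your own):

//run with explicit layouts and include the picklist value attachments
LayoutDescriber.sendLayoutInfo(new list<string>{'Account-Account Layout', 'Contact-Contact Layout'}, true);

//or just take the defaults
LayoutDescriber.sendLayoutInfo();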


The result is an email with a bunch of attachments: one for each page layout and one for each object’s picklist fields (if enabled).

mmmmm attachments

For example this is what is produced for the lead object.

Nicely formatted table of lead fields and suggested mappings.


And here is what it built for the picklist values

Sweet sweet picklist values. God I love properly formatted data.

Anyway, I hope this might help some of y’all out there who are given the painful task of finding what fields are actually being used on page layouts. Till next time.

Salesforce development is broken (and so am I)

Before I begin this is mostly a humor and venting post. Don’t take it too seriously.

So I’m doing development on a package that needs to work for both person accounts and regular accounts. Scratch orgs didn’t exist when this project was started, so we originally had a developer org, then a packaging org which contained the namespace for the package. (This ended up being a terrible idea, because all kinds of weird bugs start to show up when you do your dev without a namespace and then try to add one. Any dynamic code pretty much breaks: you have to remove the namespace from any data returned by Apex controllers that provide data to field inputs in Lightning, and field set names, object names, etc. all get messed up.)

Still, after adding some workarounds we got that working. However, since the developer org doesn’t have person accounts, we need another org that does in order to add the extra bits of logic where needed. We wanted to keep the original dev org without person accounts, as it’s sort of an auxiliary feature and we didn’t want it causing any problems with the core package.

Development of the core package goes on for about a year. Now it’s time to tackle adding the extra logic for person accounts, which in themselves are awful. I don’t know who thought it was a good idea to basically have two different schemas, with the second being a half broken, poorly defined bastardization of the original good version. Seriously, they are sometimes account-like, sometimes contact-like; the account has the contact fields, but a separate contact object kind of exists, yet you cannot get to it without directly entering the Id in the URL. The whole thing barely makes any sense, and interacting with them from Apex is an absolute nightmare.

In this case account and contact data are integrated with a separate system, which also has concepts of accounts and contacts. Normally we create an account, then tie contacts to it. In the case of person accounts we have to create some kind of weird hybrid of the data, creating both an account and contact from one object, but not all the data is directly on the account. For example, we need to get the mailing address off the contact portion, plus a few other custom fields that the package adds. So we have to like smash the two objects together and send it. It’s just bizarre.

Anyway, at this point scratch orgs exist, but we cannot create one from our developer org for some reason; the dev hub option just doesn’t exist. The help page says dev hub/scratch orgs are available in developer orgs, but apparently not in this specific one, for no discernible reason.

We cannot enable them in our packaging org either, as you cannot enable dev hub in an org with a namespace. So my coworker instead enables dev hub in his own personal dev org and creates me a scratch org, into which I install the unmanaged version of the package to easily get all the code and such. Then I just manually roll my changes from that org into dev, and from dev into packaging. That works fine until the scratch org expires, which apparently it just did. Now I cannot log into it, and my dev is suddenly halted. There were no warning emails received (maybe he got one, but didn’t tell me) and no way to re-enable the org. It’s just not accessible anymore. Thank goodness I have local copies of my code (we haven’t really gotten version control integrated into our workflow yet) or else I’d have lost all my work.

I now have to set out to get a new org set up (when I’m already late for a deadline on some fixes). Fine, so I attempt to create a scratch org from my own personal dev org (which itself is halfway broken; it still has the theme from before ‘classic’, and enabling Lightning gives me a weird hybrid version which looks utterly ridiculous).

I enable dev hub and set out to create my scratch org from VS Code (I’ve never done this so I’m following a tutorial). So I create my project, authorize my org, then lo and behold, an error occurs while trying to create my scratch org: “ERROR running force:org:create: Must pass a username and/or OAuth options when creating an AuthInfo instance.” I can’t find any information on how to fix this; I tried recreating the project and reauthorizing, and still nothing. Not wanting to waste any more time, I say fine, I’ll just create a regular old developer org, install the unmanaged package, and enable person accounts.

I create my new dev org (after some mild annoyance at not being able to end my username with a number) and get it linked to my IDE. So now I need to enable person accounts, but wait, you cannot do that yourself. You have to contact support to enable that, and guess what, Salesforce no longer allows you to create cases from a developer org. Because this package is being developed as an ISV-type package, I don’t have a prod org to log in to and create a case from. So now I’m mostly stuck. I’ve asked a coworker who has access to a production org to log a case, giving them my org ID, and I’m hoping support will be willing to accept a feature request for an org other than the one the case is coming from. Otherwise I don’t know what I’ll do.


I’m sure once things mature more it’ll get better, and a good chunk of these problems are probably my own fault somehow, but still, this is nuts.


Salesforce Lightning DataTable Query Flattener

So I was doing some playing around with the Salesforce Lightning datatable component, and while it does make displaying query data very easy, it isn’t super robust when it comes to handling parent and child records. Just to make life easier in the future, I thought it might be nice to make a function which could take a query result returned by a controller and ‘flatten’ it so that all the data is available to the datatable, since it cannot access nested arrays or objects. Of course the table itself doesn’t have a way to iterate over nested rows, so the child array flattening is not quite as useful (unless, say, you wanted to show a contact’s most recent case or something). Anyway, hopefully this will save you from having to write wrapper classes, or from having to skip the datatable entirely when you have parent or child nested data.
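To make the idea concrete, here’s roughly what the flattener does to a single row (a sketch; the exact keys depend on your query, but these match the columns used below):

//before: one contact row as returned by the Apex controller (nested objects/arrays)
var row = {
    Name: 'Jane Doe',
    Phone: '555-1234',
    Owner: { Name: 'Dan', Profile: { Name: 'System Administrator' } },
    Cases: [ { Subject: 'Printer on fire' } ]
};

//after flattening: every value is a top-level key the datatable can bind to
var flat = {
    Name: 'Jane Doe',
    Phone: '555-1234',
    Owner_Name: 'Dan',
    Owner_Profile_Name: 'System Administrator',
    Cases_0_Subject: 'Printer on fire'
};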

Apex Controller

public with sharing class ManageContactsController {

    @AuraEnabled
    public static list<Contact> getContacts()
    {
        //note: in a subquery ORDER BY has to come before LIMIT
        return [select firstname, name, lastname, email, phone, Owner.name, Owner.Profile.Name, (select id, subject from cases order by createdDate desc limit 1) from contact];
    }
}

Lightning Controller

({
   init: function (component, event, helper) {
        component.set('v.mycolumns', [
                {label: 'Contact Name', fieldName: 'Name', type: 'text'},
                {label: 'Phone', fieldName: 'Phone', type: 'phone'},
                {label: 'Email', fieldName: 'Email', type: 'email'},
            	{label: 'Owner', fieldName: 'Owner_Name', type: 'text'},
            	{label: 'Most Recent Case', fieldName: 'Cases_0_Subject', type: 'text'}
            ]);
        helper.getData(component, event, 'getContacts', 'mydata');
    }
})

Helper

({
    flattenObject : function(propName, obj)
    {
        var flatObject = {};

        for(var prop in obj)
        {
            //prepend the parent's name unless it's just an array index
            var propHasName = isNaN(propName);
            var preAppend = propHasName ? propName+'_' : '';

            //if this property is an object (or array), flatten it recursively
            if(typeof obj[prop] == 'object' && obj[prop] !== null)
            {
                flatObject[preAppend+prop] = Object.assign(flatObject, this.flattenObject(preAppend+prop, obj[prop]));
            }
            else
            {
                flatObject[preAppend+prop] = obj[prop];
            }
        }
        return flatObject;
    },
    
	flattenQueryResult : function(listOfObjects) {
        //typeof never returns 'Array', so use Array.isArray to wrap a single record
        if(!Array.isArray(listOfObjects))
        {
            listOfObjects = [listOfObjects];
        }

        console.log('List of Objects is now....');
        console.log(listOfObjects);
        for(var i = 0; i < listOfObjects.length; i++)
        {
            var obj = listOfObjects[i];
            for(var prop in obj)
            {
                if(!obj.hasOwnProperty(prop)) continue;
                //arrays are objects too, and flattenObject handles numeric (array index) keys itself
                if(typeof obj[prop] == 'object' && obj[prop] !== null)
                {
                    obj = Object.assign(obj, this.flattenObject(prop, obj[prop]));
                }
            }
        }
        return listOfObjects;
    },
    getData : function(component, event, methodName, targetAttribute) {
        var action = component.get('c.'+methodName);
        action.setCallback(this, $A.getCallback(function (response) {
            var state = response.getState();
            if (state === "SUCCESS") {
                console.log('Got Raw Response for ' + methodName + ' ' + targetAttribute);
                console.log(response.getReturnValue());
                
                var flattenedObject = this.flattenQueryResult(response.getReturnValue());
                
                component.set('v.'+targetAttribute, flattenedObject);
                
                console.log(flattenedObject);
            } else if (state === "ERROR") {
                var errors = response.getError();
                console.error(errors);
            }
        }));
        $A.enqueueAction(action);
    }
})

Component (Sorry my code highlighter didn’t like trying to parse this)

<aura:component controller="ManageContactsController" implements="forceCommunity:availableForAllPageTypes" access="global">
    <aura:attribute name="mydata" type="Object"/>
    <aura:attribute name="mycolumns" type="List"/>
    <aura:handler name="init" value="{! this }" action="{! c.init }"/>
    <h3>Contacts (With Sharing Applied)</h3>
    <lightning:datatable data="{! v.mydata }"
                         columns="{! v.mycolumns }"
                         keyField="Id"
                         hideCheckboxColumn="true"/>
</aura:component>

Result

Hope this helps!

Lightning Update List of Records Workaround (Quick Fix)

I’ve been doing some work with Salesforce Lightning, and so far it is certainly proving… challenging. I ran into an issue the other day to which I could find no obvious solution. I was attempting to pass a set of records from my JavaScript controller to the Apex controller for upsert. However, it was throwing an error about ‘upsert not allowed on generic sObject list’ or something of that nature, even though the list of sObjects was in fact defined as a specific type. After messing around with various attempts at casting the list and modifying the objects in the JavaScript controller before passing them to Apex so they’d have types, I couldn’t find an elegant solution. Instead I found a workaround: simply create a new list of the proper object type and add the passed-in records to it. I feel like there is probably a ‘proper’ way to make this work, but it works for me, so I figured I’d share.

//***************** Helper *************************//
	saveMappingFields : function(component,fieldObjects,callback)
	{

        var action = component.get("c.saveMappingFields");
        action.setParams({
            fieldObjects: fieldObjects
        });        
        action.setCallback(this, function(actionResult){
         
            if (typeof callback === "function") {
            	callback(actionResult);
            }
        });  
        
        $A.enqueueAction(action);             
	}
	
//**************** Apex Controller **********************//
//FAILS: throws 'upsert not allowed on generic sObject list' at runtime
@AuraEnabled
global static string saveMappingFields(list<Mapping_Field__c> fieldObjects)
{
	list<database.upsertResult> saveFieldResults = database.upsert(fieldObjects,false);	
	return JSON.serialize(saveFieldResults);
}

//WORKS: copying the records into a new, concretely typed list satisfies upsert
@AuraEnabled
global static string saveMappingFields(list<Mapping_Field__c> fieldObjects)
{
	list<Mapping_Field__c> fixedMappingFields = new list<Mapping_Field__c>(fieldObjects);
	
	list<database.upsertResult> saveFieldResults = database.upsert(fixedMappingFields,false);	
	return JSON.serialize(saveFieldResults);
}

Simplification

Hey all,

So I wanted to just throw this out there: I’ve moved from Minnesota to VERY rural Montana. I traded in my 3 bedroom rambler for a studio cabin on some ranch near the Canadian border. As such, my access to technology is somewhat reduced, and I don’t know if I’ll be posting as much interesting stuff on this blog for a while. Odds are I’ll have some cool Salesforce stuff from time to time, since I am maintaining my employment remotely, but I won’t be doing as much at-home hacking. If you are curious how things are going, why this happened, or if you just like my writing style, I’ve started a new blog detailing my journey. You can check it out here:

Montana Dan Blog

Anyway, I’ll still post what I can, but I figured I should at least inform the community why I might not be around quite as much. Till next time.

-Kenji

Dynamic Apex Invocation/Callbacks

So I’ve been working on that DeepClone class, and it occurred to me that whatever invokes that class might like to know when the process is done (so maybe it can do something with those created records). Seeing as DeepClone is by its very nature asynchronous, that presents a problem, since the caller cannot sit and wait for the process to complete. You know what other language has to deal with async issues a lot? JavaScript. In JavaScript we often solve this problem with a ‘callback’ function (I know callbacks are old and busted and promises are the new hotness, but bear with me here), wherein you call your asynchronous function and tell it what to call when it’s done. Most often that is done by passing in the actual function instead of just its name, but both are viable. Here is an example of what both might look like.

var someData = 'data to give to async function';

//first type of invocation passes in an actual function as the callback. 
asyncThing(someData,function(result){
	console.log('I passed in a function directly!' + result);
});

//second type of invocation passes in the name of a function to call instead
asyncThing(someData,'onCompleteHandler');

function onCompleteHandler(result)
{
	console.log('I passed in the name of a function to call and that happened' + result);
}

function asyncThing(data,callback)
{
	//async code here, maybe a callout or something.
	var result = 'probably a status code or the fetched data would go here';
	
	//if our callback is a function, then just straight up invoke it
	if(typeof callback == 'function')
	{
		callback(result);
	}
	//if our callback is a string, then dynamically invoke the function by name
	else if(typeof callback == 'string')
	{
		window[callback](result);
	}
}

So yeah, JavaScript is cool; it has callbacks. What does this have to do with Apex? Apex is strongly typed; you can’t just go around passing functions as arguments, and you sure as hell can’t do dynamic invocation… or can you? Behold: by abusing the Tooling API, I give you a basic implementation of a dynamic Apex callback!

public HttpResponse invokeCallback(string callback, string dataString)
{
	HttpResponse res = new HttpResponse();
	try
	{
		string functionCall = callback+'(\''+dataString+'\');';
		HttpRequest req = new HttpRequest();
		req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionID());
		req.setHeader('Content-Type', 'application/json');
		string instanceURL = System.URL.getSalesforceBaseUrl().getHost().remove('-api' ).toLowerCase();
		String toolingendpoint = 'https://'+instanceURL+'/services/data/v28.0/tooling/executeAnonymous/?anonymousBody='+encodingUtil.urlEncode(functionCall,'utf-8');
		req.setEndpoint(toolingendpoint);
		req.setMethod('GET');
		
		Http h = new Http();
		res = h.send(req);
	}
	catch(exception e)
	{
		system.debug('\n\n\n\n--------------------- Error attempting callback!');
		system.debug(e);
		system.debug(res);
	}
	return res;
} 

What’s going on here? The Tooling API allows us to execute anonymous code. Normally the Tooling API is for external tools/languages to access Salesforce metadata and perform operations. However, by accessing it via REST and passing in both the name of a class and method, and properly encoding any data you’d like to pass (strings only, no complex object types), you can provide a dynamic callback specified at runtime. We simply create a GET request against the Tooling API REST endpoint and invoke the execute anonymous method, passing in the desired callback function name. So now when DeepClone, for example, is instantiated, the caller can set a class level property with the class and method it would like called when DeepClone is done doing its thing. It can pass back all the Ids of the records created so any additional work can be performed. Of course the class provided has to be public, and the method called must be static. Additionally you have to add your own org id to the allowed remote sites under security->remote site settings. Anyway, I thought this was a pretty nice way of letting your @future methods and your queueable methods pass information back to a class so you aren’t totally left in the dark about what the results were. Enjoy!
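For illustration, a receiver could look something like this (a hypothetical example; the class and method names are whatever string you hand to invokeCallback):

//hypothetical callback target: the class must be public and the method static
public class CloneFinishedHandler
{
    public static void onCloneComplete(string createdIdsCsv)
    {
        //do whatever post-processing you need with the created record ids
        system.debug('Deep clone finished. Created: ' + createdIdsCsv);
    }
}

//...and somewhere in the async job, once the work is done:
//invokeCallback('CloneFinishedHandler.onCloneComplete', String.join(allCreatedObjects, ','));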

Deep Clone (Round 2)

So a day or two ago I posted my first draft of a deep clone, which would allow easy cloning of an entire data hierarchy. It was a semi proof of concept with some limitations (it could only handle somewhat smaller data sets, and didn’t let you configure all or nothing inserts or specify whether you wanted to copy standard objects as well as custom). I was doing some thinking and I remembered hearing about the Queueable interface, which allows for asynchronous processing and bigger governor limits. I started thinking about chaining queueable jobs together to allow for copying much larger data sets. Each invocation gets its own governor limits and could theoretically go on as long as it takes, since you can chain jobs infinitely. I had attempted to use Queueable to solve this before, but I made the mistake of trying to kick off multiple jobs per invocation (one for each related object type), which didn’t work due to the limits imposed on Queueable. Once I thought of a way to only need one invocation per call (basically just rolling all the records that need to get cloned into one object and iterating over it) I figured I might have a shot at making this work. I took what I had written before, added a few options, and I think I’ve done it: an asynchronous deep clone that operates in distinct batches, with all or nothing handling and cleanup in case of error. This is some hot off the presses code, so there are likely some lingering bugs, but I was too excited not to share this. Feast your eyes!

public class deepClone implements Queueable {

    //global describe to hold object describe data for query building and relationship iteration
    public map<String, Schema.SObjectType> globalDescribeMap = Schema.getGlobalDescribe();
    
    //holds the data to be cloned. Keyed by object type. Contains cloneData which contains the object to clone, and some data needed for queries
    public map<string,cloneData> thisInvocationCloneMap = new map<string,cloneData>();
    
    //should the clone process be all or nothing?
    public boolean allOrNothing = false;
    
    //each iteration adds the records it creates to this property so in the event of an error we can roll it all back
    public list<id> allCreatedObjects = new list<id>();
    
    //only clone custom objects. Helps to avoid trying to clone system objects like chatter posts and such.
    public boolean onlyCloneCustomObjects = true;
    
    public static id clone(id sObjectId, boolean onlyCustomObjects, boolean allOrNothing)
    {
        
        deepClone startClone= new deepClone();
        startClone.onlyCloneCustomObjects  = onlyCustomObjects;
        startClone.allOrNothing = allOrNothing;
        
        sObject thisObject = sObjectId.getSobjectType().newSobject(sObjectId);
        cloneData thisClone = new cloneData(new list<sObject>{thisObject}, new map<id,id>());
        map<string,cloneData> cloneStartMap = new map<string,cloneData>();
        
        cloneStartMap.put(sObjectId.getSobjectType().getDescribe().getName(),thisClone);
        
        startClone.thisInvocationCloneMap = cloneStartMap;
        return System.enqueueJob(startClone);
    }
    
    public void execute(QueueableContext context) {
        deepCloneBatched();
    }
        
    /**
    * @description Clones an object and the entire related data hierarchy. Currently only clones custom objects, but enabling standard objects is easy. It is disabled because it increases the risk of hitting governor limits.
    * @return list<id> the ids of all of the objects that were created during the clone.
    **/
    public list<id> deepCloneBatched()
    {
        map<string,cloneData> nextInvocationCloneMap = new map<string,cloneData>();
        
        //iterate over every object type in the public map
        for(string relatedObjectType : thisInvocationCloneMap.keySet())
        { 
            list<sobject> objectsToClone = thisInvocationCloneMap.get(relatedObjectType).objectsToClone;
            map<id,id> previousSourceToCloneMap = thisInvocationCloneMap.get(relatedObjectType).previousSourceToCloneMap;
            
            system.debug('\n\n\n--------------------  Cloning ' + objectsToClone.size() + ' records');
            list<id> objectIds = new list<id>();
            list<sobject> clones = new list<sobject>();
            list<sObject> newClones = new list<sObject>();
            map<id,id> sourceToCloneMap = new map<id,id>();
            list<database.saveresult> cloneInsertResult;
                       
            //if this function has been called recursively, then the previous batch of cloned records
            //have not been inserted yet, so now they must be before we can continue. Also, in that case
            //because these are already clones, we do not need to clone them again, so we can skip that part
            if(objectsToClone[0].Id == null)
            {
                //if they don't have an id that means these records are already clones. So just insert them with no need to clone beforehand.
                cloneInsertResult = database.insert(objectsToClone,allOrNothing);

                clones.addAll(objectsToClone);
                
                for(sObject thisClone : clones)
                {
                    sourceToCloneMap.put(thisClone.getCloneSourceId(),thisClone.Id);
                }
                            
                objectIds.addAll(new list<id>(previousSourceToCloneMap.keySet()));
                //get the ids of all these objects.                    
            }
            else
            {
                //get the ids of all these objects.
                for(sObject thisObj :objectsToClone)
                {
                    objectIds.add(thisObj.Id);
                }
    
                //create a select all query to get all the data for these objects since if we only got passed a basic sObject without data 
                //then the clone will be empty
                string objectDataQuery = buildSelectAllStatment(relatedObjectType);
                
                //add a where condition
                objectDataQuery += ' where id in :objectIds';
                
                //get the details of this object
                list<sObject> objectToCloneWithData = database.query(objectDataQuery);
    
                for(sObject thisObj : objectToCloneWithData)
                {              
                    sObject clonedObject = thisObj.clone(false,true,false,false);
                    clones.add(clonedObject);               
                }    
                
                //insert the clones
                cloneInsertResult = database.insert(clones,allOrNothing);
                
                for(sObject thisClone : clones)
                {
                    sourceToCloneMap.put(thisClone.getCloneSourceId(),thisClone.Id);
                }
            }        
            
            for(database.saveResult saveResult :  cloneInsertResult)
            {
                if(saveResult.success)
                {
                    allCreatedObjects.add(saveResult.getId());
                }
                else if(allOrNothing)
                {
                    cleanUpError();
                    return allCreatedObjects;
                }
            }
              
            //Describes this object type so we can deduce its child relationships
            Schema.DescribeSObjectResult objectDescribe = globalDescribeMap.get(relatedObjectType).getDescribe();
                        
            //get this objects child relationship types
            List<Schema.ChildRelationship> childRelationships = objectDescribe.getChildRelationships();
    
            system.debug('\n\n\n-------------------- ' + objectDescribe.getName() + ' has ' + childRelationships.size() + ' child relationships');
            
            //then have to iterate over every child relationship type, and every record of that type and clone them as well. 
            for(Schema.ChildRelationship thisRelationship : childRelationships)
            { 
                          
                Schema.DescribeSObjectResult childObjectDescribe = thisRelationship.getChildSObject().getDescribe();
                string relationshipField = thisRelationship.getField().getDescribe().getName();
                
                try
                {
                    system.debug('\n\n\n-------------------- Looking at ' + childObjectDescribe.getName() + ' which is a child object of ' + objectDescribe.getName());
                    
                    if(!childObjectDescribe.isCreateable() || !childObjectDescribe.isQueryable())
                    {
                        system.debug('-------------------- Object is not one of the following: queryable, creatable. Skipping attempting to clone this object');
                        continue;
                    }
                    if(onlyCloneCustomObjects && !childObjectDescribe.isCustom())
                    {
                        system.debug('-------------------- Object is not custom and custom object only clone is on. Skipping this object.');
                        continue;                   
                    }
                    if(Limits.getQueries() >= Limits.getLimitQueries())
                    {
                        system.debug('\n\n\n-------------------- Governor limits hit. Must abort.');
                        
                        //if we hit an error, and this is an all or nothing job, we have to delete what we created and abort
                        if(allOrNothing)
                        {
                            cleanUpError();
                        }
                        return allCreatedObjects;
                    }
                    //create a select all query from the child object type
                    string childDataQuery = buildSelectAllStatment(childObjectDescribe.getName());
                    
                    //add a where condition that only finds records related to this batch. The field the relationship is defined on is in relationshipField
                    childDataQuery+= ' where '+relationshipField+ ' in :objectIds';
                    
                    //get the details of this object
                    list<sObject> childObjectsWithData = database.query(childDataQuery);
                    
                    system.debug('\n\n\n-------------------- Object queried. Found ' + childObjectsWithData.size() + ' records to clone');
                    
                    if(!childObjectsWithData.isEmpty())
                    {               
                        map<id,id> childRecordSourceToClone = new map<id,id>();
                        
                        for(sObject thisChildObject : childObjectsWithData)
                        {
                            childRecordSourceToClone.put(thisChildObject.Id,null);
                            
                            //clone the object
                            sObject newClone = thisChildObject.clone();
                            
                            //since the record we cloned still has the original parent id, we now need to update the clone with the id of its cloned parent.
                            //to do that we reference the map we created above and use it to get the new cloned parent.                        
                            system.debug('\n\n\n----------- Attempting to change parent of clone....');
                            id newParentId = sourceToCloneMap.get((id) thisChildObject.get(relationshipField));
                            
                            system.debug('Old Parent: ' + thisChildObject.get(relationshipField) + ' new parent ' + newParentId);
                            
                            //write the new parent value into the record
                            newClone.put(thisRelationship.getField().getDescribe().getName(),newParentId );
                            
                            //add this new clone to the list. It will be inserted once the deepClone function is called again. I know it's a little odd to not just insert them now
                            //but it saves on redundant logic in the long run.
                            newClones.add(newClone);             
                        }  
                        cloneData thisCloneData = new cloneData(newClones,childRecordSourceToClone);
                        nextInvocationCloneMap.put(childObjectDescribe.getName(),thisCloneData);                             
                    }                                       
                       
                }
                catch(exception e)
                {
                    system.debug('\n\n\n---------------------- Error attempting to clone child records of type: ' + childObjectDescribe.getName());
                    system.debug(e); 
                }            
            }          
        }
        
        system.debug('\n\n\n-------------------- Done iterating cloneable objects.');
        
        system.debug('\n\n\n-------------------- Clone Map below');
        system.debug(nextInvocationCloneMap);
        
        system.debug('\n\n\n-------------------- All created object ids thus far across this invocation');
        system.debug(allCreatedObjects);
        
        //if our map is not empty that means we have more records to clone. So queue up the next job.
        if(!nextInvocationCloneMap.isEmpty())
        {
            system.debug('\n\n\n-------------------- Clone map is not empty. Sending objects to be cloned to another job');
            
            deepClone nextIteration = new deepClone();
            nextIteration.thisInvocationCloneMap = nextInvocationCloneMap;
            nextIteration.allCreatedObjects = allCreatedObjects;
            nextIteration.onlyCloneCustomObjects  = onlyCloneCustomObjects;
            nextIteration.allOrNothing = allOrNothing;
            id  jobId = System.enqueueJob(nextIteration);       
            
            system.debug('\n\n\n-------------------- Next queable job scheduled. Id is: ' + jobId);  
        }
        
        system.debug('\n\n\n-------------------- Cloning Done!');
        
        return allCreatedObjects;
    }
     
    /**
    * @description create a string which is a select statement for the given object type that will select all fields. Equivalent to SELECT * FROM objectName in SQL.
    * @param objectName the API name of the object which to build a query string for
    * @return string a string containing the SELECT keyword, all the fields on the specified object and the FROM clause to specify that object type. You may add your own where statements after.
    **/
    public string buildSelectAllStatment(string objectName){ return buildSelectAllStatment(objectName, new list<string>());}
    public string buildSelectAllStatment(string objectName, list<string> extraFields)
    {       
        // Initialize setup variables
        String query = 'SELECT ';
        String objectFields = String.Join(new list<string>(globalDescribeMap.get(objectName).getDescribe().fields.getMap().keySet()),',');
        if(extraFields != null)
        {
            objectFields += ','+String.Join(extraFields,',');
        }
        
        objectFields = objectFields.removeEnd(',');
        
        query += objectFields;
    
        // Add FROM statement
        query += ' FROM ' + objectName;
                 
        return query;   
    }    
    
    public void cleanUpError()
    {
        database.delete(allCreatedObjects);
    }
    
    public class cloneData
    {
        public list<sObject> objectsToClone = new list<sObject>();        
        public map<id,id> previousSourceToCloneMap = new map<id,id>();  
        
        public cloneData(list<sObject> objects, map<id,id> previousDataMap)
        {
            this.objectsToClone = objects;
            this.previousSourceToCloneMap = previousDataMap;
        }   
    }    
}    


It’ll clone your record, your record’s children, your record’s children’s children, and yes, even your record’s children’s children’s children (you get the point)! Simply invoke the deepClone.clone() method with the id of the record to start the clone process at, whether you want to only copy custom objects, and whether you want all or nothing processing. Deep Clone takes care of the rest, automatically handling figuring out relationships, cloning, re-parenting, and generally being awesome. As always I’m happy to get feedback or suggestions! Enjoy!
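From execute anonymous, kicking one off looks like this (the record id is a placeholder; use the root record of the hierarchy you want copied):

//clone everything under this record: custom objects only, all or nothing on.
//returns the id of the queueable job that was enqueued.
id jobId = deepClone.clone('a0B4x0000012345AAA', true, true);
system.debug('Deep clone job enqueued: ' + jobId);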

-Kenji

Salesforce True Deep Clone, the (Im)Possible Dream

So getting back to work work (sorry alexa/amazon/echo, I’ve gotta pay for more smart devices somehow), I’ve been working on a project where there is a fairly in-depth hierarchy of records. We’ll call them surveys; these surveys have records related to them, those records have other records related to them, and so on. It’s a semi-complicated “tree” that goes about 5 levels deep with different kinds of objects in each “branch”. Of course, with such a complicated structure and a common need to copy and modify it for a new project, the request for a better clone came floating across my desk. Now Salesforce does have a nice clone tool built in, but it doesn’t have the ability to copy an entire hierarchy, and some preliminary searches didn’t turn up anything great either. The reason why? It’s pretty damn tricky, and governor limits can initially make it seem impossible. What I have here is an initial attempt at a ‘true deep clone’ function. You give it a record (or possibly a list of records, but I wouldn’t push your luck) to clone. It will do that, then clone the children and re-parent them to your new clone. It will then find all of those records’ children and clone and re-parent them as well, all the way down. Without further ado, here is the code.

    //clones a batch of records. Must all be of the same type.
    //very experimental. Small jobs only!
    //note: static so the static deepCloneBatched methods below can reference it
    public static Map<String, Schema.SObjectType> globalDescribeMap = Schema.getGlobalDescribe();    
    public static list<sObject> deepCloneBatched(list<sObject> objectsToClone) { return deepCloneBatched(objectsToClone,new map<id,id>());}
    public static list<sObject> deepCloneBatched(list<sObject> objectsToClone, map<id,id> previousSourceToCloneMap)
    {
        system.debug('\n\n\n-------------------- Cloning ' + objectsToClone.size() + ' records');
        list<id> objectIds = new list<id>();
        list<sobject> clones = new list<sobject>();
        list<sObject> newClones = new list<sObject>();
        map<id,id> sourceToCloneMap = new map<id,id>();
        
        
        if(objectsToClone.isEmpty())
        {
            system.debug('\n\n\n-------------------- No records in set to clone. Aborting');
            return clones;
        }
                
        //if this function has been called recursively, then the previous batch of cloned records
        //has not been inserted yet, so they must be inserted now before we can continue. Also, in that case,
        //because these records are already clones, we do not need to clone them again, so we can skip that part
        if(objectsToClone[0].Id == null)
        {
            //if they don't have an id that means these records are already clones. So just insert them with no need to clone beforehand.
            insert objectsToClone;
            clones.addAll(objectsToClone);
            
            for(sObject thisClone : clones)
            {
                sourceToCloneMap.put(thisClone.getCloneSourceId(),thisClone.Id);
            }
                        
            //get the ids of the original records that spawned these clones
            objectIds.addAll(new list<id>(previousSourceToCloneMap.keySet()));
        }
        else
        {
            //get the ids of all these objects.
            for(sObject thisObj :objectsToClone)
            {
                objectIds.add(thisObj.Id);
            }
            
            for(sObject thisObj : objectsToClone)
            {
                sObject clonedObject = thisObj.clone(false,true,false,false);
                clones.add(clonedObject);               
            }    
            
            //insert the clones
            insert clones;
            
            for(sObject thisClone : clones)
            {
                sourceToCloneMap.put(thisClone.getCloneSourceId(),thisClone.Id);
            }
        }        

        //figure out what kind of object we are dealing with
        string relatedObjectType = objectsToClone[0].Id.getSobjectType().getDescribe().getName();
        
        //describe this object type so we can deduce its child relationships
        Schema.DescribeSObjectResult objectDescribe = globalDescribeMap.get(relatedObjectType).getDescribe();
                    
        //get this objects child relationship types
        List<Schema.ChildRelationship> childRelationships = objectDescribe.getChildRelationships();

        system.debug('\n\n\n-------------------- ' + objectDescribe.getName() + ' has ' + childRelationships.size() + ' child relationships');
        
        //then have to iterate over every child relationship type, and every record of that type and clone them as well. 
        for(Schema.ChildRelationship thisRelationship : childRelationships)
        { 
                      
            Schema.DescribeSObjectResult childObjectDescribe = thisRelationship.getChildSObject().getDescribe();
            string relationshipField = thisRelationship.getField().getDescribe().getName();
            
            try
            {
                system.debug('\n\n\n-------------------- Looking at ' + childObjectDescribe.getName() + ' which is a child object of ' + objectDescribe.getName());
                
                if(!childObjectDescribe.isCreateable() || !childObjectDescribe.isQueryable() || !childObjectDescribe.isCustom())
                {
                    system.debug('-------------------- Object is not queryable, createable, or custom. Skipping cloning of this object type');
                    continue;
                }
                if(Limits.getQueries() >= Limits.getLimitQueries())
                {
                    system.debug('\n\n\n-------------------- Governor limits hit. Must abort.');
                    return clones;
                }
                //create a select all query from the child object type
                string childDataQuery = buildSelectAllStatment(childObjectDescribe.getName());
                
                //add a where condition that will only find records that are related to this record. The field which the relationship is defined is stored in the maps value
                childDataQuery+= ' where '+relationshipField+ ' in :objectIds';
                
                //get the details of this object
                list<sObject> childObjectsWithData = database.query(childDataQuery);
                
                if(!childObjectsWithData.isEmpty())
                {               
                    map<id,id> childRecordSourceToClone = new map<id,id>();
                    
                    for(sObject thisChildObject : childObjectsWithData)
                    {
                        childRecordSourceToClone.put(thisChildObject.Id,null);
                        
                        //clone the object
                        sObject newClone = thisChildObject.clone();
                        
                        //since the record we cloned still has the original parent id, we now need to update the clone with the id of its cloned parent.
                        //to do that we reference the map we created above and use it to get the new cloned parent.                        
                        system.debug('\n\n\n----------- Attempting to change parent of clone....');
                        id newParentId = sourceToCloneMap.get((id) thisChildObject.get(relationshipField));
                        
                        system.debug('Old Parent: ' + thisChildObject.get(relationshipField) + ' new parent ' + newParentId);
                        
                        //write the new parent value into the record
                        newClone.put(thisRelationship.getField().getDescribe().getName(),newParentId );
                        
                        //add this new clone to the list. It will be inserted once the deepClone function is called again. I know it's a little odd to not just insert them now
                        //but it saves on redundant logic in the long run.
                        newClones.add(newClone);             
                    }  
                    //now we need to call this function again, passing in the newly cloned records, so they can be inserted, as well as passing in the ids of the original records
                    //that spawned them so the next time the query can find the records that currently exist that are related to the kind of records we just cloned.                
                    clones.addAll(deepCloneBatched(newClones,childRecordSourceToClone));                                  
                }                    
            }
            catch(exception e)
            {
                system.debug('\n\n\n---------------------- Error attempting to clone child records of type: ' + childObjectDescribe.getName());
                system.debug(e); 
            }            
        }
        
        return clones;
    }
     
    /**
    * @description create a string which is a select statement for the given object type that will select all fields. Equivalent to SELECT * FROM objectName in SQL
    * @param objectName the API name of the object which to build a query string for
    * @return string a string containing the SELECT keyword, all the fields on the specified object and the FROM clause to specify that object type. You may add your own where statements after.
    **/
    public static string buildSelectAllStatment(string objectName){ return buildSelectAllStatment(objectName, new list<string>());}
    public static string buildSelectAllStatment(string objectName, list<string> extraFields)
    {       
        // Initialize setup variables
        String query = 'SELECT ';
        String objectFields = String.Join(new list<string>(Schema.getGlobalDescribe().get(objectName).getDescribe().fields.getMap().keySet()),',');
        if(extraFields != null)
        {
            objectFields += ','+String.Join(extraFields,',');
        }
        
        objectFields = objectFields.removeEnd(',');
        
        query += objectFields;
    
        // Add FROM statement
        query += ' FROM ' + objectName;
                 
        return query;   
    }

You should be able to just copy and paste that into a class, invoke the deepCloneBatched method with the record you want to clone, and it should take care of the rest, cloning every related record that it can. It skips non-custom objects for now (because I didn’t need them) but you can adjust that by removing the clause in the if condition at line 81 that says

|| !childObjectDescribe.isCustom()

And then it will also clone all the standard objects it can. Again this is kind of a ‘rough draft’ but it does seem to be working. Even cloning 111 records of several different types, I was still well under all governor limits. I’d explain more about how it works, but the comments are there, it’s 3:00 in the morning, and I’m content to summarize the workings of it by shouting “It’s magic. Don’t question it”, and walking off stage. Let me know if you have any clever ways to make it more efficient, of which I have no doubt there are some. Anyway, enjoy. I hope it helps someone out there.
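If you want to take it for a spin from anonymous Apex, something along these lines should do it (cloneUtils is a hypothetical name for whatever class you pasted the methods into, and Survey__c and the record id are stand-ins for your own top-level object and data):

    //query the root record with all of its fields so the clone gets a full copy
    id rootId = 'a0B1a000003x9XX'; //hypothetical record id
    string q = cloneUtils.buildSelectAllStatment('Survey__c') + ' WHERE Id = :rootId';
    list<sObject> roots = database.query(q);
    
    //clone the root record and every related custom object record below it
    list<sObject> allClones = cloneUtils.deepCloneBatched(roots);
    system.debug('Created ' + allClones.size() + ' records');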