Oh my god. It's full of code!


Bulk Delete/Truncate Salesforce Custom Metadata Object

Dealing with Salesforce custom metadata records is a pain; there just isn’t any other way to say it. Everything is harder when you’re working with them. Thankfully I’ve got at least one little thing here to make life easier: an easy-to-use bulk delete helper!

The other night I was importing some metadata records and failed to notice that the importer had added quotes around one of my fields (the field value contained a comma, so the quotes were being used to mark it as a single field; I ended up changing the comma to a colon and modifying the code that used that field because I didn’t know how to escape the comma). Anyway, after the botched import I needed a way to delete all 400-some records and had no easy way to do it. All the other guides online pretty much say you have to deploy a destructiveChanges.xml file with each item listed individually, and obviously I wasn’t going to sit and manually create that file. So I put together a little Apex script that constructs the destructiveChanges.xml file and the needed empty package.xml file and emails them to me. Then it’s just a simple matter of dropping them into my VS Code project folder and running the deploy command.

sfdx force:mdapi:deploy -d [your folder name here, no brackets] -w -1

list<string> emailRecips = new list<string>{'your_email_here@somewhere.com'};
list<string> objectTypes = new list<string>();
objectTypes.add(your_custom_metadata_sObject_type_here__mdt.getSObjectType().getDescribe().getName());

string destructiveXML = '<?xml version="1.0" encoding="UTF-8"?>\n<Package xmlns="http://soap.sforce.com/2006/04/metadata">\n';
string packageXml = '<?xml version="1.0" encoding="UTF-8"?> '+
    '<Package xmlns="http://soap.sforce.com/2006/04/metadata">'+
        '<version>41.0</version>'+
    '</Package>';
    
destructiveXML += '<types>\n';
for (String objectType : objectTypes) {
    list<sobject> customMetadataRecords = Database.query('SELECT DeveloperName FROM ' + objectType);
    
    for (SObject customMetadataRecord : customMetadataRecords) {
        String recordName = (String)customMetadataRecord.get('DeveloperName');
        destructiveXML += '<members>' + objectType + '.' + recordName + '</members>\n';
    }
}
destructiveXML += '<name>CustomMetadata</name>\n';
destructiveXML += '</types>\n';
destructiveXML += '</Package>';

Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage();
email.setToAddresses(emailRecips);    
email.setSubject('Destructive changes for ' + objectTypes);

string emailBody = 'Here are the destructive changes files you need to delete all this metadata. Put these two files in a folder, then run this command. \n';
emailBody += 'sfdx force:mdapi:deploy -d [your folder name here, no brackets] -w -1';

email.setPlainTextBody(emailBody);

list<Messaging.EmailFileAttachment> attachments = new list<Messaging.EmailFileAttachment>();
Messaging.EmailFileAttachment file1 = new Messaging.EmailFileAttachment();
file1.setFileName('destructiveChanges.xml');
file1.setBody(blob.valueOf(destructiveXML));
attachments.add(file1);

Messaging.EmailFileAttachment file2 = new Messaging.EmailFileAttachment();
file2.setFileName('package.xml');
file2.setBody(blob.valueOf(packageXml));
attachments.add(file2);

email.setFileAttachments(attachments);

Messaging.sendEmail(new Messaging.SingleEmailMessage[] { email });

Hope this helps. Till next time.
-Kenji

Copy Fields From One Salesforce Object To Another

Problem: While working with our client, they decided they wanted to move about 30 random fields from their Account object to another object, because the Account object already had too many fields and they wanted to reorganize. They only sent over a list of the field labels they wanted moved. I needed a way to move the fields easily, and I didn’t want to do it manually.

Solution: Using the labels, query the system for matching field labels on the source object to get the API names. Use VS Code to pull the field metadata for the source object’s fields (which now comes as a list of files, one per field, instead of the massive XML blob it used to be). Then run a small script that finds the files whose names match our list of field names and copies them into the new object’s folder. Use the VS Code deploy to push them into the object. Then use the script from my last post to set security on the fields as needed.

So there are a couple of steps here, none of them terribly difficult. You’ll need just a few things.
1) Access to execute anonymous code.
2) Your Salesforce project set up in VS Code with the metadata for your source and target objects pulled down.
3) Node.js installed.

First, since our provided data only has field labels, we need to get the API names. A quick bit of Apex makes short work of that. If you already have a list of field names you can skip this, though you’ll need to get them into a JSON array and save that to a file.

string sObjectType = 'Account'; //your source object name here
list<string> fields = new list<string>{'Some field label 1','Some field label 2','field3__c'}; //your list of field labels you need to get names for here
list<string> emailRecips = new list<string>{'your_email_here@somewhere.com'};

Map<String, Schema.SObjectType> globalDescribe = Schema.getGlobalDescribe();
list<string> fieldNames = new list<string>();
list<string> missingFields = new list<string>();
list<string> cleanedFields = new list<string>();
system.debug('\n\n\n----- There are ' + fields.size() + ' Defined fields');


Map<String, Schema.SObjectField> objectFields = globalDescribe.get(sObjectType).getDescribe().fields.getMap();

for(string thisField : fields){
    cleanedFields.add((string) thisField.toLowerCase().replaceAll('[^a-zA-Z0-9\\s+]', ''));
}
system.debug('\n\n\n----- Cleaned field labels');
system.debug(cleanedFields);

integer matches = 0;
for(Schema.SObjectField thisField : objectFields.values()){
    string cleanedLabel = thisField.getDescribe().getLabel();

    system.debug('\n------ Looking for field with label: ' + cleanedLabel + ' in cleaned fields list');
    cleanedLabel = cleanedLabel.toLowerCase().replaceAll('[^a-zA-Z0-9\\s+]', '');
    
    if(cleanedFields.contains(cleanedLabel)) {
        fieldNames.add(thisField.getDescribe().getName());
        matches++;
        integer indexPos = cleanedFields.indexOf(cleanedLabel);
        cleanedFields.remove(indexPos);
    }
}
missingFields = cleanedFields;
string jsonFieldNames = JSON.serialize(fieldNames);
string jsonMissingFields = JSON.serialize(missingFields);


Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage();
email.setToAddresses(emailRecips);    
email.setSubject('Field Labels to Field Names');
string emailBody = 'Found '+matches+'/'+fields.size()+ ' fields with matching labels to names\n\nFOUND FIELDS\n\n';
emailBody += jsonFieldNames;
emailBody += '\n\n\n MISSING FIELDS\n';
emailBody += jsonMissingFields;
email.setPlainTextBody(emailBody);

list<Messaging.EmailFileAttachment> attachments = new list<Messaging.EmailFileAttachment>();
Messaging.EmailFileAttachment efa = new Messaging.EmailFileAttachment();
efa.setFileName('FieldNames.json');
efa.setBody(blob.valueOf(jsonFieldNames));
attachments.add(efa);

Messaging.EmailFileAttachment efa1 = new Messaging.EmailFileAttachment();
efa1.setFileName('MissingFields.json');
efa1.setBody(blob.valueOf(jsonMissingFields));
attachments.add(efa1);

email.setFileAttachments(attachments);

Messaging.sendEmail(new Messaging.SingleEmailMessage[] { email });

system.debug('\n\n\n---- Found Fields');
system.debug(jsonFieldNames);
system.debug('\n\n\n---- Missing Fields');
system.debug(jsonMissingFields);


Alright, that’s easy enough. From either the debug log or the email you got sent, you should now have a JSON file of all the field names, which we need for the next step.

What we are going to do now is run a small Node script that finds the file names matching the JSON list in the source object’s fields folder and copies them into the destination object’s fields folder. Create a JavaScript file and put this in it:

/* File extractor
Description: Given a list of Salesforce field API names in a JSON file, copies matching metadata config files from one object
to another, to easily move fields from one object to another.

For example, say there are 25 fields you want to move from your Account object to an object called Plan__c. Instead of having to manually
re-create them, or having to copy and paste the metadata files by hand (which is prone to error), you can instead run this utility
and provide Account as your input object and Plan__c as your output object, with a JSON file that is just an array of field names to copy.
This utility will copy any matches and move them into the target folder so you can easily deploy them.

*/

//modify these variables to match your objects and filename
var inputObjectName = 'Account';
var outputObjectName = 'Plan__c';
var fieldFileName = 'FieldNames.json';

const fs = require('fs');
var fieldJson;

function init(){
    fieldJson = JSON.parse(fs.readFileSync( fieldFileName ));
    copyObjectFields(inputObjectName, outputObjectName);
}

function copyObjectFields(inputObject, outputObject){
    var inputPath = inputObject + '\\fields\\';
    var outputPath = outputObject + '\\fields\\';
    var files = fs.readdirSync(inputPath);

    for(var i = 0; i < files.length; i++)
    {
        //field metadata files are named [fieldName].field-meta.xml
        var filename = files[i].replace('.field-meta.xml', '');
        if(fieldJson.includes(filename)){
            console.log('Found matching file: ' + filename);

            fs.copyFile(inputPath + files[i], outputPath + files[i], (err) => {
                if (err) {
                    console.log('Error Found:', err);
                }
                else {
                    console.log('Copied File!');
                }
            });
        }
    }
}

init();
 

Now, from your Salesforce project folder, copy the source object and destination object folders from your force-app\main\default\objects\ folder and paste them into the same folder as this script (you could run this directly inside your project folder, but I wouldn’t recommend it). It should look something like this.

[screenshot: folder layout]

Open a command line, navigate to the script, and run it. You should see something like

[screenshot: script output]

Now, with all your fields copied into that temp folder, copy them back into your actual project folder. In VS Code you should then be able to navigate to your target object, right click, and deploy to org.
[screenshot: deploy to org]
You may have to make a few small adjustments if there are conflicting relationship names or whatever but you should be able to work through those pretty easily. To set field level security, see my last post for a utility to easily configure that.

Hope this helps. Till next time.
-Kenji

Bulk Set Field Level Security On Objects for Profiles

Problem: A large number of fields were created across several objects and we needed to ensure that the proper read/edit access was set on all of them. Two profiles needed read but not edit access on every field of every object. This could obviously be done manually, but it's tedious and I don't want to. I'm a programmer, dammit, not some data entry dude. Cue rapid coding session to see if I could write something to do this faster than I could do it manually, so I didn't totally defeat the purpose of doing this. I also wanted to be able to review the potential changes before committing them.

Solution: Write an Apex method/execute anonymous script that modifies the FLS to quickly set the required permissions. The trick here is knowing that every profile has an underlying permission set that controls its access. So it becomes a reasonably simple task of getting the permission sets related to the profiles, getting all the fields on all the objects, checking for existing permissions, modifying them where needed, and writing them back. I also needed to generate reports showing exactly what was changed, so our admin folks could look it over and it could be provided for documentation/deployment reasons. So yeah, here that is.

*Note* I threw this together in like two hours, so it's not the most elegant thing in the world. It definitely has some room for improvement, but it does the job.
/* To easily call this as an execute anonymous script, remove the method signature and brackets and set these variables as desired.
list<string> sObjects = new list<string>{'account','object2__c'};
list<string> profileNames = new list<string>{'System Administrator','Standard User'};
boolean canView; //set to true or false
boolean canEdit; //set to true or false
boolean doUpdate = false;
list<string> emailRecips = new list<string>{'your_email_here@somewhere.com'};
*/

/**
*@Description Sets the permissions on all fields of all given objects, for all given profiles, to the given true or false values for view and edit. Additionally can be set to
* roll back changes so it only generates the report of what would be modified. Emails logs of proposed and completed changes to the addresses specified. Only modifies permissions that
  do not match the new settings, so un-needed changes are not performed, and it could theoretically be called repeatedly to chip away at changes if the overall amount of DML ends up being
  too much for one operation (chunking/limit logic does not currently exist, so doing too many changes at once could cause errors).
*@Param sObjects list of sObjects to set permissions for all fields on
*@Param profileNames a list of names of profiles for which to modify the permissions for
*@Param canView set the view permission to true or false for all fields on all provided objects for all provided profiles
*@Param canEdit set the edit permission to true or false for all fields on all provided objects for all provided profiles
*@Param doUpdate should the changes to field level security actually be performed or no? If not the reports for proposed changes and what the results would be are still generated
        and sent because a database.rollback is used to undo the changes.
*@Param emailRecips a list of email addresses to send the results to. If in a sandbox ensure email deliverability is turned on to receive the reports.
**/
public static string setPermissionsOnObjects(list<string> sObjects, list<string> profileNames, boolean canView, boolean canEdit, boolean doUpdate, list<string> emailRecips){
        
    system.debug('\n\n\n----- Setting Permissions for profiles');
    list<FieldPermissions> updatePermissions = new list<FieldPermissions>();
    string csvUpdateString = 'Object Name, Field Name, Profile, Could Read?, Could Edit?, Can Read?, Can Edit?, What Changed\n';
    map<Id,Id> profileIdToPermSetIdMap = new map<Id,Id>();
    map<Id,Id> permSetToProfileIdMap = new map<Id,Id>();
    map<Id,String> profileIdToNameMap = new map<Id,String>();

    //Every profile has an underlying permission set. We have to query for that permission set id to make a new field permission as 
    //those are related to permission sets not profiles.
    for(PermissionSet thisPermSet : [SELECT Id, ProfileId, IsOwnedByProfile, Label, Profile.Name FROM PermissionSet WHERE Profile.Name IN :profileNames]){
        profileIdToPermSetIdMap.put(thisPermSet.ProfileId,thisPermSet.Id);
        permSetToProfileIdMap.put(thisPermSet.Id,thisPermSet.ProfileId);
    }

    Map<String, Schema.SObjectType> globalDescribe = Schema.getGlobalDescribe();
    Map<String,Profile> profilesMap = new Map<String,Profile>();

    //map of profile id to object type to field name to field permission
    map<id,map<string,map<string,FieldPermissions>>> objectToFieldPermissionsMap = new map<id,map<string,map<string,FieldPermissions>>>();

    for(Profile  thisProfile : [select name, id from Profile where name in :profileNames]){
        profilesMap.put(thisProfile.Name,thisProfile);
        profileIdToNameMap.put(thisProfile.Id,thisProfile.Name);
    }

    List<FieldPermissions> fpList = [SELECT SobjectType, 
                                            Field, 
                                            PermissionsRead, 
                                            PermissionsEdit, 
                                            Parent.ProfileId 
                                    FROM FieldPermissions 
                                    WHERE SobjectType IN :sObjects  and
                                    Parent.Profile.Name IN :profileNames
                                    ORDER By SobjectType];

    for(FieldPermissions thisPerm : fpList){
        //gets map of object types to fields to permission sets for this permission sets profile
        map<string,map<string,FieldPermissions>> profilePerms = objectToFieldPermissionsMap.containsKey(thisPerm.parent.profileId) ?
                                                                objectToFieldPermissionsMap.get(thisPerm.parent.profileId) :
                                                                new map<string,map<string,FieldPermissions>>();
        //gets map of field names for this object to permissions
        map<string,FieldPermissions> objectPerms = profilePerms.containsKey(thisPerm.sObjectType) ?
                                                   profilePerms.get(thisPerm.sObjectType) :
                                                   new map<string,FieldPermissions>();

        //puts this field and its permission into the object permission map
        objectPerms.put(thisPerm.Field,thisPerm);

        //puts this object permission map into the object permissions map
        profilePerms.put(thisPerm.sObjectType,objectPerms);

        //write profile permissions back to profile permissions map
        objectToFieldPermissionsMap.put(thisPerm.parent.profileId,profilePerms);
    }
    system.debug('\n\n\n----- Built Object Permission Map');
    system.debug(objectToFieldPermissionsMap);

    for(string thisObject : sObjects){

        system.debug('\n\n\n------ Setting permissions for ' + thisObject);
        Map<String, Schema.SObjectField> objectFields = globalDescribe.get(thisObject).getDescribe().fields.getMap();

        for(string thisProfile : profileNames){

            Id profileId = profilesMap.get(thisProfile).Id;

            //gets map of object types to fields to permission sets for this permission sets profile
            map<string,map<string,FieldPermissions>> profilePerms = objectToFieldPermissionsMap.containsKey(profileId) ?
            objectToFieldPermissionsMap.get(profileId) :
            new map<string,map<string,FieldPermissions>>();            

            //gets map of field names for this object to permissions
            map<string,FieldPermissions> objectPerms = profilePerms.containsKey(thisObject) ?
            profilePerms.get(thisObject) :
            new map<string,FieldPermissions>();

            system.debug('\n\n\n---- Setting permissions for profile: ' + thisProfile);
            
            Id permissionSetId = profileIdToPermSetIdMap.get(profileId);
            for(Schema.SObjectField thisField : objectFields.values()){
                string fieldName = thisField.getDescribe().getName();
                boolean canPermission = thisField.getDescribe().isPermissionable();

                if(!canPermission) {
                    system.debug('\n\n\n---- Cannot change permissions for field: ' + thisField + '. Skipping');
                    continue;
                }

                string fieldObjectName = thisObject+'.'+fieldName;
                FieldPermissions thisPermission = objectPerms.containsKey(fieldObjectName) ?
                                                  objectPerms.get(fieldObjectName) :
                                                  new FieldPermissions(Field=fieldObjectName,
                                                                       SobjectType=thisObject,
                                                                       ParentId=permissionSetId);

                if(thisPermission.PermissionsRead != canView || thisPermission.PermissionsEdit != canEdit){      
                    system.debug('------------------- Adjusting Permission for field: ' + fieldName); 
                    
                    csvUpdateString += thisObject+','+fieldName+','+thisProfile+','+thisPermission.PermissionsRead+','+thisPermission.PermissionsEdit+','+canView+','+canEdit+',';

                    if(thisPermission.PermissionsRead != canView) csvUpdateString += 'Read Access ';
                    if(thisPermission.PermissionsEdit != canEdit) csvUpdateString += 'Edit Access ';
                    csvUpdateString+='\n';

                    thisPermission.PermissionsRead = canView;
                    thisPermission.PermissionsEdit = canEdit;
                    updatePermissions.add(thisPermission);
                }
            }
        }
    }

    system.debug('\n\n\n----- Ready to update ' + updatePermissions.size() + ' permissions');

    Savepoint sp = Database.setSavepoint();

    string upsertResults = 'Object Name, Field Name, Permission Set Id, Profile Name, Message\n';

    Database.UpsertResult[] results = Database.upsert(updatePermissions, false);

    for(Integer index = 0, size = results.size(); index < size; index++) {
        FieldPermissions thisObj = updatePermissions[index];

        string thisProfileName = profileIdToNameMap.get(permSetToProfileIdMap.get(thisObj.ParentId));
        if(results[index].isSuccess()) {
            if(results[index].isCreated()) {
                upsertResults += thisObj.sObjectType +',' + thisObj.Field +','+ thisObj.ParentId +',' +thisProfileName+',permission was created\n';
            } else {
                upsertResults += thisObj.sObjectType +',' + thisObj.Field +','+ thisObj.ParentId +',' +thisProfileName+',permission was edited\n';
            }
        }
        else {
            upsertResults += thisObj.sObjectType +',' + thisObj.Field + ','+ thisObj.ParentId +',' +thisProfileName+',ERROR: '+results[index].getErrors()[0].getMessage()+'\n';
        }
    }
    if(!doUpdate) Database.rollback(sp);

    system.debug('\n\n\n------- Update Results');
    system.debug(upsertResults);

    Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage();
    email.setToAddresses(emailRecips);    
    email.setSubject('Object Security Update Result');
    
    string emailBody = 'Updated permissions for objects: ' + sObjects + '\n\n';
    emailBody += 'For profiles: ' + profileNames+'\n\n';
    emailBody += 'CSV Update Plan:\n\n\n\n';
    emailBody += csvUpdateString;
    emailBody += '\n\n\n\n';
    emailBody += 'CSV Update Results: \n\n\n\n';
    emailBody += upsertResults;

    email.setPlainTextBody(emailBody);

    list<Messaging.EmailFileAttachment> attachments = new list<Messaging.EmailFileAttachment>();
    Messaging.EmailFileAttachment efa = new Messaging.EmailFileAttachment();
    efa.setFileName('Update Plan.csv');
    efa.setBody(blob.valueOf(csvUpdateString));
    attachments.add(efa);

    Messaging.EmailFileAttachment efa1 = new Messaging.EmailFileAttachment();
    efa1.setFileName('Update Results.csv');
    efa1.setBody(blob.valueOf(upsertResults));
    attachments.add(efa1);

    email.setFileAttachments(attachments);
    Messaging.sendEmail(new Messaging.SingleEmailMessage[] { email });


    return csvUpdateString;
}

Hope this helps. Till next time. -Kenji

Salesforce Destructive Changes Generator/Deployer

So I was going through some of my old projects and getting them stored in GitHub like some kind of real developer, and stumbled across this one, which I figured might be worth a post. This is a utility for building a destructiveChanges.xml file that can be deployed to multiple orgs automatically. It uses an execute anonymous script to query your org in whatever way you like to generate a list of metadata to delete, and then it can deploy the result to all those orgs. Again I’ll explain a little about why this exists, and maybe you’ll find a use for it.

Problem: We have multiple orgs where through testing/development and users being users we have lots of unwanted list views and other types of metadata. We want to remove them from multiple orgs without having to do it manually and refresh sandboxes etc. We also wanted to do this once or twice a week (long story, just trust me on this).

Solution: Write an Apex script that can query for the metadata (yes, unfortunately the metadata has to be queryable for this to work) and generate a destructive changes file for a list of orgs. Then use a Node.js application to deploy those files to each org automatically.

Now, this is more of a developer resource than a full application. It comes with some sample code to show how to use it, but for it to be really useful you are most likely going to have to do some coding of your own (pretty much just writing queries to feed into the sObjectListToStringList function). You’ll need Node.js and SFDX set up for this to work.

The buildPackage.apex file is where you’ll add the logic that actually generates the destructive changes file. In the buildPackage function there is a map called packageContents; that is where you need to add the metadata you want to remove, with the key being the metadata type and the value being a list of developer names of that type. You’ll write functions to create those queries and then store those values in that map before it is passed to the buildPackageString() function.
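Not the repo’s actual code, but here’s a rough JavaScript sketch of what turning that packageContents map into a destructiveChanges.xml string might look like:

```javascript
// Hypothetical sketch: convert a map of metadata type -> developer names
// into a destructiveChanges.xml string. The function name mirrors the post,
// but the implementation here is illustrative only.
function buildPackageString(packageContents) {
    let xml = '<?xml version="1.0" encoding="UTF-8"?>\n';
    xml += '<Package xmlns="http://soap.sforce.com/2006/04/metadata">\n';

    for (const metadataType of Object.keys(packageContents)) {
        xml += '    <types>\n';
        for (const memberName of packageContents[metadataType]) {
            xml += '        <members>' + memberName + '</members>\n';
        }
        // within each <types> block, the <name> element comes after the members
        xml += '        <name>' + metadataType + '</name>\n';
        xml += '    </types>\n';
    }
    xml += '</Package>';
    return xml;
}
```

Pair the result with an empty package.xml and the mdapi deploy command and you’ve got a repeatable delete.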

The build_and_deploy_changes.js file is where you control the behavior: ignoreDeployErrors lets a failure in one org not stop the run from continuing to the next; doDeployment dictates whether any kind of deployment should be attempted (validate only or actually deploy) or whether to just generate the XML files; checkOnlyMode means changes are only validated but not deployed (true by default to stop you accidentally deleting stuff while testing); and there’s the list of usernames to use as org credentials. Of course, for any org you want this to run against you’ll need to have authorized it via SFDX.
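As a sketch, the config might look something like this (the flag names mirror the post; the values and usernames are made up):

```javascript
// Illustrative configuration; the flag names come from the post, but the
// defaults and usernames here are placeholders.
const config = {
    usernames: ['admin@prod.example.com', 'admin@uat.example.com'], // orgs authorized via SFDX
    doDeployment: true,       // attempt a deployment/validation at all, or just build the XML
    checkOnlyMode: true,      // validate only; nothing actually gets deleted
    ignoreDeployErrors: true  // a failure in one org doesn't stop the next
};
```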

Once the script has been run, a new folder will be created in the destructiveChanges folder for each org, with that org’s destructiveChanges.xml file saved there. After the files are created, the automatic deployment will run (if you’ve set doDeployment to true) and push those changes into your orgs.

You can check it out over at https://github.com/Kenji776/SFDestructiveChangesGenerator and let me know if you have any questions or suggestions on how to improve it. Just fair warning: I take absolutely no responsibility for the results of using this. It’s up to you to ensure the destructiveChanges files look okay before deploying them, so absolutely use doDeployment=false and/or checkOnlyMode=true until you are sure things look the way you want.

Salesforce Apex Script Loader

I’m working on a new tool that may or may not be useful but I’ll explain a little about why it came to exist, what it does and you can decide if it has any value for you or not.

Problem: I have a task that I need to do in our production org each day. I must find any cases that have a certain status, change the status, then change them back to re-trigger a back end process. This is obviously a menial task but can be rather annoying to do if there are more than a few records.

Solution: Write an execute anonymous script to find the cases, modify them as needed and save them back. Easy enough (it’s our prod org and writing a new batch/schedulable class to handle this temporary issue would never fly).

Next Problem: I don’t really like having to log in to the org, open the dev console and run the script. Even that seems like it takes too much time and I actually have to remember to do it. I’d rather just have it all handled automatically.

Next Solution: SFDX allows you to run saved script files against an org you have authenticated against. I can write a simple shell script that can be scheduled and calls my script. The script can handle the work and send me an email with the results in case I’m asked how the queue was that day.

Next Problem: I like to over engineer things when I’m having fun and couldn’t leave well enough alone.

Next Solution: I write an over-complicated yet flexible Node.js wrapper that I’m calling the Apex Script Loader, which is meant to allow numerous different pre-saved scripts with their own configurations and even injectable variables, for some reason.

You: “Wait, why?”

Me: “Because it was fun and I haven’t had much interesting work to do recently, okay?”

So a bit more about how this works. Say you have a script you need to run daily and want to just run it from your desktop instead of needing a scheduled class. Here is what you could do.

  1. Ensure you have Node installed and SFDX installed and your desired org authorized.
  2. Download my Apex Script Loader
  3. Make a new folder to contain your script and the config file.
  4. Write your Apex script and save it in that folder
  5. Copy the sample config file and change your username and other details as needed.
  6. Test that it runs by simply running ‘node ApexScriptLoader.js YourFolderNameHere’
  7. Watch it do its thing.
  8. Use the task scheduler on your desktop to run the command whenever you like.
  9. Say “huh, that’s neat” then promptly forget it exists because it’s hardly useful.

But wait you say –

“My script needs to pull in data from some external source and that execute anonymous script can’t access anything on my local file system! Furthermore there is no way to pass data into the script so it’s basically living in a black box aside from what it can query from Salesforce!”

Chill, bruh. That’s where my insane overthinking comes in handy. I’ve added a ‘variable injection’ feature that lets you provide as many variables as you like in JSON format via a ‘variables.json’ file (the name is configurable in the config file). In that file you can provide as many key/value pairs as you like, and when the Script Loader runs it will process those and inject them into your Apex script, making them available in the runTimeVars variable. So now whatever other processes you have simply need to modify that file, adding in their values, and your script can access them. Pretty cool, huh? I figure later on I’ll probably add the ability to read from multiple files, so if you have multiple systems all providing values they aren’t fighting over the file, but we’ll see if anyone actually needs that.

[screenshot: sample output of the script loader (truncated before it started printing actual org data)]
[screenshot: sample configuration file, showing the variables you can configure]
[screenshot: the variables.json file; you can add as many values as you like, of any type supported in JSON]
[screenshot: the actual Apex script after the variables have been injected at run time]

If you want to check it out, you can grab it over at https://github.com/Kenji776/ApexScriptLoader

The included sample script does not modify your org in any way; I’ve commented out the update statements so you can run it without fear. It will send you an email when it’s done so you know it ran (if you have it scheduled on a dumb terminal somewhere). Also, I’ve only included a bat file for invoking the script, but it’s literally three lines, so I’m pretty sure you can write your own version for Linux/Mac/smart fridge/whatever if you need.

I literally just wrote this today in the last few hours, so there are probably some bugs and less-than-optimal approaches, but I can feel myself already losing interest, so I figured I should probably post something before this ends up in the heap of interesting projects that never see the light of day. Let me know if you have questions or can think of any handy features.

Anyway, hope this helps someone. Till next time.

-Kenji

Merge Salesforce Package.xml Files

So I’ve recently been asked to play a larger role in doing our version control, which is not my strong suit, so it’s definitely been a learning experience. Our process at the moment involves creating a feature branch in Bitbucket for each new user story. Folks do whatever work they need to do, and then create a change set. From that they generate a zip file that contains all the stuff they worked on and a package.xml file (using the super cool ORGanizer Chrome plugin I just found out about). Once they get that they send it over to me, I get their changes set up as a feature branch, and then request a pull to get it back into the master branch. I’m not sure if this is the best way to do things or not, but in each new branch we need to append all the contents of the new package.xml into the existing package.xml.

Much to my surprise I couldn’t find a quick, clean, easy way to merge two XML files. I tried a few online tools and nothing really seemed to work right, so, me being me, I decided to write something to do it myself. I wasn’t quite sure how to approach this, but then I remembered that, as in my post from a couple weeks ago, I can easily convert XML into a JavaScript object. Once I do that I can simply merge the objects in memory and build a new file. One small snag I found is that the native JavaScript methods for merging objects actually overwrite any properties of the same name; they don’t smash them together like I was hoping. So with a little bit of elbow grease I managed to write some utility methods for smashing all the data together.

To use this, simply throw your XML files in the packages directory and run ‘runMerge.bat’ (this does require you to have Node.js installed). It will spit out a new package.xml in the root directory that is a merge of all your package.xml files. Either way, hope this helps someone.
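The core merge idea can be sketched like this. This is a simplified illustration operating on xml2js-style parsed objects; the actual SFPackageMerge code handles more cases (sorting options, version forcing, and so on):

```javascript
// Simplified sketch of the merge idea: given two package.xml files already
// parsed into xml2js-style objects, combine the <types> blocks by name,
// concatenating and de-duplicating member lists instead of letting one
// object's properties overwrite the other's (which Object.assign would do).
function mergePackages(pkgA, pkgB) {
    const typesByName = {};

    // Collect members per metadata type across both packages.
    for (const pkg of [pkgA, pkgB]) {
        for (const type of pkg.Package.types || []) {
            const name = type.name[0];
            if (!typesByName[name]) typesByName[name] = new Set();
            for (const member of type.members) typesByName[name].add(member);
        }
    }

    // Rebuild the merged structure with sorted, unique members.
    const types = Object.keys(typesByName).sort().map(name => ({
        members: [...typesByName[name]].sort(),
        name: [name]
    }));

    return { Package: { types: types, version: pkgA.Package.version } };
}
```

From there it is just a matter of handing the merged object back to the xml2js Builder to produce the final file.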

UPDATE (5/19): Okay, after squashing a few bugs I now proudly release a version of package merge that actually, like… works (I hope). Famous last words, I know.
UPDATE (5/20): Now supports automatic sorting of the package members, having XML files in sub-directories in the packages folder, forcing a package version, and merging all data into a master branch package file for continual cumulative add ons.
https://github.com/Kenji776/SFPackageMerge

Mass Updating Salesforce Country and State Picklist Integration Values

So it’s Friday afternoon about 4:00pm and I’m getting ready to wrap it up for the day. Just as I’m about to get up I hear the dreaded ping of my work’s instant messenger indicating I’ve been tagged. So of course I see what’s up: it’s a coworker wondering if there is any way I might be able to help with what would otherwise be an insanely laborious chore. They needed to change the ‘integration value’ on all the states in the United States from having the full state name to just the state code (e.g. Minnesota -> MN) in the State and Country Picklist. Doing this manually would take forever, and moreover it had to be done in 4 different orgs. I told him I’d see what I could do over the weekend.

So my first thought was, of course, to see if I could do it in Apex: just find the table that contains the data, make a quick script, and boom, done. Of course, it’s Salesforce, so it’s never that easy. The state and country codes are stored in the metadata and there isn’t really a great way to modify that directly in Apex (that I know of, without using that metadata wrapper class, but I didn’t want to have to install a package and learn a whole new API for this one simple task). I fooled around with a few different ideas in Apex, but after a while it just didn’t seem doable. I couldn’t find any way to update the metadata even though I could fetch it. After digging around a bit I decided the best way was probably to simply download the metadata, modify it, and push it back. So first I had to actually get the metadata file. At first I was stuck because AddressSettings didn’t appear in the list of metadata objects in VS Code (I have a package.xml builder that lets me just select whatever I want from a list and it builds the file for me) and I didn’t know how to build a package.xml file that would get it. I found a handy Stack Overflow post that gave me the command

sfdx force:source:retrieve -m Settings:Address

Which worked to pull the data. The same post also showed the package.xml file that could be used to either pull or push that metadata (with this you don’t even need the above command; you can just pull it directly by using ‘retrieve source in manifest from org’ in VS Code).

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <version>46.0</version>
    <types>
        <members>Address</members>
        <name>Settings</name>
    </types>
</Package>

 
Now that I had the data, the only issue was that there wasn’t an easy way to just do a find and replace or something to update the file. The value for each state (only in the United States) had to be copied from the state code field into the integration value field. So I decided to whip up a quick Node.js project to do it. You can download it here (it comes with the original and fixed Address.settings-meta.xml files as well, if you just want to grab those). It’s a pretty simple script, but it does require xml2js because parsing XML is a pain otherwise.

const fs = require('fs');
var xml2js = require('xml2js');
var parseString = xml2js.parseString;

try 
{
    const data = fs.readFileSync('Address.settings-meta.xml', 'utf8');

    parseString(data, function (err, result) {
        if (err) throw err;

        var root = result.AddressSettings.countriesAndStates[0].countries;
        //console.log(root);

        for(var i = 0; i < root.length; i++)
        {
            
            var countryName = root[i].integrationValue[0];
            if(countryName == 'United States')
            {
                console.log('Found US!');
        
                for(var j = 0; j < root[i].states.length; j++)
                {
                    console.log('Changing ' + root[i].states[j].integrationValue[0] + ' to ' + root[i].states[j].isoCode[0]);
                    root[i].states[j].integrationValue[0] = root[i].states[j].isoCode[0];
                }
            }
        }
        
        var builder = new xml2js.Builder();
        var xml = builder.buildObject(result);
    
        fs.writeFile("Address.settings-meta-fixed.xml", xml, function(err) {
            if(err) {
                return console.log(err);
            }
            console.log("The file was saved!");
        });     
    });
    
} 
catch (err) 
{
    console.error(err)
}

Output of my script. Always satisfying when stuff works.

With my fixed address settings file, the last step was “simply” to push it back into Salesforce. I’ll be honest, I haven’t used SFDX much, and this last step actually took longer than it should have. I couldn’t decide if I should be using force:source:deploy or force:mdapi:deploy. Seeing as I originally had to do this in a production org, I thought I had to use mdapi, but a recent update made that no longer the case. mdapi wanted me to build a zip file or something and I got frustrated trying to figure it out. I’m just trying to push one damn file, why should I need to be building manifests and making zip files and whatever?! So after some trial and error with force:source:deploy I found that it could indeed push to prod and would take just a package.xml as its input. Initially it complained about not running any tests, so I told it to only run local tests. That also failed because some other code in the org was throwing errors. As a workaround I simply provided it a specific test to run (ChangePasswordController, which is in like every org) and that worked. The final command being

sfdx force:source:deploy -x manifest/package.xml -w 10 -l RunSpecifiedTests --runtests ChangePasswordController


Hooray it finally worked!

And voilà! The fixed metadata was pushed into the org and I spared my coworker days of horrific manual data entry. I know in the end this all ended up being a fairly simple process, but it took me much longer than I initially figured, mostly just due to not knowing the processes involved or how to reference the data I wanted, so I figured maybe this would save someone some time. Till next time.

Update: If you want to grab this project you can find it over at https://github.com/Kenji776/SFAddressUpdate

Apex list all fields on page layout

Hey everyone, I know it’s been a while, but I am in fact still alive. Anyway, I’ve got something new for ya. I’ve been asked to figure out which fields are actually being used by evaluating page layouts. Fields that are actually in use will need to have their values exported from a legacy SF instance and imported into a new one. So instead of manually going through and logging every field and finding its data type, which would be crazy slow and error prone, I wrote a nifty little script. You simply feed it page layout names and it will get all the fields on them, describe them, create suggested mappings and transformations, then email you the results with each layout as a separate CSV attachment so you can use it as a starting point for an Excel mapping document. It can also describe the picklist values for every field on any object described. Hopefully your sandbox is the same as your production, so you can just save this in there and run it without having to deploy to prod. Remember to turn on email deliverability when running from a sandbox! This is still pretty new, as in this was my first time using it immediately after building it, so if you find errors with its output or have any suggestions I’m definitely open to hearing about them in the comments.

UPDATE: After adding some more features it became too large to be an execute anonymous script, so it’s now been converted to a class. Save this into a new Apex class, then from execute anonymous call LayoutDescriber.sendLayoutInfo() to run with default settings, or pass in a list of page layout names and whether you want to get picklist values or not. If you want to run it as a script you can remove the picklist value builder (lines 156-210) and the check for valid page layout names (lines 26-41). That should get it small enough to run.

public class LayoutDescriber
{
    /**
    *@Description gets all the fields for the provided page layouts and emails the current user a csv document for each. It
                  also gets related field data and provides suggested mapping configuration for import. Optionally can get picklist values for objects.
    *@Param pageLayoutNames a list of page layout names. Format is [objectName]-[namespace]__[Page layout name].
            Omit namespace and underscores if layout is not part of a managed package.
            EX: Account-SkienceFinSln__Address
            OR
            EX: Account-Account Layout
    *@Param getPicklistValues flag that controls whether picklist values for described objects should be included.
    **/
    public static void sendLayoutInfo(list<string> pageLayoutNames, boolean getPicklistValues)
    { 
        List<Metadata.Metadata> layouts = Metadata.Operations.retrieve(Metadata.MetadataType.Layout, pageLayoutNames);
        
        for(string layOutName : pageLayoutNames)
        {
            boolean layoutFound = false;
            for(integer i = 0; i < layouts.size(); i++)
            {
                Metadata.Layout layoutMd = (Metadata.Layout) layouts.get(i);
                if(layoutMd.fullName == layOutName)
                {
                    layoutFound = true;
                }
            }
            if(layoutFound == false)
            {
                throw new applicationException('No layout with name ' + layOutName + ' could be found. Please check and make sure namespace is included if needed');
            }
        }
        map<string,map<string,list<string>>> objectPicklistValuesMap = new map<string,map<string,list<string>>>();
        
        map<string,list<string>> objectFieldsMap = new map<string,list<string>>();
        
        for(integer i = 0; i < layouts.size(); i++)
        {
            Metadata.Layout layoutMd = (Metadata.Layout) layouts.get(i);
        
            list<string> objectFields = new list<string>();
            
            for (Metadata.LayoutSection section : layoutMd.layoutSections) 
            {        
                for (Metadata.LayoutColumn column : section.layoutColumns) 
                {
                    if (column.layoutItems != null) 
                    {
                        for (Metadata.LayoutItem item : column.layoutItems) 
                        {
                            if(item.field == null) continue;
                            objectFields.add(item.field);
                        }
                    }
                }
            }
            objectFields.sort();
            objectFieldsMap.put(pageLayoutNames[i].split('-')[0],objectFields);
        }
        
        system.debug(objectFieldsMap);
        
        Map<String, Schema.SObjectType> globalDescribe = Schema.getGlobalDescribe();
        
        Map<String, Map<String, Schema.SObjectField>> objectDescribeCache = new Map<String, Map<String, Schema.SObjectField>>();
        
        String userName = UserInfo.getUserName();
        User activeUser = [Select Email From User where Username = : userName limit 1];
        String userEmail = activeUser.Email;
        
        Messaging.SingleEmailMessage message = new Messaging.SingleEmailMessage();
        message.toAddresses = new String[] { userEmail };
        message.subject = 'Describe of fields on page layouts';
        message.plainTextBody = 'Save the attachments and open in excel. Fieldnames and types should be properly formated.';
        Messaging.SingleEmailMessage[] messages =   new List<Messaging.SingleEmailMessage> {message};
        list<Messaging.EmailFileAttachment> attachments = new list<Messaging.EmailFileAttachment>();
        
        integer counter = 0;    
        for(string thisObjectType : objectFieldsMap.keySet())
        {
            list<string> fields = objectFieldsMap.get(thisObjectType);
            
            Map<String, Schema.SObjectField> objectDescribeData;
            if(objectDescribeCache.containsKey(thisObjectType))
            {
                objectDescribeData = objectDescribeCache.get(thisObjectType);
            }
            else
            {
                objectDescribeData = globalDescribe.get(thisObjectType).getDescribe().fields.getMap();
                objectDescribeCache.put(thisObjectType,objectDescribeData);
            }
        
        
            string valueString = 'Source Field Name, Source Field Label, Source Field Type, Source Required, Source Size, Is Custom, Controlling Field, Target Field Name, Target Field Type, Target Required, Transformation \r\n';
            for(string thisField : fields)
            {
                if(thisField == null || !objectDescribeData.containsKey(thisField))
                {
                    system.debug('\n\n\n--- Missing field! ' + thisField);
                    if(thisField != null) valueString+= thisField + ', Field Data Not Found \r\n';
                    continue;
                }
                
                Schema.DescribeFieldResult dfr = objectDescribeData.get(thisField).getDescribe();
                
                if( (dfr.getType() == Schema.DisplayType.picklist || dfr.getType() == Schema.DisplayType.MultiPicklist) && getPicklistValues)
                {
                    List<String> pickListValuesList= new List<String>();
                    List<Schema.PicklistEntry> ple = dfr.getPicklistValues();
                    for( Schema.PicklistEntry pickListVal : ple)
                    {
                        pickListValuesList.add(pickListVal.getLabel());
                    }     
        
                    map<string,list<string>> objectFields = objectPicklistValuesMap.containsKey(thisObjectType) ? objectPicklistValuesMap.get(thisObjectType) : new map<string,list<string>>();
                    objectFields.put(thisField,pickListValuesList);
                    objectPicklistValuesMap.put(thisObjectType,objectFields);
                }
                boolean isRequired = !dfr.isNillable() && dfr.getType() != Schema.DisplayType.Boolean;
                string targetFieldName = dfr.isCustom() ? '' : thisField;
                string targetFieldType = dfr.isCustom() ? '' : dfr.getType().Name();
                string defaultTransform = '';
                
                if(dfr.getType() == Schema.DisplayType.Reference)
                {
                    defaultTransform = 'Update with Id of related: ';
                    for(Schema.sObjectType thisType : dfr.getReferenceTo())
                    {
                        defaultTransform+= string.valueOf(thisType) + '/';
                    }
                    defaultTransform = defaultTransform.removeEnd('/');
                }    
                if(thisField == 'LastModifiedById') defaultTransform = 'Do not import';
                valueString+= thisField +',' + dfr.getLabel() + ',' +  dfr.getType() + ',' + isRequired + ',' +dfr.getLength()+ ',' +dfr.isCustom()+ ',' +dfr.getController() + ','+ 
                              targetFieldName + ',' + targetFieldType +',' + isRequired + ',' + defaultTransform +'\r\n';
            }
        
            Messaging.EmailFileAttachment efa = new Messaging.EmailFileAttachment();
            efa.setFileName(pageLayoutNames[counter]+'.csv');
            efa.setBody(Blob.valueOf(valueString));
            attachments.add(efa);
            
            counter++;
        }
        //if we are getting picklist values we will now build a document for each object. One column per picklist, with it's rows being the values of the picklist
        if(getPicklistValues)
        {
            //loop over the object types
            for(string objectType : objectPicklistValuesMap.keySet())
            {
                //get all picklist fields for this object
                map<string,list<string>> objectFields = objectPicklistValuesMap.get(objectType);
                
                //each row of data will be stored as a string element in this list
                list<string> dataLines = new list<string>();
                integer rowIndex = 0;
                
                //string to contains the header row (field names)
                string headerString = '';
                
                //due to how the data is structured (column by column) but needs to be built (row by row) we need to find the column with the maximum amount of values
                //so our other columns can insert a correct number of empty space placeholders if they don't have values for that row.
                integer numRows = 0;
                for(string fieldName : objectFields.keySet())
                {
                    if(objectFields.get(fieldName).size() > numRows) numRows = objectFields.get(fieldName).size();
                }
                
                //loop over every field now. This is going to get tricky because the data is structured as a field with all its values contained but we need to build
                //our spreadsheet row by row. So we will loop over the values and create one entry in the dataLines list for each value. Each additional field will then add to the string
                //as required. Once we have constructed all the rows of data we can append them together into one big text blob and that will be our CSV file.
                for(string fieldName : objectFields.keySet())
                {
                    headerString += fieldName +',';
                    rowIndex = 0;
                    list<string> picklistVals = objectFields.get(fieldName);
                    for(integer i = 0; i<numRows; i++ )
                    {
                        string thisVal = i >= picklistVals.size() ? ' ' : picklistVals[i]; 
                        if(dataLines.size() <= rowIndex) dataLines.add('');
                        dataLines[rowIndex] += thisVal + ', ';
                        rowIndex++;        
                    }
                }
                headerString += '\r\n';
                
                //now that our rows are constructed, add newline chars to the end of each
                string valueString = headerString;
                for(string thisRow : dataLines)
                {            
                    thisRow += '\r\n';
                    valueString += thisRow;
                }
                
                Messaging.EmailFileAttachment efa = new Messaging.EmailFileAttachment();
                efa.setFileName('Picklist values for ' + objectType +'.csv');
                efa.setBody(Blob.valueOf(valueString));
                attachments.add(efa);        
            }
        }
        
        
        message.setFileAttachments( attachments );
        
        Messaging.SendEmailResult[] results = Messaging.sendEmail(messages);
         
        if (results[0].success) 
        {
            System.debug('The email was sent successfully.');
        } 
        else 
        {
            System.debug('The email failed to send: ' + results[0].errors[0].message);
        }
    }
    public class applicationException extends Exception {}
    
    public static void sendLayoutInfo()
    {
        list<string> pageLayoutNames = new List<String>();
        pageLayoutNames.add('Account-Account Layout');
        pageLayoutNames.add('Contact-Contact Layout');
        pageLayoutNames.add('Opportunity-Opportunity Layout');
        pageLayoutNames.add('Lead-Lead Layout');
        pageLayoutNames.add('Task-Task Layout');
        pageLayoutNames.add('Event-Event Layout');
        pageLayoutNames.add('Campaign-Campaign Layout');
        pageLayoutNames.add('CampaignMember-Campaign Member Page Layout');
        sendLayoutInfo(pageLayoutNames, true);
    }
}


The result is an email with a bunch of attachments: one for each page layout and one for each object’s picklist fields (if enabled).

mmmmm attachments

For example this is what is produced for the lead object.

Nicely formatted table of lead fields and suggested mappings.


And here is what it built for the picklist values

Sweet sweet picklist values. God I love properly formatted data.
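The trickiest part of building that picklist file is pivoting column-oriented data (one field with all of its values) into row-by-row CSV lines, padding shorter columns with blanks. The same idea the Apex class uses, sketched in JavaScript with made-up sample data:

```javascript
// Pivot column-oriented picklist data into CSV rows: each field owns a
// column, and columns shorter than the longest one are padded with empty
// cells so every row lines up.
function buildPicklistCsv(fieldValues) {
    const fields = Object.keys(fieldValues);

    // The longest value list determines how many data rows we need.
    const numRows = Math.max(...fields.map(f => fieldValues[f].length));

    const lines = [fields.join(',')];
    for (let row = 0; row < numRows; row++) {
        // Pad columns that have run out of values with an empty cell.
        lines.push(fields.map(f => fieldValues[f][row] || '').join(','));
    }
    return lines.join('\r\n');
}
```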

Anyway, I hope this might help some of y’all out there who are given the painful task of finding which fields are actually being used on page layouts. Till next time.

Salesforce development is broken (and so am I)

Before I begin this is mostly a humor and venting post. Don’t take it too seriously.

So I’m doing development on a package that needs to work for both person accounts and regular accounts. Scratch orgs didn’t exist when this project was started, so we originally had a developer org, then a packaging org which contained the namespace for the package (this ended up being a terrible idea, because all kinds of weird bugs start to show up when you do your dev without a namespace and then try to add one: any dynamic code pretty much breaks, and you have to remove the namespace from any data returned by Apex controllers that provide data to field inputs in Lightning; field set names, object names, etc. all get messed up).

Still, after adding some workarounds we got that working. However, since the developer org doesn’t have person accounts, we need another org that does so we can add in the extra bits of logic where needed. We wanted to keep the original dev org without person accounts, as it’s sort of an auxiliary feature and we didn’t want it causing any problems with the core package.

Development of the core package goes on for about a year. Now it’s time to tackle adding the extra logic for person accounts, which in themselves are awful. I don’t know who thought it was a good idea to basically have two different schemas, with the second being a half-broken, poorly defined bastardization of the original good version. Seriously, they are sometimes account-like, sometimes contact-like; the account has the contact fields, but a separate contact object kind of exists, yet you cannot get to it without directly entering the Id in the URL. The whole thing barely makes any sense. Interacting with them from Apex is an absolute nightmare. In this case account and contact data are integrated with a separate system, which also has concepts of accounts and contacts. So normally we create an account, then tie contacts to it. In the case of person accounts we have to create some kind of weird hybrid of the data, creating both an account and a contact from one object, but not all the data is directly on the account. For example, we need to get the mailing address off the contact portion, plus a few other custom fields that the package adds. So we have to like smash the two objects together and send it. It’s just bizarre.

Anyway, at this point scratch orgs exist, but we cannot create one from our developer org for some reason; the dev hub option just doesn’t exist. The help page says dev hub/scratch orgs are available in developer orgs, but apparently not in this specific one, for no discernible reason.

We cannot enable them in our packaging org either, as you cannot enable dev hub in an org with namespaces. So my coworker instead enables dev hub in his own personal dev org and creates me a scratch org, into which I install the unmanaged version of the package to easily get all the code and such. Then I just manually roll my changes from that org into dev, and from dev into packaging. That works fine until the scratch org expires, which apparently it just did. Now I cannot log into it, and my dev is suddenly halted. There were no warning emails received (maybe he got one but didn’t tell me) and there’s no way to re-enable the org; it’s just not accessible anymore. Thank goodness I have local copies of my code (we haven’t really gotten version control integrated into our workflow yet) or else I’d have lost all my work.

I now have to set out to get a new org set up (when I’m already late for a deadline on some fixes). Fine, so I attempt to create a scratch org from my own personal dev org (which itself is halfway broken; it still has the theme from before ‘classic’, and enabling Lightning gives me a weird hybrid version which looks utterly ridiculous).

I enable dev hub and set out to create my scratch org from VS Code (I’ve never done this so I’m following a tutorial). So I create my project, authorize my org, and then, lo and behold, an error occurs while trying to create my scratch org: “ERROR running force:org:create: Must pass a username and/or OAuth options when creating an AuthInfo instance.” I can’t find any information on how to fix this; I tried recreating the project and reauthorizing, and still nothing. Not wanting to waste any more time, I say fine, I’ll just create a regular old developer org, install the unmanaged package, and enable person accounts.

I create my new dev org (after some mild annoyance at not being able to end my username with a number) and get it linked to my IDE. So now I need to enable person accounts, but wait, you cannot do that yourself. You have to contact support to enable that, and guess what, Salesforce no longer allows you to create cases from a developer org. Because this package is being developed as an ISV-type package I don’t have a prod org to log in to and create a case from. So now I’m mostly stuck. I’ve asked a co-worker who has access to a production org to log a case, giving them my org ID, and I’m hoping support will be willing to accept a feature request for an org other than the one the case is coming from. Otherwise I don’t know what I’ll do.


I’m sure once things mature more it’ll get better, and a good chunk of these problems are probably my own fault somehow but still, this is nuts.


Salesforce Lightning DataTable Query Flattener

So I was doing some playing around with the Salesforce Lightning Datatable component, and while it does make displaying query data very easy, it isn’t super robust when it comes to handling parent and child records. Just to make life easier in the future I thought it might be nice to make a function which could take a query result returned by a controller and ‘flatten’ it so that all the data is available to the datatable, since it cannot access nested arrays or objects. Of course the table itself doesn’t have a way to iterate over nested rows, so the child array flattening function is not quite as useful (unless, say, you wanted to show a contact’s most recent case or something). Anyway, hopefully this will save you some time from having to write wrapper classes or having to skip using the datatable if you have parent or child nested data.

Apex Controller

public with sharing class ManageContactsController {

    @AuraEnabled
    public static list<Contact> getContacts()
    {
        return [select firstname, name, lastname, email, phone, Owner.name, Owner.Profile.Name, (select id, subject from cases order by createdDate desc limit 1) from contact];
    }
}

Lightning Controller

({
   init: function (component, event, helper) {
        component.set('v.mycolumns', [
                {label: 'Contact Name', fieldName: 'Name', type: 'text'},
                {label: 'Phone', fieldName: 'Phone', type: 'phone'},
                {label: 'Email', fieldName: 'Email', type: 'email'},
            	{label: 'Owner', fieldName: 'Owner_Name', type: 'text'},
            	{label: 'Most Recent Case', fieldName: 'Cases_0_Subject', type: 'text'}
            ]);
        helper.getData(component, event, 'getContacts', 'mydata');
    }
})

Helper

({
    flattenObject : function(propName, obj)
    {
        var flatObject = {};

        for(var prop in obj)
        {
            //only prefix with the parent property name when it isn't a numeric array index
            var preAppend = isNaN(propName) ? propName+'_' : '';

            if(typeof obj[prop] === 'object' && obj[prop] !== null)
            {
                //this property is an object itself, so flatten it recursively and merge the result in
                Object.assign(flatObject, this.flattenObject(preAppend+prop, obj[prop]));
            }
            else
            {
                flatObject[preAppend+prop] = obj[prop];
            }
        }
        return flatObject;
    },
    
    flattenQueryResult : function(listOfObjects) {
        //typeof never returns 'Array', so use Array.isArray to detect and wrap single records
        if(!Array.isArray(listOfObjects))
        {
            listOfObjects = [listOfObjects];
        }

        console.log('List of Objects is now....');
        console.log(listOfObjects);
        for(var i = 0; i < listOfObjects.length; i++)
        {
            var obj = listOfObjects[i];
            for(var prop in obj)
            {
                if(!obj.hasOwnProperty(prop)) continue;
                if(Array.isArray(obj[prop]))
                {
                    //child records: flatten each one with its index in the name, e.g. Cases_0_Subject
                    for(var j = 0; j < obj[prop].length; j++)
                    {
                        Object.assign(obj, this.flattenObject(prop+'_'+j, obj[prop][j]));
                    }
                }
                else if(typeof obj[prop] === 'object' && obj[prop] !== null)
                {
                    //parent records: flatten into prefixed properties, e.g. Owner_Name
                    Object.assign(obj, this.flattenObject(prop, obj[prop]));
                }
            }
        }
        return listOfObjects;
    },
    getData : function(component, event, methodName, targetAttribute) {
        var action = component.get('c.'+methodName);
        action.setCallback(this, $A.getCallback(function (response) {
            var state = response.getState();
            if (state === "SUCCESS") {
                console.log('Got Raw Response for ' + methodName + ' ' + targetAttribute);
                console.log(response.getReturnValue());
                
                var flattenedObject = this.flattenQueryResult(response.getReturnValue());
                
                component.set('v.'+targetAttribute, flattenedObject);
                
                console.log(flattenedObject);
            } else if (state === "ERROR") {
                var errors = response.getError();
                console.error(errors);
            }
        }));
        $A.enqueueAction(action);
    }
})

Component (Sorry my code highlighter didn’t like trying to parse this)

<aura:component controller="ManageContactsController" implements="forceCommunity:availableForAllPageTypes" access="global">
    <aura:attribute name="mydata" type="Object"/>
    <aura:attribute name="mycolumns" type="List"/>
    <aura:handler name="init" value="{! this }" action="{! c.init }"/>
    <h3>Contacts (With Sharing Applied)</h3>
    <lightning:datatable data="{! v.mydata }"
                         columns="{! v.mycolumns }"
                         keyField="Id"
                         hideCheckboxColumn="true"/>
</aura:component>
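To see what the flattening produces, here’s a standalone illustration of the same idea (the record data is made up), showing how a parent lookup like Owner.Name becomes the Owner_Name property the column definitions above expect:

```javascript
// Standalone illustration of the flattening logic: parent record fields
// become underscore-prefixed properties on the flat record, matching the
// fieldName values used in the datatable column definitions.
const helper = {
    flattenObject: function (propName, obj) {
        const flat = {};
        for (const prop in obj) {
            // prefix with the parent property name unless it's a numeric index
            const preAppend = isNaN(propName) ? propName + '_' : '';
            if (typeof obj[prop] === 'object' && obj[prop] !== null) {
                Object.assign(flat, this.flattenObject(preAppend + prop, obj[prop]));
            } else {
                flat[preAppend + prop] = obj[prop];
            }
        }
        return flat;
    }
};

// A record shaped roughly like the controller's query result.
const contact = { Name: 'Jane Doe', Owner: { Name: 'Sam Smith', Profile: { Name: 'Admin' } } };
Object.assign(contact, helper.flattenObject('Owner', contact.Owner));
console.log(contact.Owner_Name);         // "Sam Smith"
console.log(contact.Owner_Profile_Name); // "Admin"
```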

Result

Hope this helps!