
Salesforce going overboard on governor limits

This is more just a venting post than anything truly useful, so if you are looking for some helpful information, tips or tricks, you are just as out of luck as I am.

First let me say, I understand governor limits, and I think they are a decent idea. They force you to code efficiently and make sure the platform remains stable for everyone. It’s a good practice, and I think with some more refining it would be an awesome tool. There is, however, a problem.

Salesforce has gone completely over the deep end with their governor limits. It’s to the point where even very efficient code that makes as few database hits as possible still cannot run because of arbitrary restrictions. If you are writing your code as lean as possible, governor limits shouldn’t even really need to be considered. They should be there to stop insane loops or out-of-control code. More and more I am finding that even well-written, bulk-safe code is butting up against these limits. There are some projects I have simply had to scrap because there was no way to make them run within the rules.

For the amount of money we pay Salesforce, it seems they could scale their architecture to handle some more advanced queries and larger data sets. Hell, Google can stream video to every last internet connection on earth, free to the end user, yet Salesforce, backed by Oracle and paid handsomely by subscribers, can’t get users the data they own in a reliable, easy fashion. I’m not talking about moving gigs at a time here, either. 100,000 rows shouldn’t be anything. A million rows should still be within reason. I’m just tired of fighting all the time to get anything done. Some companies deal with large volumes of data. It happens. Deal with it. We pay you to deal with it. If it was just me having problems, I’d say fine, I suck, I’m a moron, and maybe I am. At the same time, I don’t think I’m the only one tired of fighting, sneaking, tricking and compromising to get work done. Programming is hard enough, ya know? For being an enterprise application, they sure seem to have problems handling enterprise levels of data.

Anyway, that’s my rant. I’m done. Please remember, I do understand governor limits and I am okay with them; I just think they need to be loosened up a little. Maybe let the user set the limits. Say, okay, this trigger should never pull more than 500,000 records; if it does, then you can throw an error. Am I crazy? Am I the only one having these issues? If so, I’ll suck it up and admit I suck. But I really don’t think that is the case, this time.
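You can’t actually change the platform’s caps yourself, but you can approximate the “let the user set the limit” idea today with the standard Limits class. Here’s a minimal sketch of a self-imposed cap; the RowCapGuard class and its threshold are hypothetical choices of mine, not anything Salesforce provides:

// Hypothetical self-imposed row cap, enforced with the standard Limits class.
public class RowCapGuard
{
    public class RowCapException extends Exception {}

    // Arbitrary threshold chosen per org; the platform's own per-transaction
    // query row limit still applies no matter what you pick here.
    private static final Integer MAX_ROWS = 10000;

    // Call this after any big query to fail fast with a readable error
    // instead of an unhandled platform limit exception.
    public static void check()
    {
        // Limits.getQueryRows() is the number of records returned by
        // SOQL queries so far in this transaction.
        if (Limits.getQueryRows() > MAX_ROWS)
        {
            throw new RowCapException('Transaction has pulled ' +
                Limits.getQueryRows() + ' rows, more than the self-imposed cap of ' +
                MAX_ROWS + '.');
        }
    }
}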


Non-selective query against large object type

So recently one of our triggers started throwing this error.

caused by: System.QueryException: Non-selective query against large object type (more than 100000 rows). Consider an indexed filter or contact salesforce.com about custom indexing.
Even if a field is indexed a filter might still not be selective when:
1. The filter value includes null (for instance binding with a list that contains null)
2. Data skew exists whereby the number of matching rows is very large (for instance, filtering for a particular foreign key value that occurs many times)

Trigger.PaymentDuplicatePreventer: line 23, column 2

It was coming from a trigger that had been running flawlessly for many months, so originally I was confused as to why this was happening. It then became obvious that the type of object I was querying (Payments__c) now had over 100,000 rows, and the query was just too damn big. It’s kind of odd because my actual query would normally only return a few rows, but the dataset it was reading from is too large, or some nonsense like that. After some reading, I found the following suggested fixes.

1) Mark the field as an external identifier, which would force indexing to be enabled for this field. I couldn’t do this, though, because it was a formula field.

2) Ask Salesforce to enable custom indexing on the field. I asked them, they said they couldn’t because it was a formula field.

3) Break the large query into smaller queries. No, just no. I’m not doing that. That’s dumb and would cause governor limit problems.

4) Add more WHERE conditions to your query. At first this seemed insane as well, since my query was already pretty tight.

Below you can see the code for the trigger I’m talking about. It’s basically straight out of the Salesforce cookbook.

trigger PaymentDuplicatePreventer on Payments__c (before insert, before update)
{
    //Create a map to hold all the payments we have to query against
    Map<String, Payments__c> payMap = new Map<String, Payments__c>();
    
    //Loop over all passed in payments
    for (Payments__c payment : System.Trigger.new)
    {
    
        // As long as this payment has a payment code, and either it's an insert or the code doesn't conflict with another payment in this batch
        if ((payment.UniquePaymentCode__c != null) && (System.Trigger.isInsert || (payment.UniquePaymentCode__c != System.Trigger.oldMap.get(payment.Id).UniquePaymentCode__c)))
        {
        
            // Make sure another new payment isn't also a duplicate. If it is, flag it, if not, add it
            if (payMap.containsKey(payment.UniquePaymentCode__c))
            {
                payment.UniquePaymentCode__c.addError('Another new payment has the same unique identifier.');
            }
            
            else
            {
                payMap.put(payment.UniquePaymentCode__c, payment);
            }
        }
    }
    
    /* Using a single database query, find all the payments in
    the database that have the same uniquepaymentcode as any
    of the payments being inserted or updated. */
    if(payMap.size() > 0)
    {
        for (Payments__c payment : [SELECT Id,UniquePaymentCode__c FROM Payments__c WHERE UniquePaymentCode__c IN :payMap.KeySet()])
        {
            try
            {
                Payments__c newPay = payMap.get(payment.UniquePaymentCode__c);
                if(newPay != null)
                {
                    newPay.UniquePaymentCode__c.addError('A payment for this person in this study already exists.');
                }
            }
            catch ( System.DmlException e)
            {
                payment.addError('Payment ' + payment.UniquePaymentCode__c + ' Error: ' + e);
            }
        }    
    }
} 

The problem is the query:
SELECT Id,UniquePaymentCode__c FROM Payments__c WHERE UniquePaymentCode__c IN :payMap.KeySet()

How could I refine that any further and still find all my duplicates? Well, thankfully in this case, the only way duplicates can really happen is within a short time span: someone getting multiple checks for the same event. So really I only need to look at the last few months to find any duplicates. Anything older isn’t really a duplicate. So I changed my query to

SELECT Id, UniquePaymentCode__c FROM Payments__c WHERE UniquePaymentCode__c IN :payMap.KeySet() AND CreatedDate = LAST_90_DAYS

And that did the trick. So, the moral of the story: if you have a formula field that you are using as a unique key to prevent duplicates, eventually you are going to hit the problem described above. The only fix I have found is to add a condition that reduces the number of records the query has to look at. While it may not always be feasible to use the date, perhaps there are other ways you can narrow the result set while still finding any duplicates. Think outside the box a little, and you should be able to come up with a way to do what you need to do.
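For reference, here is how the narrowed query slots back into the trigger’s duplicate check. It’s the same single query as before, with only the date filter added:

/* Same single-query duplicate check as before. The CreatedDate filter
means the database only has to consider recent rows, which keeps the
query selective even once the object has hundreds of thousands of records. */
if (payMap.size() > 0)
{
    for (Payments__c payment : [SELECT Id, UniquePaymentCode__c
                                FROM Payments__c
                                WHERE UniquePaymentCode__c IN :payMap.keySet()
                                AND CreatedDate = LAST_90_DAYS])
    {
        Payments__c newPay = payMap.get(payment.UniquePaymentCode__c);
        if (newPay != null)
        {
            newPay.UniquePaymentCode__c.addError('A payment for this person in this study already exists.');
        }
    }
}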