Oh my god. It's full of code!

jQuery

Salesforce Orchestra CMS Controller Extensions

So I’ve been working with Orchestra CMS for Salesforce recently, and for those who end up having to use it, I have a few tips.

1) If you intend to use jQuery (a newer version than the one they include), include it and put it in no-conflict mode. Newer versions of jQuery will break the admin interface (mostly around trying to publish content), so you absolutely must put it in no-conflict mode. This one took me a while to debug.

2) While not officially supported, you can use controller extensions in your templates. However, the class and all contained methods MUST be global. If they are not, you will again break the admin interface. This was kind of obvious after the fact, but it took me well over a week to stumble across the fix. The constructor for the extension takes a cms.CoreController object. Alternatively, if you don’t want to mess with extensions, you can use apex:include to pull in another page that has its controller set to whatever you want. The included page does not need to have the CMS controller as its primary controller, so you can do whatever you want there. I might actually recommend that approach, as Orchestra’s official stance is that they do not support extensions, and even though I HAD it working, today I am noticing it act a little buggy (not able to add or save new content to a page).
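
If you go the apex:include route, the shape is roughly this (the page and controller names here are hypothetical, just to illustrate the pattern):

<!-- MyWidgetPage: a normal Visualforce page with its own controller,
     free of the CMS controller entirely -->
<apex:page controller="MyWidgetController" showHeader="false" standardStylesheets="false">
    <!-- whatever markup and remoting you need -->
</apex:page>

Then inside the CMS template you simply drop in <apex:include pageName="MyWidgetPage"/> and the included page runs with its own controller.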

3) Don’t be afraid to use HTML component types in your pages (individual items derived from your page template) to call JavaScript functions stored in your template. In fact, I found that you cannot call remoting functions from within an HTML component directly, but you can call a function which in turn invokes a remoting function.

So if we combine the above techniques we’d have a controller that looks like this

global class DetailTemplateController
{
    //extension constructor receives the CMS core controller
    global DetailTemplateController(cms.CoreController stdController) {

    }

    @RemoteAction
    global static List<User> getUsers()
    {
        return [SELECT Id, Name, Title, FullPhotoUrl FROM User];
    }
}

And your template might then look something like this

<apex:page id="DetailOne" controller="cms.CoreController" standardStylesheets="false" showHeader="false" sidebar="false" extensions="DetailTemplateController" >
	<apex:composition template="{!page_template_reference}">
		<apex:define name="header"> 
			<link href="//ajax.aspnetcdn.com/ajax/jquery.ui/1.10.3/themes/smoothness/jquery-ui.min.css" rel='stylesheet' />

			<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
			<script> var jqNew = jQuery.noConflict();</script> 
			<script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.10.3/jquery-ui.min.js"></script> 

			<script>
			var website = new Object();
			jqNew( document ).ready(function() {
				console.log('jQuery loaded');
			});

			website.buildUserTable = function()
			{
				//remoting request
				Visualforce.remoting.Manager.invokeAction(
					'{!$RemoteAction.DetailTemplateController.getUsers}', 
					function(result, event){
						if (event.type === 'exception') 
						{
							console.log(event.message);
						} 
						else 
						{
							var cols = 0;

							var tbl = jqNew('#bioTable > tbody');
							var tr;
							for(var i = 0; i < result.length; i++)
							{
								if(cols == 0){tr = jqNew('<tr></tr>');}                              

								var td = jqNew('<td></td>');

								var img = jqNew('<img class="profilePhoto">');
								img.attr('src',result[i].FullPhotoUrl);
								img.attr('title',result[i].Title);
								img.attr('alt',result[i].Name);
								img.data("record", result[i]);
								img.attr('id',result[i].Id);

								td.append(img);

								tr.append(td);

								if(cols == 2 || i == result.length-1){
									tbl.append(tr);
									cols = -1;
								}
								cols++;

							}

						}
					});
			}
			</script>
		</apex:define>
		<apex:define name="body">
			<div class="container" id="mainContainer">
				<div class="pageContent">
					<div id="header">
						<apex:include pageName="Header"/>
						<div id="pageTitle">
							<cms:Panel panelName="PageTitle" panelController="{!controller}" panelheight="50px" panelwidth="200px"/>
						</div>
					</div>
					<div id="pageBody">
						<p>
							<cms:Panel panelName="PageContentArea" panelController="{!controller}"  panelheight="200px" panelwidth="400px" />
						</p>
						<div class="clearfloat"></div>
					</div>

					<!-- end .content --> 
				</div>
			</div>
			<div id="footer_push"></div>
			<div id="footer">
				<apex:include pageName="Footer"/>
			</div>
		</apex:define>
	</apex:composition>
</apex:page>

Then in our page we can add an HTML content area and include

<table id="bioTable">
	<tbody></tbody>
</table>
<script>website.buildUserTable();</script>

So when that page loads, it will draw that table and invoke the website.buildUserTable function. That function in turn calls the remoting method in our DetailTemplateController extension. The query runs and returns the user data, which is then used to create the rows of the table that are appended to the #bioTable’s body. It’s a pretty slick approach that seems to work well for me. Your mileage may vary, but at least rest assured you can use your own version of jQuery, and you can use controller extensions, which I wasn’t sure about when I started working with it. Till next time.


Saltybot – A descent into salty, salty madness

*The article below was written about SaltyBet as it was in September 2013. I have no idea of the current standing of the website or whether the techniques below are still applicable/required (looking now, it seems the fighter names are available right next to the bet buttons, so the whole OCR scanner to extract the names is no longer required. Bummer, that made me feel so smart too)

I. CAN’T. STOP. SALTYBETTING.

I don’t even care about the bucks; it’s about solving the problem. I know that somewhere in the chaos there is data that will allow me to get every bet correct and ‘solve’ the problem. But wait, let me back up a step. First, what the hell is SaltyBet? It’s an online video feed of your favorite characters, new and old, fighting it out under computer AI control. You bet on the matches and get ‘saltybucks’ if you win, based on a weighted odds system. Before they duke it out, both characters stand on screen for about 45 seconds, allowing you to consider whom to bet on. After that window, betting closes, the screen blanks out for a moment, and then the fight begins. At that point all you can do is sit and watch (and bitch in the chat window). No, you can’t do anything with the money, and there is very little rhyme or reason to what makes a good character (one of the best, if not the best currently, is an interpretation of Ronald McDonald). It’s pretty addictive to watch and I’ve been having a good time just keeping it on my spare monitor during my work day.

THEN

IT

HIT

ME

I could write a small program, just a little JavaScript bookmarklet, that could take the names of the fighters and, using past data, tell me who is most likely to win. Hell, maybe I could even automate some of it. Overcome with excitement, I pushed the major technical challenges out of my mind so as not to kill my motivation. I just wanted to try to see what I could do. So my last week’s obsession began.

The Bot

First off, I just got this working no more than 45 minutes ago, so I’m pretty excited about it. That does mean that some of the code I show you and the approaches I use may not be the best, as they are mostly just a POC to see if my architecture works. I don’t intend to release this final product, so it may never get totally cleaned up. That aside, please find below my descent into madness as I write one of the most complicated cobbled-together things I’ve ever even heard of.

I needed data. A lot of it. For an analytic engine you need data points. Thankfully, if you are willing to pitch in a few bucks for server costs, you get access to every character’s win/loss record and to the results of every match you have bet on. So not only could you calculate win percentage, but if you knew who various characters won and lost against, you could implement a rating system like they use in chess (ELO) or maybe even Glicko. The issue, though, was that this data was not presented via any kind of API; it was just an HTML table. So the first challenge was getting that HTML data into some kind of actual data structure. After a little bit of ajax and jQuery magic I was able to convert the table into a JavaScript object keyed by fighter name. I ran each name through a simple regex to remove spaces and lowercase it, to make matching easier. I was making progress: now typing in two fighters’ names would net me their win/loss percentages, which are powerful numbers but of course don’t tell the whole story (a person with a 1-0 record might have a 100% win rate, but that isn’t nearly as good as someone with a 30-5 record, even though the latter has a lower win percent).
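
That scraping step boils down to something like this sketch (the row shape `[name, wins, losses]` and the `buildCareers` helper are assumptions for illustration; the real rows came out of the HTML table via jQuery):

```javascript
// Normalize a fighter name the way described above: lowercase, no spaces.
function normalizeName(name) {
  return name.toLowerCase().replace(/\s+/g, '');
}

// Turn scraped [name, wins, losses] rows into an object keyed by
// normalized fighter name, with win percent precomputed.
function buildCareers(rows) {
  var careers = {};
  rows.forEach(function (row) {
    var wins = Number(row[1]), losses = Number(row[2]);
    careers[normalizeName(row[0])] = {
      wins: wins,
      losses: losses,
      total: wins + losses,
      winPercent: Math.round(100 * wins / (wins + losses))
    };
  });
  return careers;
}
```

With jQuery, the rows would come from mapping each `<td>` of something like `$('#statsTable tr')` into those arrays.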

Next up was trying to implement a real rating system. I know there are systems out there that can tell you the true strength of a player based on their previous matches and who they won and lost against. Chess ratings implement this kind of system, as do various competitive online games. I knew that something like that could go a long way toward getting me real odds. The problem is that you can only see the record of who a fighter won and lost against if you bet on that match. There was no way I’d have time to sit around and bet on every match, nor did I want to. I knew I’d have to set up some kind of automated betting to put in small bets for me and gather data while I was doing other things. First, though, I wanted to get the rating system in place so that as data came in I could see it affect the results of the equation. After some research and consideration I decided on the ELO system (mostly because I am a bit familiar with it, it’s fairly easy to implement, and I found some sample code so I didn’t have to write it myself XD). The basic idea is that every character starts at a rating of 1200; then, based on who they win or lose against, their rating changes. The more matches they go through, the higher the confidence in the result. The simple function for calculating the change in score looks like this.

function CalculateEloRatingChange(player1Score, player2Score, result, gamesPlayed)
{
	var K = Math.round(800 / gamesPlayed); //K-factor shrinks as a fighter plays more games
	var EloDifference = player2Score - player1Score;

	//expected probability of player 1 winning, per the standard ELO formula
	var percentage = 1 / (1 + Math.pow(10, EloDifference / 400));

	if (result == 'win')
	{
		return Math.round(K * (1 - percentage));
	}
	else
	{
		return Math.round(K * (0 - percentage));
	}
}

Like I said, this was pulled from another source, so I’m not 100% certain of the logic, but it seems solid and it’s been returning reasonable numbers for me. So now all I had to do was pull the betting-history table and iterate over it from start to finish, each time feeding the result of the match into this function and recording the new ELO score for that character. Of course, since I only have access to the data for matches I have bet on, this is not perfect or absolute, but it is better than nothing, and as I keep gathering more data it will just get more and more accurate (addendum: later on I started farming results from other players, offering them access to the tool in exchange for their betting history, which I could feed into my engine).
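
That replay loop might look something like this (a sketch under assumptions: history rows of `{winner, loser}` ordered oldest first, everyone starting at 1200, and an inlined rating-change helper mirroring the function above):

```javascript
// Rating change for one fighter: same math as CalculateEloRatingChange,
// with `won` as a boolean instead of a result string.
function eloChange(myScore, theirScore, won, gamesPlayed) {
  var K = Math.round(800 / gamesPlayed);
  var expected = 1 / (1 + Math.pow(10, (theirScore - myScore) / 400));
  return Math.round(K * ((won ? 1 : 0) - expected));
}

// Replay the betting history oldest-first, updating each fighter's ELO.
function replayHistory(history) {
  var ratings = {}; // name -> { eloScore, games }
  function fighter(name) {
    if (!ratings[name]) ratings[name] = { eloScore: 1200, games: 0 };
    return ratings[name];
  }
  history.forEach(function (match) {
    var w = fighter(match.winner), l = fighter(match.loser);
    w.games++; l.games++;
    var delta = eloChange(w.eloScore, l.eloScore, true, w.games);
    w.eloScore += delta;
    // the loser's change is computed against the winner's pre-update score
    l.eloScore += eloChange(l.eloScore, w.eloScore - delta, false, l.games);
  });
  return ratings;
}
```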

Now I was able to get a win percent and an ELO score. I was well on the way to having some meaningful data that could point me in the right direction. Both these facts left out something that I thought was pretty crucial, though: if this EXACT matchup has happened before, the results are likely to be repeated and should definitely be taken into consideration. So in the betting history I also decided to look to see whether this same match had happened before. If so, I initially just printed out a warning in my utility to let me know. I knew that it should have its own numerical meaning as well, but I couldn’t find any formula like that online, so I decided to brew my own. I really don’t have much of a background in probability and stats, so I am really not sure about the weights I assigned the various outcomes. Maybe someone with those skills could help me tweak this. Overall my scoring formula looks like this

function calculateProjectedWinner(player1Name,player2Name)
{
	//find players rating difference and record
	fighterCareers[player1Name].ratingDiff = fighterCareers[player1Name].eloScore - fighterCareers[player2Name].eloScore;
	fighterCareers[player2Name].ratingDiff = fighterCareers[player2Name].eloScore - fighterCareers[player1Name].eloScore;

	//calculate their win probabilities. The Elo system has its own function for calculating win probability
	//based on scores, so I just use that as my 'baseline' probabilities. Then I modify it using my other data later on.
	fighterCareers[player1Name].eloWinProbability = parseInt(calculateEloWinOddsPercent(fighterCareers[player1Name].ratingDiff) * 100,10);
	fighterCareers[player2Name].eloWinProbability = parseInt(calculateEloWinOddsPercent(fighterCareers[player2Name].ratingDiff) * 100,10);

	//calculate custom their win probabilities starting at ELO
	fighterCareers[player1Name].computedWinProbability = fighterCareers[player1Name].eloWinProbability;
	fighterCareers[player2Name].computedWinProbability = fighterCareers[player2Name].eloWinProbability;

	//now we need to see if these two players have had any previous matches together. If so we iterate over them
	//and modify their win probabilities accordingly.
	var prevMatches = findPreviousMatch(player1Name,player2Name);

	for(var i = 0; i < prevMatches.length; i++)
	{
		var winner = prevMatches[i].winner;
		var loser = prevMatches[i].loser;

		//we don't want to push their probability much higher than ~95 because we can never be that sure, and
		//anything over 100 is totally meaningless. I decided a factor of 8 percent per win seems about decent. Maybe
		//it should be a little more? I don't know, it's still something I'm kind of playing with.
		//(note: these probabilities are whole percents on a 0-100 scale, so the caps and bumps are whole numbers too)
		if(fighterCareers[winner].computedWinProbability < 92)
		{
			fighterCareers[winner].computedWinProbability = fighterCareers[winner].computedWinProbability + 8;
		}
		if(fighterCareers[loser].computedWinProbability > 8)
		{
			fighterCareers[loser].computedWinProbability = fighterCareers[loser].computedWinProbability - 8;
		}
	}

	//their win loss percent can be a good statistic if it is composed of enough data points to be meaningful.
	//here is where I wish I had more prob and stats background because I really don't know how many matches it would
	//take for this percent to be actually significant. I'm guessing at 10, so I decided to go with that. If both chars
	//have more than 10 matches under their belt, then lets include their win loss percents in our calculation.
	if(fighterCareers[player1Name].total >= 10 && fighterCareers[player2Name].total >= 10)
	{
		//get the difference between the two win percents. So if we had p1 with 50 and p2 with 75 the difference is 25.
		//yes I know ternaries are hard to read, but it's cleaner than a stupid one-line if statement. Just know that this
		//will return a positive amount that is the difference in win percent between the two.
		var winPercentDifference = fighterCareers[player1Name].winPercent > fighterCareers[player2Name].winPercent ? fighterCareers[player1Name].winPercent - fighterCareers[player2Name].winPercent : fighterCareers[player2Name].winPercent - fighterCareers[player1Name].winPercent;

		//multiply that difference by how confident we are (total number of matches), topping out at 100. So a number from 20 to 100.
		var confidenceScore = fighterCareers[player1Name].total + fighterCareers[player2Name].total > 100 ? 100 : fighterCareers[player1Name].total + fighterCareers[player2Name].total;

		var adjustment = Math.round((winPercentDifference) * (confidenceScore/100)/2);

		//make the actual adjustments to the players probabilities
		console.log('Proposed modifying win percent by +/- '+ adjustment);
		if(fighterCareers[player1Name].winPercent > fighterCareers[player2Name].winPercent)
		{
			fighterCareers[player1Name].computedWinProbability += adjustment;
			fighterCareers[player2Name].computedWinProbability += adjustment*-1;	
		}
		else
		{
			fighterCareers[player1Name].computedWinProbability += adjustment*-1;
			fighterCareers[player2Name].computedWinProbability += adjustment;			
		}
	}

	//find the winner name
	var projWinner = fighterCareers[player1Name].computedWinProbability > fighterCareers[player2Name].computedWinProbability ? player1Name : player2Name;

	//dream mode is 'intelligently making the stupid bet'. Because long shot bets have such high payouts they can be worth betting on 
	//if you have nothing to lose. Since you are always given 'bailout' cash if you end up with 0 or in the hole, it makes sense to 
	//bet on super long shots. If they win you get a TON of cash. If they lose you are just right back to where you started. Of course
	//that's up to the player though if they want to use that mentality so I made it optional. Also most players would only want to make stupid bets
	//if they have under a certain amount to keep from losing their fortune, and because at higher dollar values you can bet a large enough
	//percent of the total pot to still make good returns.
	if(dreamMode && saltyBucks < dreamModeDisabledAtAmount)
	{
		var winPercentDifference = fighterCareers[player1Name].computedWinProbability > fighterCareers[player2Name].computedWinProbability ? fighterCareers[player1Name].computedWinProbability - fighterCareers[player2Name].computedWinProbability : fighterCareers[player2Name].computedWinProbability - fighterCareers[player1Name].computedWinProbability;
		if(winPercentDifference > dreamModePercentThreshold)
		{
			$('#statusDiv').html('Bet on the dream!');
			 projWinner = fighterCareers[player1Name].computedWinProbability < fighterCareers[player2Name].computedWinProbability ? player1Name : player2Name;
		}
	}
	$('#statusDiv').html('Projected winner is ' + projWinner);
	return projWinner;
}

Great, now I could predict pretty confidently who was going to win and lose. But I was still short on data, and betting manually all the time was getting to be a pain. My bot could auto-bet but not know who was fighting, or I could bet manually and have to actually enter names to do it. At this point you are probably saying, ‘well, just extract the character names from somewhere, feed them into the formula, and be done with it!’ I wish it were that simple. The stream of the fight is an embedded Flash object, and the names of the characters do not appear anywhere in the page. The names are simply not available by any conventional means. It seriously seemed like the author went out of his way to make the names unavailable to prevent this kind of thing. I knew I’d have to solve that problem, but for the time being I needed to collect data. I settled on just having a stupid bot bet small amounts on someone at random so I could harvest that sweet, sweet result data.

Even with that decision it wasn’t totally easy. Because it’s an embedded Flash object, how would I know when the betting window is open? I’ve only got about 45 seconds from when betting opens to when it closes, so whatever I do has to be reasonably quick. I then realized that the status text below the video changes to ‘Betting is Now open’ when you can bet. I simply told my bot to watch that element for DOM changes: when the element changes, evaluate the text and figure out if it says betting is now open. If so, wait about 40 seconds (so I have time to enter a manual bet if I want to) and then, if no bet has been placed, enter one. Using that same technique I know when the fight starts, when it ends, and when payouts have been distributed. That ended up working out pretty well, though occasionally there seemed to be some server delays that prevented entering a bet if I was too close to the deadline.
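
The watcher boils down to a text check plus a delayed bet. A sketch (the timings, `placeBet`, and `hasManualBet` hooks are assumptions; the original used a DOM change handler, shown here with the equivalent MutationObserver):

```javascript
// Pure check: does the status text mean the betting window is open?
function isBettingOpen(statusText) {
  return /betting is (now )?open/i.test(statusText || '');
}

// Browser-only wiring (sketch): watch the status element for changes and,
// ~40 seconds after betting opens, place an automatic bet if the user
// hasn't already placed one manually.
function watchStatus(statusEl, placeBet, hasManualBet) {
  var observer = new MutationObserver(function () {
    if (isBettingOpen(statusEl.textContent)) {
      setTimeout(function () {
        if (!hasManualBet()) placeBet();
      }, 40000);
    }
  });
  observer.observe(statusEl, { childList: true, subtree: true, characterData: true });
}
```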

What my javascript bookmarklet looks like

Using the same kind of trick I was able to extract the player’s current saltybuck total, so I could bet a small percentage of the total instead of just some static amount. Things were coming together well. I could just leave the bot on all night and it would bet for me. There were one or two mornings I came back and it had won me over 100K (randomly, of course; it had no idea who it was betting on at this point). I built a nice little interface using jQuery UI that could be launched via my bookmarklet, and if I entered the names I could get some decent odds data. I even rigged up an autocomplete on the fighter names based on all the known fighters from the win/loss totals table. I added a few more fun little features: a hotkey combination to show and hide the window, and even a sound effect (‘Oh yeah’) if the bot wins a big amount of money (currently defined as over 10K, though I should probably change it to something like over 200% of your current total, topping out at around 50K). When I was actually paying attention and betting I was doing well, and if I walked away the bot would take over and place small bets to keep that sweet data stream coming in.
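
The percent-based bet sizing itself is simple; a sketch (the 1% rate and the min/max caps are made-up numbers, not what the bot actually used):

```javascript
// Bet a small percentage of the current saltybuck balance instead of a
// flat amount, clamped so the data-gathering bets stay cheap.
function computeBetAmount(balance, rate, minBet, maxBet) {
  var bet = Math.floor(balance * rate);
  return Math.max(minBet, Math.min(bet, maxBet));
}
```

For example, `computeBetAmount(100000, 0.01, 10, 5000)` bets 1000, while a nearly broke balance of 500 still bets the 10-buck floor.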

I knew that this was about as far as I could take the bot running as just a JavaScript bookmarklet. If I wanted more (centralized data so ELO and such didn’t have to be recalculated every fight, and potentially actually knowing who is fighting), I’d have to step out and really tread into unknown territory. I was going to need to somehow get a screenshot of who was fighting during the betting window, extract the names from the image, and feed that into some kind of optical character recognition (OCR) engine. Then I’d have to take those results and make them available via a web service, and modify the bot to reach out to that web service to trigger the reading and get the names. This couldn’t be done in the browser, so I was going to need to develop some kind of server mechanism. I’d also need about 5 pots of coffee.

The Server

I decided I’d tackle what I considered to be the easier part first, to keep my spirits up and keep me from quitting when I reached the part I knew would be most difficult (the OCR). The server had a fairly simple job to do in my mind. I needed to listen for a call from the client (since the client knew when the betting screen was open, it could make the callout, whereas the server would have no idea, because that monitoring functionality was still built into the client; I’d have to refactor this later). When it got the request, it would need to take a screenshot of the browser window, which would also have to be running on the server. Ideally it would extract just the names of the fighters and save those images. It would then trigger the OCR engine to read the files. When that was done, it would read the resulting data back to the requester (huh, now that I type that out it sounds kind of hard, but regardless it wasn’t really too bad). I decided the easiest and lightest-weight answer for a server would be a Node.js instance. I have some experience with Node and it’s quick to get running, so it seemed like the natural candidate.

After a bit of reading to get back up to speed on how to set up Node and getting my basic hello world running, I found a library that would allow Node to execute commands on the server (yeah, I know that’s dangerous, but this is all local, so whatever). I just rigged it up to listen for a specific page request; once it got that, it would run a batch file handling the screenshot, image processing, and OCR work. Once the batch file had run, it would read the contents of the two text files, also hosted on the server, that hold the names of the current fighters. Here is the Node code.

var express = require('express');
var sh = require('execSync'); //old synchronous shell-exec package from npm
var app = express.createServer(); //Express 2.x-era API (this was written in 2013)
var fs = require('fs');

var port = process.env.PORT || 80;
//configure static content route blah
app.configure(function(){
  app.use(express.methodOverride());
  app.use(express.bodyParser());
  app.use(express.static(__dirname + '/public'));
  app.use(express.errorHandler({
    dumpExceptions: true, 
    showStack: true
  }));
  app.use(app.router);
});

app.listen(port, function() {
  console.log('Listening on ' + port);
});

app.get('/getFighters', function(request, response){

	console.log('Request made to get fighter data');
	var result = sh.exec('cmd /C screenshot.bat');

	console.log('Command ran ' + result.stdout);
 	fs.readFile( 'public\\fighter1Name.txt', "utf-8", function (err, fighter1) {
		if (err) console.log( err );
		fs.readFile( 'public\\fighter2Name.txt', "utf-8", function (err, fighter2) {
		  if (err) console.log( err );
		  var fighters = new Object();
		  fighters.fighter1 = fighter1.trim();
		  fighters.fighter2 = fighter2.trim();

		  response.send(request.query.callback+'('+JSON.stringify(fighters)+');');
		});
	});
});

Not too bad, eh? As you can see, the results are wrapped using a JSONP-style callback, so this can be invoked from anywhere. Once that was up and running, I had to write the batch file that actually did all the hard work.
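
The bookmarklet side of that JSONP handshake could look roughly like this (the server base URL is an assumption, and `handleFighters` just has to match the `callback=` query parameter the server echoes back):

```javascript
// Sketch: register a global callback, then inject a <script> tag pointing
// at the Node endpoint. The server responds with
//   handleFighters({"fighter1":"...","fighter2":"..."});
// which the browser executes, handing us the names.
function requestFighters(serverBase, onFighters) {
  window.handleFighters = function (fighters) {
    onFighters(fighters.fighter1, fighters.fighter2);
  };
  var s = document.createElement('script');
  s.src = serverBase + '/getFighters?callback=handleFighters';
  document.body.appendChild(s);
}
```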

The Bat File

The Node server treats this as pretty much a black-box process: it just calls a batch file and expects results. Not that it really matters, but the execute process is async, so the server didn’t know when that process had completed (I ended up having a loop that attempts to read the contents until it succeeds; shitty, I know). It has no idea, of course, how the bat file does what it does, and honestly neither did I when I first started building it. I knew the bat file would have to take a screenshot, extract the names of the fighters from that screenshot, and invoke the OCR engine. At this point I knew I was at least going to use Tesseract for my OCR engine, and that ImageMagick (a suite of command-line tools for image processing) was likely going to be how I did the image processing. For capturing the screenshot I found a simple utility on Google Code called screenshot-cmd that would take a screenshot of the primary monitor. I figured I could then use ImageMagick to crop out the unneeded stuff (since the video is in the exact same place on my screen every time, I could use coordinate-based cropping). Then, with the images cleaned up, I could forward them on to Tesseract.

After a bit of messing around I managed to get the screenshot and get ImageMagick to extract just the names of the fighters from the betting-screen image. Later on I had a sudden moment of clarity and realized I could remove the background from the names if I just deleted everything that wasn’t the red color of the player 1 name or the blue color of the player 2 name (since they are always exactly the same colors). Also, I decided to archive the old captures so I’d have them to help train the OCR engine. The final batch script looks like this

@ECHO OFF

FOR %%I IN ("public\screens\*.png") DO (
  SET lmdate=%%~tI
  SETLOCAL EnableDelayedExpansion
  SET lmdate=!lmdate:~6,4!-!lmdate:~3,2!-!lmdate:~0,2! !lmdate:~11,2!-!lmdate:~14,2!
  MOVE "%%I" "public\screens\old\!lmdate!-%%~nxI"
  ENDLOCAL
)

::Take screenshot of primary monitor at full resolution
screenshot-cmd 0 0 1920 1080 -o public\screens\fighters.png

::ImageMagick shave off the left 478 pixels and the top 135 pixels to cleanup the image
convert -shave 478x135  public\screens\fighters.png public\screens\fighters.png

::ImageMagick remove the bottom and right borders
convert public\screens\fighters.png -gravity South  -chop  0x150  public\screens\fighters.png

::Now we have a screenshot with just the fighters. Now we have to extract the names of the fighters and put them in separate files

::Extract fighter1 name by cropping out an 800px X 40px swatch from the top of the image
convert public\screens\fighters.png -crop 800x40+60+0 public\screens\name1.png

::Remove all colors except for the red used by the font
convert public\screens\name1.png -matte ( +clone -fuzz 4600 -transparent #e3522d ) -compose DstOut -composite public\screens\name1.png

::Extract fighter2 name by cropping out an 800px X 40px swatch from the bottom of the image
convert public\screens\fighters.png -crop 800x40+200+618 public\screens\name2.png

::Remove all colors except for the blue used by the font
convert public\screens\name2.png -matte ( +clone -fuzz 4600 -transparent #2798ff ) -compose DstOut -composite public\screens\name2.png

::Feed the player names into tesseract for OCR scanning. Write results to two different text files, one for each fighter
tesseract public\screens\name1.png public\fighter1Name -l salty
tesseract public\screens\name2.png public\fighter2Name -l salty

The commands took a bit of time to get just right (what with having to find just the right offsets and messing with the color-removal fuzz factor). The final output is pretty damn good actually. Check this out.

[images: the extracted name1.png and name2.png captures]

All things considered I’d say those are some damn fine extractions from a screenshot of a flash video. Now all that was left is the final part, tackling the Tesseract OCR training process to teach it about this strange font.

Tesseract OCR

Tesseract is pretty much the premier freeware OCR engine; there really isn’t anything else that competes with it. It’s hard as hell to figure out and takes a ton of time to set up properly for new languages, but I had heard that when it works, it works pretty damn well. I knew next to nothing about OCR, so I knew tackling this was going to be a challenge. The basic outline breaks out like this

1) Gather samples of your new font. At least a few occurrences of every possible character.

2) Create a ‘box’ file, which is basically just a coordinate mapping of where each character starts and stops and what it represents. (Finding a functional tool for this part took forever, because it turns out I was using a bad image that caused them all to have problems or act very slowly. Pro tip: when saving your TIF file to feed into a box editor, if you are using Photoshop, discard the layer data. It makes the file way too big and slow to use.)

3) Train Tesseract using the box file

4) Generate the rest of the weird files it needs (whose purpose I still don’t really know).

5) Package all the files and see if your new language works.

[image: eng.salty.exp0 — the training image]

The shortcut method here is: create your training image with all your chars, use jTessBoxEditor to make your modifications to the box file, then use SerakTesseractTrainer to do the training and create the files. Honestly, if I had known about those two tools right off the bat, my life would have been a lot easier. Over half my battle was just trying to find which tools to use and getting them to work right.

Also, retraining it after I was able to remove the backgrounds from the names made it about a billion times more accurate. I would highly recommend that approach if you have the ability. Good training data makes all the difference; trying to train it with crummy data full of backgrounds and weird shit going on makes it next to impossible. On the right you can see what my training data looked like, and it ended up working out pretty well. It’s still lacking some numeric characters, but I’ll have to add those in later.

I was amazed to find it actually worked. The names were being read properly and written to the files. The Node server was grabbing the contents of the files and returning them to the requesting bot. The bot took the names, fed them into the scoring system, and placed bets accordingly. It was a beautiful symphony built from a total clusterfuck. I am almost sad now because I have solved my project. Sure, I can make it a little better, implement a database, maybe tweak the scoring engine some, but overall it’s been solved. All that’s left to do now is sit back and watch the salt roll in.

Later on I did a bit of refactoring, moving the calculation onto the server and out of the client (where it belongs). I also created an extension just for the server that would invoke the screen-reading process instead of accepting the request from a normal client (since I figured I might end up distributing the code, I didn’t want everyone’s clients telling my server to constantly re-read the screen). Eventually the client got dumbed down to just polling the server when it detected that bets were open, until it got back the fight odds, at which point it could set a suggested bet amount for the player. I also ended up adding a few other features to the client, like ‘dream mode’, wherein if the odds against a character were so lopsided as to make the payout on the favorite nominal but the payout for the underdog amazing, it would bet on the underdog in hopes of a huge payout. You could set variables like always betting in dream mode until you reached a certain threshold. There was also an all-in mode which would automatically bet all your money until a certain threshold, since payouts at lower levels of betting were always so minimal. This is what the ‘final’ version of the client ended up looking like.

saltyclient final

As a postscript to this story, to gather more data I ended up offering a trade to other players: if they could provide me their betting history data and enough of it was unique (meaning I didn’t already have the results of those fights, which I identified by timestamp), I would give them access to the tool. With their betting data added to mine I ended up with an accuracy rate of around 85%, which isn’t too bad. The overall results were somewhat disappointing though, because for whatever reason the SaltyBet community was really good at guessing as well, and the odds would end up so heavily stacked in the winner’s favor that my payouts were usually pretty small.
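The merge itself is simple; a sketch of deduplicating a contributed history against an existing one by timestamp (the record fields here are hypothetical, not the tool’s real format):

```javascript
// merge a contributed betting history into an existing one, keeping
// only fights whose timestamp hasn't been seen before
function mergeHistories(mine, theirs) {
  var seen = {};
  mine.forEach(function (fight) { seen[fight.timestamp] = true; });
  var added = theirs.filter(function (fight) {
    if (seen[fight.timestamp]) return false;
    seen[fight.timestamp] = true;
    return true;
  });
  // uniqueCount is how much genuinely new data the trade brought in
  return { merged: mine.concat(added), uniqueCount: added.length };
}
```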

Right now the Saltybot server isn’t running and the data is probably badly out of date, but hey if you want to download the source and get it running again, knock yourself out. You can download the source here

https://drive.google.com/file/d/0B04fc3zIG4iyMURsemloR040NFE/edit?usp=sharing

I don’t remember the exact setup steps, but I believe you’ll want to drop all the server files in a directory on your machine. Spin up a node.js console and launch core.js. Open up saltybet.com and keep it fullscreen. Then on your server, install the saltyBotServerExtension into Chrome. That should watch for fight changes, run the OCR process, and put the results into the public folder. You’ll want to set up a web server where the public folder is available for your client to get at. Then install the client extension on the machine you intend to use as your ‘betting’ machine and point it at your web server (yeah, you’ll probably have to modify the source; thankfully in Chrome you can just modify the source and load the unpacked extension). That should get you pretty close. If you have questions, feel free to ask and I’ll do what I can to help. I am interested in seeing where this goes, I’m just too lazy right now to do much with it myself. If there is interest maybe I’ll try to get it running again.


Floating/Sticky Headers For Visualforce PageBlockTable

So this is it. This is going to be the definitive guide for how to get floating headers on your Visualforce page block table. I know there are many approaches and lots of debate about how to do it, but what I’ve got here is likely the best, simplest way to do it. It’s a jQuery plugin that makes the headers of a page block table stick to the top of the table’s parent div. Check out a demo here

http://xerointeractive-developer-edition.na9.force.com/partyForce/floatingHeaders

You can download the plugin here.

https://www.box.com/s/lr73ibecfvo4bi0qzzbn

Upload it as a static resource, or just copy-paste the contents into your Visualforce page; either way is fine. Also, in your Visualforce page you’ll need to include the CSS class .floatingStyle and set its position to relative. So just

<style>      
.floatingStyle 
{ 
    position:relative; 
} 
</style>

To use it, simply put your pageBlockTable inside a div or apex:outputPanel (with layout set to block). Give that container a height. Invoke the plugin on the table either by class or id. So if my page block tables had the styleClass of ‘floatingHeaderTable’ I could invoke it this way.

    <script>
    $(document).ready(function() {
        $('.floatingHeaderTable').vfFloatingHeaders();
    });
    </script> 

and that’s it. You are good to go. Here is a full sample page.

Visualforce Page

<apex:page controller="floatingHeadersController">

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.min.js"></script>
    <script src="{!URLFOR($Resource.jquery_vfFloatingHeaders)}"></script>

    <style>
        .tableContainer
        {
            height:290px; 
            width: 100%;
            overflow: auto;
        }       
        .floatingStyle 
        { 
            position:relative; 
        } 
    </style>

    <script>
    $(document).ready(function() {
        $('.floatingHeaderTable').vfFloatingHeaders();
    });
    </script>   

    <apex:pageBlock >
        <apex:outputPanel styleClass="tableContainer" layout="block">
            <apex:pageBlockTable value="{!contactList}" var="item" title="Contact List" styleclass="floatingHeaderTable" >
                <apex:column value="{!item.firstname}"/>
                <apex:column value="{!item.lastname}"/>
                <apex:column value="{!item.email}"/>
                <apex:column value="{!item.phone}"/>
            </apex:pageBlockTable>
        </apex:outputPanel>
    </apex:pageBlock>
</apex:page>

Apex Class

public class floatingHeadersController 
{
    public list<contact> contactList
    {
        get
        {
          if (contactList == null)
          {
              contactList = [select firstname, lastname, email, phone from contact];
          }  
          return contactList;
        }
        set;
    }
}

I should totally mention that the bulk of this code came from the blog at

http://rajputyh.blogspot.com/2011/12/floatingfixed-table-header-in-html-page.html

I just wrapped it up and modified it a bit to work with page block tables (since their table headers didn’t originally have ids) and put it into a nifty plugin.


Lets Build a Tree (From Salesforce.com Data Categories)

Salesforce data categories. If you’ve had to code around them on the Salesforce.com platform, you are probably aware of how complex and how much of a pain they can be. If you haven’t worked with them much, you are fortunate 😛 They are essentially a way to provide categories for any sObject in Salesforce, and they are most frequently used with knowledge articles. The Apex calls, describes, and schema for them are unlike anything else in the Salesforce schema: categories are their own objects, and they can be nested to infinite depth. In short, they are complicated and take a while to really get your head around (I still don’t know if I really have). Thankfully I’ve done a bunch of the hard work and discovery so that you don’t have to. For this particular project, we are going to build a nifty tree-style selector that allows a user to select any data category for a given sObject type. You can then do whatever you want with that info. Yes, I know there are some built-in Visualforce components for handling data categories, but they aren’t super flexible, and this is just a good learning experience. In the end, you’ll have an interactive tree that might look something like this.

treeDemo

Word of Warning: I had to live-modify some of the code I posted below to remove sensitive information that existed in the source project. I haven’t used the EXACT code below, but something very, very close. So please let me know if something doesn’t quite work and I’ll try to fix up the code in the post here. The idea works, it’s solid, but there might be a rough syntax error or something.

Our application is going to consist of a Visualforce page that displays the tree, a component that contains the reusable tree code, and a static resource that contains the javascript libraries, CSS file, and images for the tree structure. Of course we will also have an Apex class that handles some of the heavy lifting of getting category data and returning it to our Visualforce page. We’ll use javascript/Apex remoting to communicate with that Apex class. First off, let’s grab the static resource and get that uploaded into your org. You can snag it here

https://www.box.com/s/04u0cd8xjtm0z84tbhid

Upload that, make it public, and call it jsTree. Next we’ll need our Apex class. It looks like this.

global class CaseSlaController
{
    //constructors for component and visualforce page extension
    public CaseSlaController() {}
    public CaseSlaController(ApexPages.StandardController controller) {}

    //gets category data and returns in JSON format for visualforce pages. Beware that since we end up double JSON encoding the return 
    //(once from the JSON.serialize, and another time because that's how data is returned when moved over apex remoting) you have to fix
    //the data on the client side. We have to double encode it because the built in JSON encoder breaks down when trying to serialize
    //the Schema.DescribeDataCategoryGroupStructureResult object, but the explicit call works.
    @remoteAction 
    global static string getCategoriesJson(string sObjectType)
    {
        return JSON.serialize(CaseSlaController.getCategories(sObjectType));
    }

    public static  list<Schema.DescribeDataCategoryGroupStructureResult> getCategories(string sObjectType)
    {

        //the describing of categories requires pairs of sObject type, and category name. This holds a list of those pairs.
        list<Schema.DataCategoryGroupSObjectTypePair> pairs = new list<Schema.DataCategoryGroupSObjectTypePair>();

        //list of objects to describe, for this app we only take 1 sObject type at a time, as passed into this function.
        list<string> objects = new list<string>();
        objects.add(sObjectType);

        //describe the category groups for this object type (e.g. KnowledgeArticleVersion)
        List<Schema.DescribeDataCategoryGroupResult> describeCategoryResult =  Schema.describeDataCategoryGroups(objects);

        //add the found category groups to the pair list.
        for(Schema.DescribeDataCategoryGroupResult s : describeCategoryResult)
        {
            Schema.DataCategoryGroupSobjectTypePair thisPair = new Schema.DataCategoryGroupSobjectTypePair();
            thisPair.setSobject(sObjectType);
            thisPair.setDataCategoryGroupName(s.getName());
            pairs.add(thisPair);            
        }

        //describe the categories recursively
        list<Schema.DescribeDataCategoryGroupStructureResult> results = Schema.describeDataCategoryGroupStructures(pairs,false);

        return results;
    }    
    private static DataCategory[] getAllCategories(DataCategory [] categories)
    {
        if(categories.isEmpty())
        {
            return new DataCategory[]{};
        } 
        else
        {
            DataCategory [] categoriesClone = categories.clone();
            DataCategory category = categoriesClone[0];
            DataCategory[] allCategories = new DataCategory[]{category};
            categoriesClone.remove(0);
            categoriesClone.addAll(category.getChildCategories());
            allCategories.addAll(getAllCategories(categoriesClone));
            return allCategories;
        }
    }
}

So there are three functions there and two constructors. The constructors are for later on, when we use this thing in a component and a Visualforce page, so don’t really worry about them. Next is getCategoriesJson; that is the remote function we will call with our javascript to get the category data. It just invokes the getCategories function, since that returns an object type that Salesforce can’t serialize with its automatic JSON serializer without blowing up (in my real app I had to use getCategories for another reason, hence why I didn’t just combine the two functions into one that always returns JSON). The last one is just a private function for spidering the data category description. Other than that, you can check out the comments to figure out a bit more about what it’s doing. In short, it describes the category groups for the given sObject type, creates DataCategoryGroupSobjectTypePair objects from those groups, describes those, and returns the huge complicated chunk.
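Because of that double encoding, the client gets back a JSON string whose quotes have been turned into HTML entities by the `{escape: true}` remoting option, so it has to be decoded before parsing. A sketch of the problem and the fix (the sample payload and this bare-bones decoder are illustrative only; the component later in the post uses jQuery to do the decoding):

```javascript
// what the client effectively receives: serialized JSON with quotes
// HTML-escaped by the remoting layer
var returned = '[{&quot;name&quot;:&quot;Products&quot;}]';

// minimal entity decoder, enough for this illustration
function htmlDecode(value) {
  return value
    .replace(/&quot;/g, '"')
    .replace(/&lt;/g, '<')
    .replace(/&gt;/g, '>')
    .replace(/&amp;/g, '&');
}

// decode first, then parse back into real objects
var fixed = JSON.parse(htmlDecode(returned));
```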

Alright, so we got the back end setup, let’s actually make it do something. For that we need our component and visualforce page. First up, the component. Wrapping this picker in a component makes it easy to use on lots of different visualforce pages. It’s not required but it’s probably a better design practice.

<apex:component Controller="CaseSlaController">
    <!-- Two parameters can be passed into this component -->
    <apex:attribute name="sObjectType" type="string" description="the sObject type to get data category tree for" />
    <apex:attribute name="callback" type="string" description="Name of javascript function to call when tree drawing is complete" />

    <!-- include the required libraries -->
    <link rel="stylesheet" href="{!URLFOR($Resource.jsTree, 'css/jquery.treeview.css')}" />
    <apex:includeScript value="{!URLFOR($Resource.jsTree, 'js/jquery.min.js')}" />
    <apex:includeScript value="{!URLFOR($Resource.jsTree, 'js/jquery.treeview.js')}" />

    <script>
        //put jQuery in no conflict mode
        var j$ = jQuery.noConflict();

        //object to hold all our functions and variables; keeps things organized and doesn't pollute the global scope
        var categorySelect = new Object();

        //invokes the getCategoriesJson function on the apex controller. Returns to the callback function with the
        //fetched data
        categorySelect.getCategoryData = function(sObjectType,callback)
        {
            Visualforce.remoting.Manager.invokeAction(
                '{!$RemoteAction.CaseSlaController.getCategoriesJson}', 
                sObjectType,
                function(result, event){
                   callback(result,event);
                }, 
                {escape: true}
            );          
        }    

        //as soon as the dom has loaded lets get to work
        j$(document).ready(function() {

            //first off, find all the data category data for the given sObject type.       
            categorySelect.getCategoryData('{!sObjectType}',function(result,event)
            {
                //the json data we get back is all screwed up. Since it got JSON encoded twice, quotes become the html
                //entity &quot; and such. So we fix the JSON and re-parse it. I know it's kind of hacky but I don't know of a better way

                var fixedJson = JSON.parse(categorySelect.htmlDecode(result));         

                //lets create the series of nested lists required for our tree plugin from the json data.
                var html = categorySelect.buildTreeHtml(fixedJson);                          

                //write the content into the dom
                j$('#categoryTree').html(html);              

                //apply the treeview plugin
                j$("#categoryTree").treeview({
                    persist: "location",
                    collapsed: true,
                    unique: true
                });  

                //if the string that was passed in for callback is actually representative of a function, then call it
                //and pass it the categoryTree html.
                if(typeof({!callback}) == "function")
                {
                    {!callback}(j$("#categoryTree"));                                               
                }
            });    
        });

        //function that is meant to be called recursively to build the tree structure html
        categorySelect.buildTreeHtml = function(category)
        {
            var html = '';     

            //iterate over the category data  
            j$.each(category,function(index,value)
            {
                //create list item for this item.
                html+='<li><a href="#" category="'+value.name+'" class="dataCategoryLink" title="Attach '+value.label+' SLA to Case">'+value.label+'</a>';

                //check to see if this item has any topCategories to iterate over. If so, pass them into this function again after creating a container
                if(value.hasOwnProperty('topCategories') && value.topCategories.length > 0)
                {
                    html += '<ul>';
                    html += categorySelect.buildTreeHtml(value.topCategories);                    
                    html +='</ul>';                 
                }   
                //check to see if this item has any childCategories to iterate over. If so, pass them into this function again after creating a container
                else if(value.hasOwnProperty('childCategories')  && value.childCategories.length > 0)
                {
                    html+='<ul>';                   
                    html += categorySelect.buildTreeHtml(value.childCategories);
                    html+='</ul>';
                }
                html += '</li>';
            });
            return html;                
        }

        //fixes the double-encoded JSON by replacing html entities with their actual symbol equivalents
        //ex: &quot; becomes "
        categorySelect.htmlDecode = function(value) 
        {
            if (value) 
            {
                return j$('<div />').html(value).text();
            } 
            else
            {
                return '';
            }
        }            
    </script>
    <div id="categoryTreeContainer">
        <ul id="categoryTree">

        </ul>
    </div>
</apex:component>
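The recursion in buildTreeHtml is easier to see in isolation. Here is a dependency-free equivalent run against a hypothetical, much-simplified category structure (real describe results carry far more fields than name/label, and the component’s version adds the class and title attributes too):

```javascript
// plain-JS version of the component's buildTreeHtml recursion: each
// category becomes an <li> with a link, and any nested categories
// (topCategories at the group level, childCategories below that)
// become a nested <ul> built by the same function
function buildTreeHtml(categories) {
  var html = '';
  categories.forEach(function (value) {
    html += '<li><a href="#" category="' + value.name + '">' + value.label + '</a>';
    var children = (value.topCategories && value.topCategories.length) ? value.topCategories
                 : (value.childCategories && value.childCategories.length) ? value.childCategories
                 : null;
    if (children) {
      html += '<ul>' + buildTreeHtml(children) + '</ul>';
    }
    html += '</li>';
  });
  return html;
}

// hypothetical decoded describe result, trimmed to the fields used
var sample = [{
  name: 'Products', label: 'Products',
  topCategories: [{ name: 'All', label: 'All', childCategories: [] }]
}];
```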

Now, finally, we need a Visualforce page to invoke our component and rig up our tree items to actually do something when you click them. We wanted to keep the component simple (just build the interactive tree) because different pages might want it to do different things. That is where that callback function comes in handy: the Visualforce page can invoke the component and specify a callback function to call once the component has finished its work, so we know when we can start manipulating the tree. Our page might look like this.

<apex:page sidebar="false" standardController="Case" showHeader="false" extensions="CaseSlaController">
    <c:categorySelect callback="knowledgePicker.bindTreeClicks" sObjectType="KnowledgeArticleVersion"/>

    <script>           
        var knowledgePicker = new Object();

        knowledgePicker.bindTreeClicks = function(tree)
        {
            j$('.dataCategoryLink').click(function(event,ui){
                event.preventDefault();
                alert('clicked ' + j$(this).attr('category'));
            }); 
        }                           
    </script>   
</apex:page>

We invoke the component, passing it a callback function name and the type of sObject we want to make the category tree for. We then create a function with the same name as the callback. Inside that function we simply attach an onclick event handler to the tree category links that sends us an alert of which one the user clicked. Of course we could then do anything we wanted: make another remoting call, update an object, whatever.

Anyway, I hope this was helpful. I know I was a bit frustrated at the lack of sample code for dealing with categories, so hopefully this helps some other developers out there who might be trying to do the same kind of thing. Till next time!

-Kenji/Dan


Javascript Console

So I just wrapped up another CloudSpokes challenge. I spent way too much time on this. Really it’s just a little $300 challenge, and I think I put 10 or so hours into it, but whatever, it was fun. I probably went overboard, but I think I made something actually kind of useful. First, before I get too deep into it, check out the challenge details.

http://www.cloudspokes.com/challenges/1999

The basic idea is that they wanted a way to run javascript on a web page, similar to Firebug or the Chrome developer tools. They also suggested adding a list of the user/developer-defined functions and maybe an auto-complete system to make entering the code easier. I wanted to take it a step further and build it all as pure javascript so that it could be turned into a bookmarklet. Delivered as a bookmarklet, you can inject this console into any webpage anywhere and start running functions on it; you don’t need to include any additional libraries or hooks, and you need NO access to the source code of the webpage, which is what really makes it cool.

It does this by using javascript to dynamically inject jQuery, jQuery UI, and Bootstrap (only if needed; it’s smart enough not to double-include libraries) and then builds the console by inserting it directly into the DOM. It uses some tricky looping and evaluation to find all existing functions, list them, and get their function bodies as well. It creates an input area where javascript code can be entered, and the results are displayed in a console using intelligent return-type handling. Using a bit of Bootstrap for autocomplete, buttons, and the collapsible side list view, it creates a simple yet powerful interface. As a neat side effect I found what is probably the end-all, be-all way to inject scripts that may or may not be present and rely on each other. I’m hoping to make a blog post out of that technique pretty soon. It uses some neat recursion and callbacks to work its magic, and the end result is very efficient and reliable.
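I haven’t written that loader post yet, but the core recursion can be sketched like this (the names here are mine, not from the console’s source). The list of urls is walked one at a time, and each step only starts after the previous load’s callback fires; in the browser, loadOne would create a script tag and call back from its onload handler:

```javascript
// load dependent scripts strictly in order: recurse through the list,
// advancing only when the current script's load callback fires
function loadInOrder(urls, loadOne, done) {
  if (urls.length === 0) { done(); return; }
  loadOne(urls[0], function () {
    loadInOrder(urls.slice(1), loadOne, done);
  });
}

// a browser loadOne might look like this (not runnable outside a page):
// function loadOne(url, cb) {
//   var s = document.createElement('script');
//   s.src = url; s.onload = cb;
//   document.body.appendChild(s);
// }
```

Injecting loadOne as a parameter also makes it trivial to skip a library that is already present: the injected function can just call back immediately.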

Anyway, check out the video for a better understanding/demo of what it can do.

http://www.screencast.com/t/jVUo5Qsh5 <- Demo video

To try it out, create a new bookmark and set its path to this (sorry I can’t make a bookmarklet link here for you; wordpress keeps killing the link). It might take a little bit of time to load on very complicated pages. It also supports pressing up and down in the input box to flip between previously entered commands.

javascript: (function(){var jsCode=document.createElement('script');jsCode.setAttribute('src','https://dl.dropbox.com/u/65280390/jsConsoleScript.js');document.body.appendChild(jsCode);}());

Building a mobile site on Salesforce site.com, with cool menu to mobile list code!

Mobile. Mobile mobile mobile. It seems like the only word you hear these days when it comes to technology. That, or social. Point being, if you don’t have a mobile site, most of the world will figure your company is way behind the times. Problem is, designing mobile websites sucks. So many different devices, resolutions, features, and capabilities. When it comes to making a website that works correctly across all browsers/devices/configurations, it’s worse than regular web design by a long shot. It can really be a nightmare even for the most skilled designers.

Thankfully jQuery Mobile is here to help. While not perfect (it’s definitely still getting some issues worked out), it makes creating mobile websites infinitely more bearable. It takes care of the styling for you, creating a nice interface and handling most of the junk you don’t want to deal with as far as writing event handlers, dealing with CSS adjustments, creating the various input widgets, etc. I’ve used it a fair amount, and after the initial learning curve I can safely say it’s way better than trying to do it all yourself.

Site.com is another technology offered by Salesforce that is supposed to make building websites easier. It is mostly used for small websites with limited interactivity (seeing as it doesn’t support sessions, and there is no server-side language access aside from a few minimalistic Apex connectors). Great for marketing websites, mini sites, etc. It makes it very easy for your non-technical team to create and edit content. It has a great WYSIWYG editor, various automated tools (such as a navigation menu generator, which we’ll talk about shortly), and some other goodies that generally make it a fairly competent CMS.

So here we are: we want to build a mobile site, and we want to use site.com to do it. We would also like our mobile site to take full advantage of the features of site.com, including the menu generator/site map. The idea here is that the same content can be used for both our mobile site and our regular site; I’m really hoping to utilize that ‘write once, run everywhere’ mentality that I love so much (I don’t care what all the native platform fans say, it can be done!). We’ll need to architect our site in a way that allows for this. That means keeping in mind that our content could be loaded on any kind of device. We’ll also want to keep things lightweight for our mobile friends, lest their little smartphones choke trying to handle our content. I’ve come up with a solution which I like pretty well and will outline below, but I’m not claiming it’s the best way by any means.

There are two basic approaches I’ve used for building things on site.com:

One is to have a single page which contains all the headers, footers, standard elements, etc. (I’ll call this the framework page), then use a bit of javascript to transform all the links into ajax links which load the content from the requested page into a div within that same page. By transforming the links with javascript, you ensure that non-javascript browsers don’t try to use ajax to load content, and your marketing team doesn’t have to worry about writing any javascript either. It’s also good for SEO, since the crawlers will load your page and be able to follow the links (they won’t run the javascript). Just select all links on the page with a certain class and enhance them (code for this below). When new content is loaded, we run that same script again to enhance all the new links, and the cycle continues. This is nice because the ajax loading is faster and looks slick. Also, if you are willing to go javascript-only (as in, you aren’t interested in graceful degradation for non-javascript clients, of which there really aren’t many), then your content pages can contain JUST the relevant content: no headers, footers, CSS, anything like that. You just grab the page, inject it into your framework page’s content area, and you’re done.

The problem with this approach is that since the detail pages have no styling, a user who links directly to one will just see plain text and images on a white page. That’s bad news unless you have some kind of auto-redirect script to get users back to the index page if they have loaded just a detail page. You’ll also have to worry about bookmarking, direct linking, the browser’s back button, and other such things. I have a post detailing how to deal with these at https://iwritecrappycode.wordpress.com/2012/07/06/experimental-progressively-enhance-links-for-ajax-with-bookmarking-and-back-button-support/ with the basic idea being that your ajax links cause a hash change in the URL. That hash change results in a unique URL that users can bookmark and share. Your site just needs to check the URL for any after-hash content on page load, and try to load the specified page into the content frame instead of your default page.
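A stripped-down sketch of those hash mechanics (the helper names and the 'index' default are hypothetical, not from the post linked above):

```javascript
// rewrite a path-style href into a hash link, so clicking it changes
// the url fragment instead of navigating away
function toHashHref(href) {
  return href.replace('/', '#');
}

// on page load, read the fragment back to decide which page to fetch
// into the content frame; fall back to a default page when there is none
function pageFromHash(hash) {
  return hash ? decodeURIComponent(hash).replace('#', '') : 'index';
}
```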

Option two is a little safer. Every page has all the headers and footers, and again you have a special div where the real content goes, and again you use javascript to ajax-enhance the links. When a page is requested, you fetch it via ajax, grab the content from just that div (on the fetched page), and inject it. That way, if javascript isn’t enabled, your link just functions like a regular link, taking the user to that page. You don’t have to worry about the user accidentally landing on a plain detail page without the headers, footers, and styles, because every page has them. If javascript is enabled, the link is enhanced and becomes an ajax-loading link: the requested page gets fetched via ajax, and the relevant content is extracted from the DOM and inserted into your framework page. Not as fast and clean as having just the content on your sub pages, but it’s a bit safer. I’m using this approach for now while I decide if I want to use the other.


Okay, so we’ve come this far. You’ve decided on a site architecture, created some content and are ready to make it mobile. For example, mine looks like this.

Capture

You can see I’ve got my main menu system with a few sub categories. Also, I have the directions sub menu minimized to make the image smaller, but it contains several entries.

First things first: you’ll have to set up your jQuery Mobile home page. Just find a basic tutorial online that explains how to get it up and running; there’s not much to it. A special meta tag, include the CSS and JS, create a home page div, and you are up and running. jQuery Mobile actually has this fairly interesting idea that all content is contained within a single page, making it more ‘app-like’. By default it uses ajax requests to load content, and just shows and hides the stuff relevant to what the user wants to see. So as a user clicks a link to load content, an ajax request fetches it, a new ‘page’ is created in your template, and the user’s view is shifted to it. But how do we build that navigation? We want it to be dynamic, so when someone from marketing creates a new page, it just shows up on your site. You also might want to use the built-in jQuery Mobile list view for it, since this is a simple site and list views provide easy navigation on mobile sites.
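For reference, a bare-bones jQuery Mobile starting page looks roughly like this (the library versions are just examples; use whatever current build you prefer, and the ajaxMenu class is the hook the script later in this post expects):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- the special meta tag: scale the page to the device width -->
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <link rel="stylesheet" href="http://code.jquery.com/mobile/1.3.2/jquery.mobile-1.3.2.min.css" />
  <script src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
  <script src="http://code.jquery.com/mobile/1.3.2/jquery.mobile-1.3.2.min.js"></script>
</head>
<body>
  <!-- jQuery Mobile treats each data-role="page" div as a 'page' -->
  <div data-role="page" id="home">
    <div data-role="header"><h1>My Site</h1></div>
    <div data-role="content" class="ajaxMenu">
      <!-- the site.com generated menu goes here -->
    </div>
  </div>
</body>
</html>
```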

Site.com, as we know, does include an automatic menu generator, but it just generates a lame unordered list or an ugly dropdown system. How can we use that to build our jQuery Mobile list view? Using the built-in list maker on the content above, it’s going to generate code that looks like this.

Capture

You can see it creates a div, inside of which is an unordered list. Each sub menu is another unordered list inside of a list element. It seems like we could use a little jQuery magic to spruce this list up and turn it into a jQuery Mobile list. For those who just want the functioning JS, copy and paste this into a JS file, upload it to site.com, and include it in your mobile index page. Make sure your mobile menu has a CSS class called ‘ajaxMenu’; that is how jQuery finds the menu to enhance.

$(document).ready(function () {
    console.log('Document ready fired');
    sfMenuTojQueryList();
    markupLinks();

    $( document ).live( 'pagecreate',function(event){
        markupLinks();
        setFooters();
    });

});

function sfMenuTojQueryList()
{
    //Special stuff for the mobile site. Enhance the navigation menu into a list view, and turn its links into ajax links
    $('.ajaxMenu a[href]').each(function(){

        if($(this).parent().children().length == 1)
        {
            $(this).addClass('ajaxLink');
        }
    });
    $('.ajaxMenu > ul').listview({
        create: function(event, ui) { 

        }
    });    
}

function markupLinks() {

    $('.ajaxLink').each(function (index) {
        if($(this).attr('href') != null)
        {
            $(this).attr('href', $(this).attr('href').replace('/', '#'));
        }
    });

    $('.ajaxLink').bind('click', function (event,ui) {
        event.preventDefault();
        loadLink($(this).attr('href'));        
    });
}

function loadLink(pageUrl) {

    console.log('Loading Ajax Content');
    var pageId = 'jQm_page_'+pageUrl.replace(/[^a-zA-Z 0-9]+/g,'');
    pageUrl = decodeURIComponent(pageUrl).replace('#','');

    console.log(pageUrl + ' ' + pageId);
    if($('#'+pageId).length == 0)
    {
        console.log('Creating New Page');
        $.get(pageUrl, function (html) {
            //in this case the content I actually want is held in a div on the loaded page called 'rightText'. If you are just loading
            //all your content you can just use $(html).html() instead of $(html).find("#rightText").html().
            $('body').append('<div id="'+pageId+'" data-role="page"><div data-role="header"><h2>'+pageUrl+'</h2></div><div data-role="content">'+$(html).find("#rightText").html()+'</div></div>');                                

            $.mobile.initializePage();

            $.mobile.changePage( '#'+pageId, { transition: "slideup"}, false, true);    

        }).error(function () {
            loadLink('pageNotFound');
        });
    }
    else
    {
        console.log('Changing to Existing Page #'+pageId);
        $.mobile.changePage( '#'+pageId, { transition: "slideup"} );    
    }

}

So here is what happens: when the page loads, it finds your menu and calls the jQuery Mobile listview plugin on it, turning it into a nifty list view that can be clicked. It looks like this now.

Capture
Each of those items can be clicked, at which time, if it has a sub menu, the contents of that sub menu are displayed. If the item is actually a link, the content is loaded via ajax, and a new jQuery Mobile page is created and injected into your main page, which it then changes to. If it finds that the page has already been loaded once, instead of fetching it again it just changes to it. It’s a pretty slick system that makes for a very fast website, since the content is loaded on the fly and pulled completely via ajax.
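One detail in loadLink worth calling out: the page id is derived from the url by stripping everything that isn’t alphanumeric, which is what lets a second click on the same link find the cached page instead of fetching it again. Pulled out as its own function, the derivation looks like this:

```javascript
// derive a stable DOM id for a fetched page: the same url always maps
// to the same id, so $('#'+pageId) can detect an already-loaded page
function pageIdFor(pageUrl) {
  return 'jQm_page_' + pageUrl.replace(/[^a-zA-Z 0-9]+/g, '');
}
```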

You now have a mobile version of your website with a dynamic hierarchy enabled menu system that can be totally managed by your marketing team. Cool eh?


New Cloudspokes Entry – Salesforce Global Search

Hey everyone,

I’m just wrapping up my entry for the CloudSpokes Salesforce global search utility challenge, codename Sherlock. I’m fairly pleased with how it turned out and figured I’d post the video for all to enjoy. I can’t post the code since this is for a challenge, but feel free to ask any questions about it.

http://www.screencast.com/t/GrYnfBlJFM

Features:

  • Fast operation using JS remoting. No page reloads. Ajax style searching.
  • Search any object. Not limited to a single object type per search
  • Smart formatting adjusts search result display based on data available about the object
  • Easy customization using callbacks and jQuery binds
  • Flexible, can be modified to suit many different requirements
  • Easy to use jQuery plugin based installation
  • Efficient: each search consumes only one database query
  • Reliable return type makes processing search results easy
  • CSS based design makes styling results quick and easy
  • Structured, namespaced code means a smaller memory footprint and less chance of collisions
  • Deployable package makes for easy install
  • Over 90% code coverage with asserts provides assurance of functionality

You can check out the full documentation and feature list here

https://docs.google.com/document/d/17-SUja_SO_Enhh8LrjzDMB7VIX6S87TmPr9yBW5z-yo/edit

I don’t know why exactly they wanted such a thing, but it was certainly fun to write!


Node.js, Socket.io, and Force.com canvas and you

So I got back from Dreamforce a week ago, and my head hasn't stopped spinning. So many cool technologies and possibilities; I've been coding all week playing with all this new stuff (canvas, node.js, socket.io, nforce, heroku, streaming api, etc). CloudSpokes conveniently also had a challenge asking us to write a force.com canvas application that used a node.js back end, so I wanted to take this chance to see if I could put what I had learned into practice. Turns out it's not too hard once you actually get your dev environment set up, get things uploading, and get all the 'paperwork' done. I wanted to leverage the Salesforce.com streaming API, Node.js, and Socket.io to build a real time data streaming app. I also wanted to use Force.com canvas to get rid of having to worry about authentication (honestly the best part about canvas, by a wide margin). You can see the end result here:

http://www.screencast.com/t/Qfen94pl (Note the animations don’t show too well in the video due to framerate issues with the video capture software).

You can also grab my source project from here

Demo Source

Getting Set Up

First off, a huge word of warning. This whole process is just what I've found to work from trial and error and reading a ton of shit. I have no idea if this is the recommended process or even a good way to do things. It did/does however work. This was my first ever node.js application, as well as my first time using canvas and only my second time using Heroku. So ya know, definitely not pro level here, but it's at least functional. Also, the actual idea for this application was inspired by Kevin O'Hara (@kevinohara80) and Shoby Abdi (@shobyabdi) from their streaming API session at Dreamforce. They are also the authors of the kick ass nforce library, without which this app would not be possible, so thanks guys!

So how can you get started doing the same? Well first of course get yourself setup with Heroku. Heroku is where we are going to store our project, it’s a free hosting platform where you can host node.js, python and java applications. So if you don’t have a Heroku account, go get one, it’s free.

You’ll also want to download the Heroku toolbelt. This is going to get us our tools for dealing with Heroku apps (heroku client), as well as testing our stuff locally (foreman), and working with git. You can grab it here. https://toolbelt.heroku.com/. On that page it walks you through creating your project. For a more in depth guide, check out https://devcenter.heroku.com/articles/nodejs. Get a node.js project created and move onto the next step.

So now I assume you have a basic node.js project on Heroku. Now to actually make it do stuff, we’ll need to install some libraries using NPM. Open a command prompt and navigate to your local project folder. Install express (npm install express), socket.io (npm install socket.io) and nforce (npm install nforce). This should add all the required libraries to your project and modify the package.json file that tells Heroku the shit it needs to include.

You'll also need a winter 13 enabled salesforce org to start building canvas apps, so go sign up for one here (https://www.salesforce.com/form/signup/prerelease-winter13.jsp). Depending when you are reading this you may not need a prerelease org; winter 13 may just be standard issue. Whatever the case, you need at least winter 13 to create canvas apps. As soon as you get your org, you'll also probably want to create a namespace. Only orgs with namespaces can publish canvas apps, which you may want to do later. Also, having a namespace is just a good idea, so navigate to the setup->develop->packages section and register one for yourself.

In your org, you'll need to configure your push topic. This is the query that will provide the live streaming data to your application. Open a console or execute anonymous window, and run this:

PushTopic pushTopic = new PushTopic();
pushTopic.ApiVersion = 23.0;
pushTopic.Name = 'NewContacts';
pushTopic.Query = 'SELECT FirstName, LastName, Email, Id FROM Contact';
insert pushTopic;
System.debug('Created new PushTopic: '+ pushTopic.Id); 

This will create a live streaming push topic in your org for all new incoming contacts. You could change the query to whatever you want of course, but for the purpose of this example, let's keep it simple.

Next, you'll want to configure your canvas application. In your org, go to setup->create->apps. There should be a section called connected apps. Create a new one. Give it all the information for your Heroku hosted application. Permissions and callbacks here are a bit unneeded (since canvas will be taking care of the auth for us via a signed request) but should be set properly anyway. The callback url can be just the url of your application on Heroku. Remember only https is accepted here, but that's okay because Heroku supports https without you having to do anything. Pretty sweet. Set your canvas app url to the url of your Heroku application and set the access method to post signed request. That means when your app is called by canvas, it's going to be via a post request, and in the post body is going to be an encoded signed request that contains an oAuth key we can use to make calls on behalf of the user. Save your canvas application.

The actual code (there isn’t much of it)
So we have everything configured now, but no real code. Our app exists, but it doesn't do shit. Let's make it do something cool. Open up your node server file (it's probably called something like web.js, or maybe app.js if you followed the guide above; it's going to be whatever file is specified in the Procfile in your project). Paste this code. You'll need to modify the clientId and clientSecret values to match your canvas application. They are the consumer key and consumer secret respectively. I honestly don't know if you'd need to provide your client secret here since the app is already getting passed a valid oAuth token, but whatever, it can't hurt.

var express = require('express');
var nforce = require('nforce');
var app = express.createServer() , io = require('socket.io').listen(app);

var port = process.env.PORT || 3000;
//configure static content route blah
app.configure(function(){
  app.use(express.methodOverride());
  app.use(express.bodyParser());
  app.use(express.static(__dirname + '/public'));
  app.use(express.errorHandler({
    dumpExceptions: true, 
    showStack: true
  }));
  app.use(app.router);
});

app.listen(port, function() {
  console.log('Listening on ' + port);
});

io.configure(function () { 
  io.set("transports", ["xhr-polling"]); 
  io.set("polling duration", 10); 
});

var oauth;

var org = nforce.createConnection({
      clientId: 'YOUR CANVAS APPLICATION CONSUMER KEY',
      clientSecret: 'YOUR CANVAS APPLICATION CLIENT SECRET',
      redirectUri: 'http://localhost:' + port + '/oauth/_callback',
      apiVersion: 'v24.0',  // optional, defaults to v24.0
      environment: 'production'  // optional, sandbox or production, production default
});

//on post to the base url of our application
app.post('/', function(request, response){
    //get at the signed_request field in the post body
    var reqBody = request.body.signed_request;   

    //split the request body at any encountered period (the data has two sections, separated by a .)
    var requestSegments = reqBody.split('.'); 

    //the second part of the request segment is base64 encoded json. So decode it, and parse it to JSON
    //to get a javascript object with all the oAuth and user info we need. It actually contains a lot of 
    //data so feel free to do a console.log here and check out what's in it. Remember console.log statments 
    //in node run server side, so you'll need to check the server logs to see it, most likely using the eclipse plugin.   
    var requestContext = JSON.parse(new Buffer(requestSegments[1], 'base64').toString('ascii'));
    
    //create an object with the passed in oAuth data for nForce to use later to subscribe to the push topic
    oauth = new Object();
    oauth.access_token = requestContext.oauthToken;
    oauth.instance_url = requestContext.instanceUrl;
    
    //send the index file down to the client
    response.sendfile('index.html');

});


//when a new socket.io connection gets established
io.sockets.on('connection', function (socket) {
      
    try
    {
      //create connection to the NewContacts push topic.
      var str = org.stream('NewContacts', oauth);
    
      //on connection, log it.
      str.on('connect', function(){
        console.log('connected to pushtopic');
      });
    
      //on stream error, relay it to the client. Note that socket.emit needs an
      //event name as its first argument, so we use a custom one here.
      str.on('error', function(error) {
         socket.emit('streamError', error);
      });
    
      //as soon as our query has new data, emit it to any connected client using socket.emit.
      str.on('data', function(data) {
         socket.emit('news', data);
      });
    }
    catch(ex)
    {
        console.log(ex);
    }
    
});

Now you’ll also need the index.html file that the server will send to the client when it connects (as specified by the response.sendfile(‘index.html’); line). Create a file called index.html, and put this in there.

<!DOCTYPE html>
<html>
    <head>
        <title>New Contacts</title>
        <meta name="apple-mobile-web-app-capable" content="yes" />
        <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent" />
        
        <link href='http://fonts.googleapis.com/css?family=Lato:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
        
        <link rel="stylesheet" href="/reveal/css/main.css">
        <link rel="stylesheet" href="/reveal/css/theme/default.css" id="theme">    
        
        <script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
    
        <script src="/socket.io/socket.io.js"></script>
    
        <script>
          $.noConflict();    
          var socket = io.connect();

          socket.on('news', function (data) {
            jQuery('.slides').append('<section><h2><a href="https://na2.salesforce.com/'+data.sobject.Id+'">'+data.sobject.FirstName+' '+data.sobject.LastName+'</a></h2><br>'+data.sobject.Email+'<br/></section>');
            
            Reveal.navigateNext();
          });



        </script>    
    </head>
    <body>

        <div class="state-background"></div>
        
        <div class="reveal">
            <div class="slides"> 
                <section>New Contacts Live Feed</section>
            </div>

            <!-- The navigational controls UI -->
            <aside class="controls">
                <a class="left" href="#">◄</a>
                <a class="right" href="#">►</a>
                <a class="up" href="#">▲</a>
                <a class="down" href="#">▼</a>
            </aside>

            <!-- Presentation progress bar -->
            <div class="progress"><span></span></div>                
        </div>


            
            <script src="/reveal/lib/js/head.min.js"></script>
            <script src="/reveal/js/reveal.min.js"></script>    
            <script>
                
                // Full list of configuration options available here:
                // https://github.com/hakimel/reveal.js#configuration
                Reveal.initialize({
                    controls: true,
                    progress: true,
                    history: true,
                    mouseWheel: true,
                    rollingLinks: true,
                    overview: true,
                    keyboard: true,
                    theme: Reveal.getQueryHash().theme || 'default', // available themes are in /css/theme
                    transition: Reveal.getQueryHash().transition || 'cube', // default/cube/page/concave/linear(2d)
    
                });

        
            
                    
            </script>            
    </body>
</html>

We are also going to need the CSS reveal framework to create the awesome slideshow. Grab it https://github.com/hakimel/reveal.js. In your Heroku project create a folder called public. In there create a folder called reveal. In that folder dump the css, js, lib and plugin folders from reveal. So it should be like root_folder->public->reveal->js->reveal.js for example. There is probably a more ‘git’ way to include the reveal library, but I don’t know what it is. So for now, moving folders around should work.

Now use git to push this all up to Heroku. I’d really highly recommend using the Heroku plugin for eclipse to make life easier. There is an install guide for it here https://devcenter.heroku.com/articles/getting-started-with-heroku-eclipse. However you do it, either from eclipse or command line, you gotta push your project up to Heroku. If you are using command line, I think it’s something like “git add .” then “git commit” then “git push heroku master” or something like that. Just use the damn eclipse plugin honestly (right click on your project and click team->commit, then right click on the root of your project and click team->push upstream).

If your app pushes successfully and doesn't crash, it should run when called from Canvas now. Canvas calls your Heroku application using a post request. The post request contains the signed request data, including an oAuth token. We grab that oAuth token and store it in our node.js app for making subsequent api calls. Node.js returns the index.html file to the client. The client uses socket.io to connect back to the server. The server has a handler that says: upon a new socket.io connection, create a connection to the NewContacts push topic in salesforce, using the oAuth token we got before. When a new event comes in over that connection, use Socket.io to push it down to the client. The client handler says: when a new socket.io event happens, create a new slide, and change to that slide. That's it! It's a ton of moving parts and about a million integrations, but very little actual code.

Enjoy and have fun!


Displaying and Caching Salesforce Attachment Images in Sites

This time around we are going to be talking about images: how to store them, how to query for them, display them and cache them in Salesforce, using javascript remoting. We'll be building a simple application using jQuery, Salesforce and Apex to query for attachments, display them and cache them to reduce load times and overhead.

Abstract:
First off, I'm having a bit of a hard time organizing all my thoughts on this topic. It's kind of big, so please forgive me if I skip around a bit, and feel free to ask for clarifications in the comments. So let's say you are building an application to be hosted on Salesforce. Your application is going to need to be publicly accessible (so you are going to be using sites), and it is going to need to show images that may change frequently and hence would be configured by some non developer types. Your application is going to show all the products you have available, along with pictures of said products.

There are of course many ways you can go about storing your images and relating them to your products, but the most straightforward option is to use the notes and attachments feature. That would allow users to easily manage the pictures related to each product without having to go to some central picture repository, or building any additional relationships between objects or URLs. The problem of course is that attachments don't have a publicly accessible URL. You can view them from within Salesforce, but you don't have any way to display them on a site. This could be an issue. Not so fast!

Images as Data
You know those images you uploaded to Salesforce via the attachments feature exist somewhere on Salesforce servers. We also know that Salesforce hates file storage and loves databases. It should come as little surprise then that the attachments are actually stored in a table as blob data. That data can be queried for just like any other data. Another little known thing is that in HTML, while the img tag normally has its src attribute set to a URL, it can in fact accept base64 encoded image data if you specify the data type (src="data:image/png;base64,..."). Perhaps we can put all this information together into something useful. Yes, yes we can.
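To make that concrete, here's a minimal sketch of the data URI trick. The helper name and the sample base64 string are mine, purely for illustration; they aren't part of the app we're about to build.

```javascript
// Given base64 encoded image data (as our Apex controller will return later),
// build a data URI string that an <img> tag's src attribute will accept.
// The sample base64 data below is truncated and purely illustrative.
function base64ToDataUri(encodedData, mimeType) {
    return 'data:' + (mimeType || 'image/png') + ';base64,' + encodedData;
}

var src = base64ToDataUri('iVBORw0KGgoAAAANSUhEUg', 'image/png');
// an <img> element's src can then be set to this string and the browser
// will render the image without ever requesting a URL from the server
```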

Getting The Image Data
So go ahead and get a visualforce page and controller set up. I’m calling mine productList and productListController respectively. Let’s get the code for our controller in place. Copy and paste this.

global class productListController
{

    //get all the products in the org along with their attachments.
    @remoteAction
    global static remoteObject getProducts()
    {
        remoteObject returnObj = new remoteObject();

        try
        {

            list<Product2> products = [select
                                           Name,
                                           ProductCode,
                                           Description,
                                           Family,
                                           isActive,
                                           (SELECT Attachment.Name, Attachment.Id FROM Product2.Attachments)
                                       from product2
                                       where isActive = true];
            returnObj.sObjects = products;
        }
        catch(Exception e)
        {
            returnObj.success = false;
            returnObj.message = 'Error getting products';
            returnObj.data = 'Error Type: ' + e.getTypeName() + ' ' + e.getCause() + ' ' + ' on line: ' +e.getLineNumber(); 
        }

        return returnObj;       
    }

    //gets a single attachment (photo) by id. The data is returned as a base64 string that can be plugged into an html img tag to display the image.
    @RemoteAction
    global static remoteObject getAttachment(id attachmentId)
    {   
        remoteObject returnObj = new remoteObject();
        try
        {
            list<Attachment> docs = [select id, body from Attachment where id = :attachmentId limit 1]; 
            if(!docs.isEmpty())
            {
                returnObj.data = EncodingUtil.base64Encode(docs[0].body); 
            }    
        }
        catch(exception e)
        {
            returnObj.success = false;
            returnObj.message = e.getMessage();
            returnObj.data = 'Error Type: ' + e.getTypeName() + ' ' + e.getCause() + ' ' + ' on line: ' +e.getLineNumber();        
        } 
        return returnObj;    
    }   

    global class remoteObject
    {
        public boolean success = true;
        public string message = 'operation successful';
        public string data = null;
        public list<sObject> sObjects = new list<sObject>();
    }    
}

As you can see it's a pretty simple little controller. We have one method that gets a listing of all the products and the Ids of the associated attachments using a subquery. That prevents us from having to run another query to get the attachment Ids. The second method takes a specific attachment id and returns an object with the base64 encoded version of the image. That's what I was talking about earlier: you can query for an attachment and get its raw binary/blob data, then base64 encode it for transfer from the controller back to the requesting page. With that you can get the image data out of Salesforce and into your public application.

This does introduce another problem though: caching. Normally images are cached by the browser when they are loaded. The browser uses the image's URL to create a cached copy, so the next time it needs to load that image it can just pull it off the hard drive instead of across the internet. The problem with base64 images is they can't really be cached easily. By the time you have enough data to find the image in the cache, you've already loaded the whole thing, totally defeating the entire point of the cache. How can we fix this? Caching is too important to just skip in most applications, but yet we need to use base64 encoded images in our app.

Local Storage
With HTML5 we now have something called local storage. Basically it lets us store just about anything we want on the user's computer for use at a later time: cookies on steroids. Also, whereas cookies had to be small little text files, local storage gives us much more flexibility with size. We can leverage this to build our own cache.
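The API itself is tiny. Here's a quick sketch of the calls we'll be using; the key and value below are made-up examples, and the in-memory fallback is only there so the same calls work outside a browser (where localStorage doesn't exist), not something the real page needs.

```javascript
// In a browser, window.localStorage provides setItem/getItem/removeItem.
// Outside a browser we fall back to a minimal in-memory stand-in with the
// same surface, purely to illustrate the calls.
var storage = (typeof localStorage !== 'undefined') ? localStorage : (function () {
    var data = {};
    return {
        setItem: function (key, value) { data[key] = String(value); },
        getItem: function (key) { return (key in data) ? data[key] : null; },
        removeItem: function (key) { delete data[key]; }
    };
})();

storage.setItem('00PExampleAttachmentId', 'iVBORw0KGgo'); // cache base64 image data by attachment id
var cached = storage.getItem('00PExampleAttachmentId');   // read it back on a later page view
```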

Here is the game plan. We'll run our query to find all the products. We'll loop over each product we find and create an img tag that contains the ID of the image/attachment that needs to go there. After that, we'll loop over each image tag and populate it with the image. We'll check to see if we have a local storage item with the ID of the image/attachment. If so, we'll load that data from the local cache. If not, we'll make a remoting call to our Apex getAttachment method, cache the result with local storage, and then load the data into the img tag. Here is what that looks like.

<apex:page controller="productListController">
    <head>
    <title>Product List</title>
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.js"></script>

    <script>
        $(document).ready(function() {
            getProducts(function(){
                $('.cacheable').each(function(index){
                    var img = $(this);
                     getImage($(this).attr('id'),function(imageId,imageData){
                         $(img).attr('src', 'data:image/png;base64,'+imageData);
                     });
                });               
            });            
        });  

        function getProducts(callback)
        {
                    Visualforce.remoting.Manager.invokeAction(
                        '{!$RemoteAction.productListController.getProducts}',
                        function(result, event)
                        {
                            if (event.status && (result.success == true || result.success == 'true')) 
                            {    
                               var html='';
                               for(var i = 0; i<result.sObjects.length;i++)
                               {
                                   var imageId = 'default image id here';
                                   if(result.sObjects[i].hasOwnProperty('Attachments'))
                                   {
                                        imageId = result.sObjects[i].Attachments[0].Id;
                                   }
                                   html += '<li><img class="cacheable"  id="'+imageId+'">'+result.sObjects[i].Name+'</li>';

                               }
                               $('#products').html(html);
                               callback();
                            } 
                            else
                            {
                                $("#responseErrors").html(event.message);
                            }
                        }, 
                        {escape: true});                   
        } 

        function getImage(imageId,callback)
        {
             var imageData;

              if ( localStorage.getItem(imageId))
              {   
                console.log('Getting image from local storage!');
                imageData = localStorage.getItem(imageId);
                callback(imageId,imageData);    
              }
              else 
              {
                   console.log('Getting image remote server!');
                    Visualforce.remoting.Manager.invokeAction(
                        '{!$RemoteAction.productListController.getAttachment}',
                        imageId,
                        function(result, event)
                        {
                            if (event.status && (result.success == true || result.success == 'true')) 
                            {    
                                 imageData = result.data;
                                 localStorage.setItem(imageId,imageData);      
                                 callback(imageId,imageData);    
                            } 
                            else
                            {
                                $("#responseErrors").html(event.message);
                            }
                        }, 
                        {escape: true});                   
              }      
        } 
    </script>
    </head>

    <body>
            <ul  id="products"></ul>
    </body>            
</apex:page>

So if you are familiar with jQuery and callbacks, it's pretty easy to make sense of what's going on here. Once the DOM loads we call the getProducts function. getProducts uses remoting to run the getProducts apex method. It iterates over the results and creates a list item for each product, as well as that empty img tag with the id attribute we talked about earlier. It also assigns each img tag the cacheable class so we can easily iterate over them once we are done. Once the looping and list building is complete, we call the callback function. Since remoting requests are asynchronous, we need to use callbacks when we only want to call one function after another has completed. Callbacks are a bit beyond the scope of this article, but just know that if we didn't use them, the $('.cacheable').each() loop would run before the list had finished being populated.
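To see why the callback matters, here's a tiny standalone illustration. The names are made up for the example (fetchProducts stands in for a remoting call), not taken from the page above.

```javascript
// setTimeout stands in for the asynchronous remoting round trip: the
// response arrives some time after the call returns.
function fetchProducts(callback) {
    setTimeout(function () {
        var products = ['Widget', 'Gadget'];
        callback(products); // only now is it safe to use the data
    }, 10);
}

fetchProducts(function (products) {
    // this runs once the "response" arrives
    console.log('Got ' + products.length + ' products');
});
// any code placed here runs immediately, BEFORE the products exist,
// which is exactly why the .each() loop has to live inside the callback
```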

So anyway, getProducts finishes running and creating the list. Then comes the loop that uses jQuery to find any element that has the 'cacheable' class. For each element it finds, it calls the getImage() function, passing in the Id of that element. getImage is where the caching magic happens. It checks to see if a local storage item exists with the id it gets passed. If so, it calls back with that content; if not, it queries Salesforce for an attachment with that id, creates a local storage item for it, and then again returns that content. The loop takes the returned content and sets the src attribute of the img element with the base64 encoded data and boom! We have an image.

There you have it. Using Salesforce attachments to house images, using Apex and jQuery to query for them and display them, and HTML5 local storage to cache them. Pretty cool eh? I could write more, but I’m tired and I don’t feel like it. Hit me with questions if ya got em.


Experimental: Progressively enhance links for ajax with bookmarking and back button support!

Hey All,

So this is a bit of an 'in progress/experimental' kind of post. I think many of us have run into this dilemma. We want to make an awesome ajax powered website with fast loading pages and a neat interface. Problem is, ajax content sucks for search engine optimization, can be tricky to get bookmarking working with, and of course the back and forward buttons cause problems too. All this seems like it might make ajax a bad idea for navigation, but it's just too cool to give up. So how can we resolve all these issues and use the awesome ajax navigation we want to? We address each challenge one by one (or just skip to the bottom and copy and paste my full code example. Whatever works for you).

Progressive Enhancement (AKA dealing with shitty browsers and search engines)
My first attempts at Ajax navigation were simply to replace the href attributes of my links with javascript function calls. This really is the most straightforward approach, but the most flawed as well. Anyone who doesn't have javascript support, or search engines trying to crawl your site, won't be able to follow your links. Your site won't get indexed, and you'll be abandoned by the search engine gods. Also, if you are using any kind of CMS (such as site.com from Salesforce, as I am) the links created will be standard links. Your website people would have to call on you all the time to change their links, if it's even possible! The answer to all of these problems is progressive enhancement: use javascript to transform your regular links into javascript ajax links. This ensures that those people/bots not using javascript can still browse your site in the traditional manner. So for this example, my regular link might look like

<a href='contactUs' title='contactUs' class='ajaxLink'>Contact Us</a>

Pretty simple standard link. However I've added a class to it that will easily allow me to select it with some jQuery magic later on. Now we need some javascript to turn that link from a plain jane href into a sexy ajax link. Something like this oughtta do the trick.

    $('.ajaxLink').each(function(index){
        $(this).attr('href', $(this).attr('href').replace('/','#'));
    });
    
    $('.ajaxLink').bind('click', function() {
            loadLink($(this).attr('href').replace('#','/'));
    });  

(Yes I know it’s a bit sloppy with the replace statement. With the CMS we are using it’s a flat hierarchy, so I don’t need to worry about multiple slashes in the URL. Also I’m purposely leaving it a little less than maximally elegant to increase readability for my readers. I know I could consolidate the two loops.)

What's happening here is that we are using jQuery to modify every link that has the ajaxLink class, replacing the initial slash (which all my links will have) with a # sign. That # is magical. It's called a hash mark, and its original use was to mark bookmarkable locations in your document. You click the link with the #, you go to that location in the same document designated by the #. The # and its content are never sent to the server; they exist entirely client side (not that that really matters right now though). So when a user clicks it, the URL changes in their browser, but it doesn't cause a page reload. You hear that? Let me repeat. IT CHANGES THE URL, BUT DOES NOT CAUSE A PAGE RELOAD. That's important. The second part binds a function so that when you click our link it calls a function called loadLink, which expects to receive a valid URL (relative to the current document), so we need to flip the hash back to being a slash (I guess we could probably leave the slash out and just remove the # but whatever). We now have a system that will leave functional links for those without javascript and transform them into ajax links for those who do. Sweet.

Bookmarking and unique page urls (The magic of the hash)
You may ask why even bother with the hash at all if we are just flipping it back to a slash. The reason is that since it causes the URL to change, the user now has something they can bookmark. It also gives each page a unique URL with which to access it. As the user is navigating around your site, if they end up at some buried 3 level deep page but the URL hasn’t changed at all, they have no idea where they really are. They don’t have a bookmarkable link, or one they can share with their friends. Of course each page does have its own unique URL (that’s how search engines and non javascript browsers will get to them) but your ajax enabled users won’t know that without the hash. With the function we wrote above, regular links now act as javascript, and since the links have a # in front of them, the browser treats them as anchors. The URL changes when the link is clicked, but no page reload is performed. This is a good thing. But wait, just because the hash is in the URL that doesn’t mean it’s really doing anything yet. If someone bookmarks your page with the hash in it, but you don’t have any handler for it, nothing really happens. When our page loads we need to check and see if there is a hash in the URL. If so, load the page indicated by it; if not, just load your default page. That functionality looks a bit like this.

$(document).ready(function()
{
    var hashMark = getHash();
    if(hashMark.length > 0)
    {
        loadLink(hashMark);
    }
});

function getHash() 
{
          var hash = window.location.hash;
          return hash.substring(1); // remove #
}

Pretty simple. All you are doing is saying when the document loads, see if there is a hash mark. If so, load the link indicated by it (by passing the hash mark content to the loadLink function). This works great. Now you can have bookmarkable links that actually work. But the back button is still broken….

Dealing with the back button

Man, I love jQuery. Every time I have some crazy issue to deal with, it’s got my back. Like if me and jQuery were in a bar and some big biker dude was trying to hassle me, jQuery would like tap on his shoulder, the biker would turn around and jQuery would just knock like all his teeth out with one right hook. I’d then buy jQuery a drink and we’d talk about how much mootools sucks (just kidding, I don’t know anything about it). Anyway, where I’m going with this is that something that could be really hard to do, jQuery makes really easy for us. What we need to do to get the back button to work is to detect when the hash in the URL changes. When a user clicks back or forward using your links, the only thing that is going to change is that hash mark content. Nothing gets sent to the server. There is no get/post request going on here. Many hackey approaches are out there, from disabling the back button to overriding its behavior. Thankfully we aren’t savages. We have an elegant solution. It looks like this.

          $(window).bind('hashchange', function() {
              var hashMark = getHash();
              if(hashMark.length > 0)
              {
                 loadLink (hashMark);  
              } 
          });

Just that easy. A topic that has stumped top web developers for years, all wrapped up in 7 lines. This just says bind a function to the hashchange event. When the hash changes, get it and pass it to the loadLink function. Boom. Done.
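(One caveat worth hedging on: the hashchange event only exists in reasonably modern browsers, roughly IE8 and up. If you need to support something older, you can poll the hash yourself. A rough sketch; makeHashWatcher is my own name for it, not anything from jQuery:)

```javascript
// Returns a checker that invokes onHashChange only when the hash actually changed.
function makeHashWatcher(onHashChange) {
    var lastHash = null;
    return function (currentHash) {
        if (currentHash !== lastHash) {
            lastHash = currentHash;
            onHashChange(currentHash);
        }
    };
}

// Wiring sketch (not run here; needs a browser):
// var check = makeHashWatcher(function (hash) { if (hash.length > 0) loadLink(hash); });
// setInterval(function () { check(getHash()); }, 100);
```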

Loading the content (The grand finale)

So we are just about home now. We have progressive enhancement, bookmarking/link sharing ability, and even back/forward functionality. But now we need to actually load the content, and there is one last issue to deal with. Since all of your pages contain all the content/styles/scripts needed to be seen on their own (again, for the non ajax users), if you try and load an entire page when you click the link you are going to end up with recursive pages nesting inside each other, duplicate script errors, all kinds of crazy shit. So you need to leave the full pages intact for your non ajax users, but still be able to extract just the content you want from them to display on your page. Here we are going to use a little bit of jQuery’s find magic to extract just the content we want.

function loadLink(link)
{
     try
     {
         $("#leftText").html('');
         $('#loader').show();
         
         $.get(link, function(html) {
               $('#loader').hide();
               $("#leftText").html($(html).find("#leftText").html());
         });
     }
    catch(ex)
    {
        console.log(ex);
        $('#loader').hide();
    }
}

So here is what’s happening: the function loadLink expects to receive a valid URL fragment to load. It’s going to blank out my content area (which is called leftText) and then show an ajax loading spinner. jQuery is going to create a get request for the link, and with the result it’s going to extract the content from its leftText div and insert it into this page’s leftText div. Since every page is structured basically the same, it works pretty slick. That’s it. You’re done! Of course these scripts need some refining, error handling, and edge case handling, but I’ll leave that to the reader. The hard shit is done, what do you want me to do, your whole job for you? XD Below is the entire script. Enjoy!

$(document).ready(function () {

        markupLinks();

        $(window).bind('hashchange', function () {
            var hashMark = getHash();
            if (hashMark.length > 0) {
                loadLink(hashMark);
            }
        });

        var hashMark = getHash();
        if (hashMark.length > 0) {
            loadLink(hashMark);
        }
});



function markupLinks() {
    $('.ajaxLink').each(function (index) {
        $(this).attr('href', $(this).attr('href').replace('/', '#'));
    });

    $('.ajaxLink').bind('click', function () {
        loadLink($(this).attr('href').replace('#', '/'));
    });
}

function loadLink(link) {
    try {
        $("#leftText").html('');
        $('#loader').show();

        $.get(link, function (html) {
            $('#loader').hide();
            $("#leftText").html($(html).find("#leftText").html());
        });
    } catch (ex) {
        console.log(ex);
        $('#loader').hide();
    }
}

function getHash() {
    var hash = window.location.hash;
    return hash.substring(1); // remove #
}

Apex Captcha with Javascript Remoting and jQuery

So at one time or another, we’ll likely all have to create a public facing form to collect data. We will also find out about 2 seconds afterwards that it is getting spammed to hell. To stop the flood of crap, we have reCaptcha. An awesome little utility that will prevent bots from submitting forms. You already know what captcha is though, that’s probably how you found this post, by googling for apex and captcha. First off, there is already an awesome post on how to do this by Ron Hess (Here), but his approach is a bit complicated and visualforce heavy. Of course, being kind of anti visualforce and wary of the complexities of properties and all that, I made my own little approach. So here we go.

This is assuming you already signed up with reCaptcha. You can go here and sign up for recaptcha (yes you can just enter force.com as the domain)
After that, of course, add an entry for google to your remote sites in the admin setup under security. Disable protocol security.
Then create your visualforce page, and apex class. I called my class utilities, since this is kind of a re-usable function and I wanted to keep it generic.

Now put this crap in your controller. Also, your controller needs to be global (to use javascript/apex remoting)

@RemoteAction
    global static boolean validCaptcha(string challenge, string response)
    {
      boolean correctResponse = false;
      string secret = 'your recaptcha secret key here. Maybe make this into a custom setting?';
      string publicKey = 'your recaptcha public key here. Maybe make this into a custom setting?';
      string baseUrl = 'http://www.google.com/recaptcha/api/verify'; 

      string body ='privatekey='+ secret +  '&remoteip=' + remoteHost() + '&challenge=' + challenge + '&response=' + response + '&error=incorrect-captcha-sol';
      
      HttpRequest req = new HttpRequest();   
      req.setEndpoint( baseUrl );
      req.setMethod('POST');
      req.setBody ( body);
      try 
      {
        Http http = new Http();
        HttpResponse captchaResponse = http.send(req);
        System.debug('response: '+ captchaResponse);
        System.debug('body: '+ captchaResponse.getBody());
        if ( captchaResponse != null ) 
        {  
            correctResponse = ( captchaResponse.getBody().contains('true') );
        }          
       
      } 
      catch( System.Exception e) 
      {
         System.debug('ERROR: '+ e);
      }                             
      return correctResponse;
    }

    global static string remoteHost() 
    { 
        string ret = '127.0.0.1';
        // also could use x-original-remote-host 
        try
        {
            map<string , string> hdrs = ApexPages.currentPage().getHeaders();
            if ( hdrs.get('x-original-remote-addr') != null)
            {
                ret =  hdrs.get('x-original-remote-addr');
            }
            else if ( hdrs.get('X-Salesforce-SIP') != null)
            {   
                ret =  hdrs.get('X-Salesforce-SIP');
            }
        }
        catch(exception e)
        {
        
        }
        return ret;
    }

Ok, great, now your controller is ready. You just need to pass it the right info and it will tell you if the captcha response is right or wrong. Let’s get a visualforce page set up to do that.

<apex:page controller="utilities" standardStylesheets="false" sidebar="false"  >

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>


<script>
$(function() {

    $( "#validateButton" ).click(function(){
        
        validCaptcha(function(valid){
            if(valid)
            {
                $('#validationResultDiv').html('Valid Captcha!');
                
                //Do whatever here now that we know the captcha is good.
            }
            else
            {
                $('#validationResultDiv').html('Invalid Captcha Entered');
            }
           
        });           
    });
});

function validCaptcha(callback)
{
    var challenge = document.getElementById('recaptcha_challenge_field').value;
    var response = document.getElementById('recaptcha_response_field').value;

    utilities.validCaptcha(challenge,response, function(result, event)
    {
        if(event.status)
        {
           callback(result);
        }
    }, {escape:true});
}

</script>

<div id="captchaEnter" title="Form Submission Validation">
    <center>
    <script type="text/javascript" src="https://www.google.com/recaptcha/api/challenge?k=YOUR PUBLIC KEY GOES HERE DONT FORGET IT"></script>
    <noscript>
       https://www.google.com/recaptcha/api/noscript?k=YOUR_PUBLIC_KEY
     </noscript>  
     <div id="validationResultDiv"></div>   
     <button id="validateButton" class="inline">Submit</button>
       
     </center>
</div>


</apex:page>

Boom! Just that easy. Hook up an event handler to the submit button that runs the validCaptcha function. It will get the proper values and send them to the apex class, which sends them to reCaptcha to verify. Once an answer comes back, it is passed into the callback function, which can then run whatever action you require. Don’t forget to replace the placeholder public key in the script line above. Have fun!


jQuery UI Checkbox better feedback

Hey all,
This is just a quick snippet for any people out there googling how they can make jQuery UI checkboxes provide better feedback to users. One weakness is that the default checkbox in jQuery UI just looks like a button. It doesn’t really indicate to the user that it is in fact a check box. I was developing an application and watching the end user attempt to use it. They skipped right over the checkbox because they thought it was a button! Even worse, after I told them it was a checkbox, they couldn’t keep track of whether it was actually on or off, because the change in style isn’t exactly obvious. So anyway, I knew I needed to somehow give the user more hints that it was in fact an item they could interact with. My fix uses stock UI icons that change based on the checked state to help the user know exactly what’s up. Just jam this in the onload portion of your script and the change should be pretty easily apparent.

$( "input[type=checkbox]" ).button({ icons: {primary:'ui-icon-circle-minus'} });
    $( "input[type=checkbox]" ).click(function(){
        if($(this).is(':checked'))
        {
            $(this).next().children('.ui-button-icon-primary').addClass("ui-icon-circle-check").removeClass("ui-icon-circle-minus");
        }
        else
        {
            $(this).next().children('.ui-button-icon-primary').addClass("ui-icon-circle-minus").removeClass("ui-icon-circle-check");
        }
    });

Or for a full example page

<html>
<head>
<link rel="stylesheet" href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.17/themes/redmond/jquery-ui.css" type="text/css" media="all" />

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js" ></script>
<script src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js" ></script>

<script>
$(function() {
    $( "input[type=checkbox]" ).button({ icons: {primary:'ui-icon-circle-minus'} });
    $( "input[type=checkbox]" ).click(function(){
        if($(this).is(':checked'))
        {
            $(this).next().children('.ui-button-icon-primary').addClass("ui-icon-circle-check").removeClass("ui-icon-circle-minus");
        }
        else
        {
            $(this).next().children('.ui-button-icon-primary').addClass("ui-icon-circle-minus").removeClass("ui-icon-circle-check");
        }
    });
});
</script>
</head>
<body>
<form name="testForm">

<input type="checkBox" id="myCheckbox"><label for="myCheckbox">Click me Beyatch!</label>

</form>
</body>
</html>

Anyway, hopefully this helps someone. Feel free to of course expand on this example and let me know if you are able to make it more efficient.
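Along the "make it more efficient" line, one possible tweak: listen for the change event instead of click (so keyboard toggles update the icon too) and delegate from the form via jQuery 1.7's .on(), so one handler covers every checkbox, including ones added later. A sketch under those assumptions; iconForChecked and wireCheckboxIcons are my own helper names, using the same ui-icon-circle-* classes as above:

```javascript
// Maps the checked state to the corresponding jQuery UI icon class.
function iconForChecked(isChecked) {
    return isChecked ? 'ui-icon-circle-check' : 'ui-icon-circle-minus';
}

// Not executed here; needs jQuery UI buttons already applied on a page.
function wireCheckboxIcons() {
    $('form').on('change', 'input[type=checkbox]', function () {
        $(this).next().children('.ui-button-icon-primary')
            .removeClass('ui-icon-circle-check ui-icon-circle-minus')
            .addClass(iconForChecked($(this).is(':checked')));
    });
}
```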



Ask Kenji: Cross domain ajax requests?

I was just killing some time, chilling out after karate, and this message popped in my inbox.

Hi Kenji,

I have read some articles about Salesforce in your bolg. So I have a question want to ask you, I think maybe you can give me some advices.

I want to ues jquery.ajax method to invoke the apex class.
These jquery codes are written in a HTML page not a visualforce page. I have some method to get the access_token based on the OAuth 2.0.
Then I follow your article (https://iwritecrappycode.wordpress.com/2011/07/08/salesforce-rest-api-musings/) create a apex to listen my request. I use curl to test this class and successful.
So, I think I can use jquery.ajax do the same thing.

I post the same question on force.com boards, you can see the detail at there.(http://boards.developerforce.com/t5/Apex-Code-Development/Ues-jquery-ajax-to-invoke-apex-class/td-p/394713)

Do you have experience on this?

Thank you!

A valid question. I feel like I might have touched on it before, but hey no harm in writing about it again. It’s a common situation, and one with probably more than one solution. Below is my approach. Take it or leave it.

First off, as far as I know you can’t invoke a rest resource with pure javascript. The cross domain security issue just doesn’t allow for it. The only way to do cross domain ajax stuff is by tricking the browser into loading the remote resource as if it was a script resource, since those can be loaded from anywhere. This technique in jQuery is called jsonP. The problem with this is that you cannot set headers, include authorizations, or anything else that you do with a more complex http request. It’s a simple GET to the url, and that’s it. REST resources typically require an authorization header to be set, and need to support POST, PATCH, and PUT, along with just GET. So most REST resources, including the ones you can make in Salesforce, can’t be accessed directly via javascript. If someone can prove me wrong, I love you.
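To make that mechanic concrete, here's roughly the trick jQuery performs for you with jsonP, hand-rolled. The `callback` parameter name and all function names here are my assumptions; the server has to cooperate by wrapping its JSON in a call to the named function:

```javascript
// Builds the request URL, appending the callback parameter jsonP relies on.
function buildJsonpUrl(baseUrl, callbackName) {
    var separator = baseUrl.indexOf('?') === -1 ? '?' : '&';
    return baseUrl + separator + 'callback=' + encodeURIComponent(callbackName);
}

// Not run here; needs a browser. Injecting a script tag is the whole trick:
// script tags may load from any domain, so the response executes as code
// and hands its data to our temporary global callback.
function jsonp(baseUrl, handler) {
    var callbackName = 'jsonp_cb_' + new Date().getTime();
    window[callbackName] = function (data) {
        delete window[callbackName];
        handler(data);
    };
    var script = document.createElement('script');
    script.src = buildJsonpUrl(baseUrl, callbackName);
    document.getElementsByTagName('head')[0].appendChild(script);
}
```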

So what are we to do? The method that follows is what I’ve been doing when I need a pure javascript solution. It’s not the most elegant, but it works. Here is what you have to do (this method will also get around having to use REST services, or oAuth). First, set up a visualforce page with your apex class as the controller. Wrap the return data in the callback function provided via the jQuery get request, and print the results out. Host the visualforce page on a publicly accessible salesforce site (don’t forget to set permissions on the page and class to allow the public profile user access). jQuery will get the response, pass the data to the inline handler function, and you can process the results as you need.

<apex:page showHeader="false" sidebar="false" standardStylesheets="false" contentType="application/x-JavaScript; charset=utf-8" controller="jsonp_controller">{!returnFunction}</apex:page>

Your controller will look something like this

public class jsonp_controller
{
    public string returnFunction{get;set;}
    
    public jsonp_controller()
    {
        //get the parameters from the get/post request and stash em in a map
        map<string,string> params  = ApexPages.currentPage().getParameters();
        
        //set your data to return here (must be valid JSON for jQuery.getJSON to parse it)
        string returnData = '"blah"';
        
        if(params.containsKey('callback'))
        {
            returnFunction = params.get('callback') + '(' + returnData + ');';
        }
    }
    
    @isTest
    public static void test_jsonp_controller()
    {
        //simulate a request that includes a callback parameter
        PageReference pageRef = new PageReference('/apex/jsonp_getData');
        pageRef.getParameters().put('callback','myCallback');
        Test.setCurrentPage(pageRef);
        
        jsonp_controller controller = new jsonp_controller();
        system.assertEquals('myCallback("blah");', controller.returnFunction);
    }
}

And finally your page that actually makes the request would look like this

<html>
<head>
    <title>Cross domain ajax to request to salesforce</title>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.5.2/jquery.min.js"></script>
    <script>
    var url = "http://yourSite.force.com/jsonp_getData";

    function loadData()
    {
        jQuery.getJSON(url+'?callback=?',function(data)
        {
            jQuery('#results').html(data);
        });

    }

    $(document).ready(function() {
        loadData()
    });
    
    </script>
</head>
    <body>
    <div id="results">
    Your remotely fetched data will get loaded here
        </div>
    </body>
</html>

Remember, your visualforce page that serves the content must be publicly available, that means hosting it on a force.com site. Please note I wasn’t able to actually test the above code, because my org is freaking out on me right now (seriously, it’s doing some weird stuff), but it should be pretty close to accurate. Anyway, I hope this helps some people out there.

PS: I knew this topic seemed familiar. It’s because i wrote about it before!
Salesforce SOQL Query to JSON


Reliably injecting jQuery and jQuery UI with callback!

Hey all,
So this is kind of a cool thing. Sometimes you end up needing to inject jQuery into a page (like with advanced custom buttons in Salesforce) or in other circumstances where you can write scripts, but you don’t have direct access to the source doc. Some of these times you want to include jQuery, along with jQuery UI and its CSS. Most of us know you can inject a script tag into the document head to load the code, but how do you know when it’s loaded? How do you make sure you only load the UI library after the core library has loaded? Well worry no more, as I have an awesome javascript function here to reliably inject jQuery and the UI and then call a function of your choosing. Here ya go!

function loadJQuery(callback)
{
    try
    {
        if(typeof jQuery == "undefined" || typeof jQuery.ui == "undefined")
        {
            var maxLoadAttempts = 10;
            var jQueryLoadAttempts = 0;
            //We want to use jQuery as well as the UI elements, so first lets load the stylesheet by injecting it into the dom.
            var head= document.getElementsByTagName('head')[0];
            var v_css  = document.createElement('link');
            v_css.rel = 'stylesheet'
            v_css.type = 'text/css';
            v_css.href = 'http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/themes/redmond/jquery-ui.css';
            head.appendChild(v_css);

            //Okay, now we need the core jQuery library, lets fetch that and inject it into the dom as well
            var script= document.createElement('script');
            script.type= 'text/javascript';
            script.src= 'https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js';
            head.appendChild(script);
                    
            var checkjQueryLoaded = setInterval(function()
            {             
                if(typeof jQuery != "undefined")
                {
                    //Okay, now we need the core jQuery UI library, lets fetch that and inject it into the dom as well
                    var script= document.createElement('script');
                    script.type= 'text/javascript';
                    script.src= 'https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.js';
                    head.appendChild(script);
                    window.clearInterval(checkjQueryLoaded);
                }
                else if(maxLoadAttempts < jQueryLoadAttempts)
                {
                    window.clearInterval(checkjQueryLoaded);
                }
                jQueryLoadAttempts++;
            },300);
            
            jQueryLoadAttempts = 0;        
        
            var checkLoaded = setInterval(function()
            {             
                if(typeof jQuery != "undefined" && typeof jQuery.ui != "undefined")
                {
                    window.clearInterval(checkLoaded);
                    callback(true);
                }
                else if(maxLoadAttempts < jQueryLoadAttempts)
                {
                    window.clearInterval(checkLoaded);
                    callback(false);
                }
                jQueryLoadAttempts++;
            },500);
        }
        else
        {
            //jQuery and jQuery UI are already loaded, so just invoke the callback right away
            callback(true);
        }
    }
    catch(exception)
    {
        callback(false);
    }
}

Then you can invoke it and have a callback like this

loadJQuery(function(loadSuccess){
    if(loadSuccess)
    {
	//Do your jQuery stuff here. Basically you can think of this as a replacement for your 
	//document.onReady code
        $(document.getElementsByTagName('body')[0]).append("<div id=infoNotice title='Success'>jQuery and jQuery UI loaded!</div>"); 
        $( "#infoNotice" ).dialog({ modal: true});  
    }
    else
    {
        alert('Couldn\'t load jQuery :(');    
    }
});
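As an aside, the polling above is the lowest-common-denominator approach. Browsers fire a load event on injected script tags (old IE needed onreadystatechange instead), so you could chain the injections off those events rather than polling. A sketch under that assumption, with my own function names and the same Google CDN URLs as above:

```javascript
// Injects one script tag and invokes onLoad once the browser has fetched it.
function injectScript(src, onLoad) {
    var script = document.createElement('script');
    script.type = 'text/javascript';
    script.src = src;
    script.onload = onLoad;
    document.getElementsByTagName('head')[0].appendChild(script);
}

// Loads jQuery, then the UI on top of it, then fires the callback.
function loadJQueryChained(callback) {
    injectScript('https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js', function () {
        injectScript('https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.js', function () {
            callback(true);
        });
    });
}
```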

Have fun!


Cloudspokes Simple Timer & Timecard System for Salesforce

Hey all,
Well another week, another Cloudspokes challenge. Sadly it seems the judges were not impressed by my last submission, my jQuery google maps salesforce mashup, so let’s hope this week goes better. This time around we have a Timer/Timecard system that should allow users in Salesforce to track their interactions with any record during the day, which all rolls up and aggregates to a daily timecard. There are some validations that prevent a user from racking up too many hours (based on a field in their profile), having more than one timecard running, or playing with submitted timecards.

The actual link to the challenge is here http://www.cloudspokes.com/challenges/1358

This time I also made two videos. One that highlights the functionality, and another that is a quick tech overview of how the thing works.

See it in action!

See how it works!

If there are any questions I’d be happy to talk about how I built this, but other than that, I think the videos do a decent job of covering the high points. If they don’t like this one, well I give up. If I don’t place, I’ll be releasing the source code and installable package link. Anyway, wish me luck!


Salesforce Custom Calendar with jQuery and Visualforce

Hey all,

I know I’ve been promising a new calendar for a while, and I’m sorry it’s taken so long. I didn’t quite know how in depth I wanted to go, and how much stuff I should build. I finally just decided to release a nice simple framework for other developers to build on. This is based on the super awesome excellent jQuery fullCalendar plugin by Adam Shaw. What this allows you to do is create full calendar records (a custom object). Each record represents a calendar. Each calendar has a source object, a start and end field, and a list of detail fields. When the calendar is loaded, it then queries the specified object for all records with a start date and end date falling in the visible range of the calendar. When an event is clicked a popup box appears with further information that is configurable on the fullcalendar record.
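Under the hood the plumbing is roughly: fullCalendar asks for the events in the visible range, the controller queries the configured source object, and the records get mapped into fullCalendar's event format. A sketch of that mapping step; the field names, CalendarController, and getEvents here are placeholders of mine, not the package's actual API:

```javascript
// Maps raw records to fullCalendar event objects using the configured field names.
function toCalendarEvents(records, startField, endField, titleField) {
    var events = [];
    for (var i = 0; i < records.length; i++) {
        var record = records[i];
        events.push({
            id: record.Id,
            title: record[titleField],
            start: record[startField],
            end: record[endField]
        });
    }
    return events;
}

// Wiring sketch (not run here): feed fullCalendar an events function that
// queries via apex remoting, then hands back the mapped list.
// $('#calendar').fullCalendar({
//     events: function (start, end, callback) {
//         CalendarController.getEvents(start, end, function (records) {
//             callback(toCalendarEvents(records, 'StartDate__c', 'EndDate__c', 'Name'));
//         });
//     }
// });
```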

All the more configuration that is needed to create a calendar

Sample popup info when event is clicked


Click here for a demo (go to December 2011 to see some sample events).

You can grab the unmanaged package here

Or just grab the raw project and source from here (first time hosting a file on box.net, we’ll see how this goes).

Anyway, I hope this helps some people who are looking for a simple calendar system, or one to build on. I’m happy to review suggestions and ideas, but I can’t commit to getting anything done. Hope ya dig it!


Salesforce, google maps, and jQuery fun

Another Cloudspokes challenge entry submission. This one was for the challenge http://www.cloudspokes.com/challenges/1345. Basically the idea is to get sales data from Salesforce, plot it on a google map using geocoding without creating duplicate map points, and allow a user to click one of the points to see all the sales data for it. The tricky part here is that the data comes from different objects based on what country you are filtering on (oh yeah, it has to support multiple countries). So data for the United States comes from a custom object called Sales Orders, while data for Japan and Germany comes from opportunities. It had to be easily expandable to more countries in the future as well, with an easy way to set the data source. It was recommended to use address tools (an application from the app exchange that has prepopulated lists of countries and their states, along with some other data) for country and state data, but then there was some confusion because address tools is a for pay app with a 14 day trial. What is a developer to do?

I decided to try and make the best of both worlds. Using a flag in the javascript you can tell the controlling class whether to try and pull country and state data from address tools, or return a hard coded set of data (which was said to be an acceptable alternative in the comments of the challenge). The bummer here is that the org does at least have to have the address tools objects, otherwise the class won’t compile. The nice thing is my installable package does include the objects and fields, so while it doesn’t have all the data that a fully functional addressTools install would have, it should at least install and not error. If you do have a functional addressTools install then no need to worry at all. The application will just work, because it defaults to attempting to pull its data from there.

To solve the issue of pulling data from different objects with different field names, I decided to create a wrapper object. A simple class that contains only the data needed to plot the address on the map. So whether the data originally comes from Sales Orders or opportunities, both paths end up returning a list of salesData objects (which is what I ended up calling my wrapper class). I created two separate methods (though they probably could have been consolidated into one, it would have been a bit messy) for getting data from either object. The correct method is called by another, which gets invoked by the user interface. Something like

1) User picks a country.
2) Javascript uses apex remoting to call getSales(string formData).
3) Deserialize the form data from a url query string into a map of string to string (key to value).
4) Find the useOpp key in the deserialized data (this got set by the javascript in the application before the request was sent).
5) Call the buildQueryFilter method and pass it the form data. This method evaluates the data passed in the form and creates a SOQL where condition that will filter the records as the user has requested.
6) If useOpp is true call the getOpportunitySales() method. If not, call the getSalesOrderSales() method. Both methods return a map of address (string) to salesData objects, using the filter created above.
7) Return the map of addresses to salesData objects to the javascript to be plotted on the map.
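Hedging heavily on names here (getSales, salesData, and the url-encoded form format are from the write-up above, but MapController, plotOnMap, and the field names are my placeholders), the client side of those steps looks roughly like this:

```javascript
// Serializes a plain object into the url query string the controller deserializes.
function serializeForm(fields) {
    var parts = [];
    for (var key in fields) {
        if (fields.hasOwnProperty(key)) {
            parts.push(encodeURIComponent(key) + '=' + encodeURIComponent(fields[key]));
        }
    }
    return parts.join('&');
}

// Not run here; needs Visualforce remoting on a page. Sends the filter data,
// gets back the map of address -> salesData, and plots each point.
function fetchSales(country, useOpp) {
    var formData = serializeForm({ country: country, useOpp: useOpp });
    MapController.getSales(formData, function (result, event) {
        if (event.status) {
            for (var address in result) {
                plotOnMap(address, result[address]); // hypothetical plotting helper
            }
        }
    });
}
```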

Those few parts were really the trickiest part of this challenge. I feel creating the wrapper object was probably the slickest solution; it even allows for other potential data sources in the future, and easy expandability to return more data to the front end if desired. I’ll be honest and say a little bit of my code is redundant because of a feature I added at the very last moment, so I end up deserializing the form data twice, which I should really only need to do once, but it’s a short string of data so it’s not a big deal. I’m also not 100% sure the application is safe from SOQL injection. You could probably get the application to error by passing junk data with firebug or something, but I doubt you could make it do anything besides just error. I mean SOQL is select only anyway, and the filters it runs through and the way the query string gets built are pretty solid. So I am pretty sure at worst an attack could just get the application to toss some errors for their instance of it. Nothing that should be able to bring the app down, especially with governor limits in place.

As usual, I can’t release the source code myself until I have lost, or Cloudspokes gives me the okay. They generally host all code on their github anyway, so in that case I’ll update this post with the link to it.

Anyway you can see the video here: http://www.screencast.com/t/cLUc7dqpHEkC
Or play with the Demo App!


Salesforce Siteforce user configurable ajax links

So I’ve been doing a big project recently, using the new Salesforce siteforce builder. For those who are unaware siteforce is basically a content management tool that allows regular non developer users to develop and manage websites, in theory anyway. Of course any website of significant complexity/usefulness is going to require a developer to at least make some CSS and get some templates in place for the end user. One of the biggest things the users would do is edit content and links. Always with the updating of links, I tell ya. Problem is siteforce links are all just regular HREF links. In this ajax powered age, who wants that? I mean we want fast loading, partial page reloads, ajax baby! So how do I let a non coding user create ajax links to fetch the content and inject it into the page? Simple, you let them make links as normal, and use some javascript to modify them.

1) User creates standard HREF link with HREF pointing to desired page.
2) Have some javascript modify the links at runtime to remove the HREF attribute, and replace with an onclick function
3) onclick function fetches the content that was found in the original HREF and injects it into the page where desired.

My implementation of this idea looks like this.

<script type="text/javascript">
$(document).ready(function(){

    // Convert every link with the ajaxLink class into an ajax-loading link.
    // Capture the original HREF, neutralize it, and wire up a click handler.
    $('.ajaxLink').each(function() {
        var linkTarget = $(this).attr('href');
        $(this).attr('href', '#');
        $(this).click(function(){
            loadContent(linkTarget, 'news_content');
            return false;
        });
    });

    // Show/hide the loading indicator around every ajax request.
    jQuery.ajaxSetup({
        beforeSend: function() {
            $('#loadingDiv').show();
        },
        complete: function(){
            $('#loadingDiv').hide();
        },
        error: function() { alert('Error loading page'); }
    });

});

// Fetch the content at contentPath and fade it into the element
// whose id is contentTarget.
function loadContent(contentPath, contentTarget)
{
    $.get(contentPath, function(data)
    {
        $('#' + contentTarget).fadeOut('fast', function(){
            $('#' + contentTarget).html(data);
            $('#' + contentTarget).fadeIn();
        });
    });
}
</script>

and the HTML

            <style>
                #news_picker
                {
                    width:20%;
                    height:100%;
                    overflow:auto;
                    float:left;
                }
                
                #news_details
                {
                    width:79%;
                    height:100%;    
                    overflow:auto;
                    float:left;
                }
                #loadingDiv
                {
                    background-image:url(ajaxLoader.gif);
                    background-repeat:no-repeat;
                    background-position:center;
                    z-index:150;
                    display:none;
                }    
            </style>
            
            <div id="news_picker">
                <div class="listHeader">All of our news</div>
                
                <a href="news1.html" class="ajaxLink" >Ajax link override</a>
                
            </div>
            
            <div id="news_details">
                <div id="loadingDiv">Content Loading, Please Wait.</div>
                
                <div id="news_content"> I am news data</div>
                
            </div>

This will transform any link with a class of ‘ajaxLink’ from a regular link into an ajax-loading link. Right now it is coded to push the fetched content into a div called ‘news_content’ (you could make this dynamic, or even per link, by including some attribute in the link itself that tells the function where to put the fetched content). You may want to add special-case handling for content other than text/html, such as detecting when the requested resource is an image and wrapping it in an img tag. Anyway, hope this helps someone. I thought it was pretty cool to allow users to easily create ajax links 😛
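As a quick sketch of that per-link idea, a hypothetical `data-target` attribute (my own name, not anything Siteforce provides) could tell the handler where each link's content should land, falling back to the default div when the attribute is absent:

```javascript
// Sketch, assuming a hypothetical data-target attribute on each link:
// <a href="news1.html" class="ajaxLink" data-target="sidebar_content">...</a>
// Resolve the destination div id, falling back to the default.
function resolveTarget(dataTarget) {
    return dataTarget || 'news_content';
}

// Inside the existing each() loop you would then do something like:
// var contentTarget = resolveTarget($(this).attr('data-target'));
// $(this).click(function(){ loadContent(linkTarget, contentTarget); return false; });
```

Since `$(this).attr('data-target')` returns `undefined` for links without the attribute, the fallback keeps the original behavior for plain ajaxLink anchors.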


Salesforce jQuery Calendar

Over the last year a lot of people have been excited about my Salesforce jQuery calendar. Problem is, for one, the code isn’t available due to a lack of hosting. Problem two is that it sucks. It uses a goofy Visualforce page to pass information off to the Apex class, and it can only query against one type of object. Overall, it’s pretty lame and not a good sample of the kind of work that is possible these days. So I am rebuilding it. In fact, I already have the core up and running. But now I want to know what kind of features you guys are interested in. Do you want a super bare-bones, easy-to-understand release, or a little more robust, full-featured kind of thing? Let me know in the comments what you’d like to see in a new Salesforce calendar, and I’ll see what I can do.

For those who just want my basic, super-skeletal functional framework, I’ll be releasing it sometime tomorrow. I need to clean up a few little things, then I’ll probably release it as an unmanaged package for easy install, and I’ll host the code on my new box.net account.


CloudVote emerges as Appirio Social Enterprise Toolkit

Today is a bit of a proud moment for me. My entry for the CloudSpokes contest Social Enterprise Toolkit Ideas App has graduated and been deployed for use by the public. MarketWatch did a nice little write up on the application. I’ve been helping the Appirio team make the last required tweaks, as well as overhaul the design, for the last few days (their graphic design team is quite awesome), and it looks like it is now live. You can check it out at http://m.socialenterprisetoolkit.com. It’s pretty cool to see my work move from concept, to beta, to production in a span of about 3 weeks. Although most will never know who wrote it, and won’t care, I’ll at least know, and that’s good enough for me 🙂


Cloudspokes Challenge jQuery Clone and Configure Records

Hey everyone,
Just wrapped up another CloudSpokes contest entry. This one was the clone and configure records (with jQuery) challenge. The idea was to allow a user to copy a record that had many child records attached, then let the user easily change which child records were related via a drag-and-drop interface. I have a bit of a jQuery background, so I figured I’d give this a whack. The final result, I think, was pretty good. I’d like to have been able to display more information about the child records, but the plugin I used was a wrapper for a select list, so the only data available was the label. Had I had more time, I maybe could have hacked the plugin to get extra data, or even written my own, but drag and drop is a bit of a complicated thing (though really not too bad with jQuery), so I had to use what I could get in the time available. Anyway, you can see the entry below.

jQuery Clone and Configure Record


Cloudspokes Challenge, QuickLinks

Cloudspokes wanted a simple bookmark replacer: something a little easier to use, maybe a little faster. I had been working on this for a while, stepped away to work on the open social voting challenge, and forgot how bad a shape I had left this in until only hours before the due date. So in a hurry I tried to finish it up and at least have something worth submitting. You can see a video of it in action below. Again, I think they’ll be releasing the code later, not that there is much to see.

Watch the video


Cloudspokes Challenge – Open Social Toolkit Voting App

Hey all,
Another week, another CloudSpokes challenge done. This one is the open social toolkit voting application. It allows users to create topics to vote on, lets other users vote on those topics, and hosts discussions about them. It’s integrated with Facebook and totally Force.com based. I used jQuery Mobile here to make sure it works on phones, iPads, and whatnot, and the super awesome Force.com platform for hosting and schema. Really a match made in heaven if you ask me.

You can see the demo app here

See the videos of it in action too!
Interface and Front end Video
Backend and Schema Video

I’ll be doing a post later about the nifty Facebook integration, cause to me that is the coolest part.