The Apitite Blog

How to get Tweets from the Twitter API in 10 Minutes

In this tutorial I will teach you how to create a Code Endpoint to retrieve the latest tweets for a user given their Twitter handle!

The Finished Product

Try it out below. It works! Or click here to go to the endpoint page.

Getting Started - Get your Twitter API Key!

Accessing the Twitter REST API requires you to register your application and obtain an API key to make API calls. To obtain your key, you will need to register your application here. After registering your app, click the Keys and Access Tokens tab and make note of four values (see below): Consumer Key (API Key), Consumer Secret (API Secret), Access Token, and Access Token Secret. If this is your first time at this dashboard, you probably do not have an access token created yet. Simply click the button to generate an access token (see below).

Create Your Code Endpoint

A few weeks back we announced our latest and most powerful feature: Code Endpoints. These API endpoints execute a user-defined block of code when they are called to calculate and return a value. In our case, we will fetch tweets from the Twitter API and return them to the API user.

To create a new Code Endpoint in Apitite, navigate to your API dashboard and click Create Endpoint. Follow the steps to create a Node.js Code Endpoint and enter the necessary documentation about your endpoint (see below), such as the endpoint's name, description, and root URL, then click Create.

Defining a Parameter

The endpoint we are building will take in a Twitter handle as a parameter and use it to search the Twitter API for that user's latest Tweets. To define a parameter for a code endpoint, click the Add Parameter button and enter a parameter name (in my case I chose "handle"; see below), then enter a useful description.

To retrieve this parameter in your code, call apitite.param() with the name of the parameter you want to retrieve, in our case apitite.param('handle').

Coding Your Code Endpoint

Now comes the fun part: entering the Node.js code that will retrieve the Tweets. Luckily there is an npm package we can use to easily make calls to the Twitter API. Enter the following code into the source code window, then replace the Consumer Key (API Key), Consumer Secret (API Secret), Access Token, and Access Token Secret placeholders with the values from your registered Twitter app. Hit Save, and you're done!

// Import Twitter npm package
var Twitter = require('twitter');

// Twitter API credentials
var client = new Twitter({
  consumer_key: 'ENTER YOUR CONSUMER KEY',
  consumer_secret: 'ENTER YOUR CONSUMER SECRET',
  access_token_key: 'ENTER YOUR ACCESS TOKEN',
  access_token_secret: 'ENTER YOUR ACCESS TOKEN SECRET'
});

// Get twitter handle from API user request
var handle = apitite.param('handle');

// Make call to Twitter API to get user's timeline
client.get('statuses/user_timeline', {screen_name: handle}, function(error, tweets, response) {
  if (!error) {
    apitite.done(tweets); // return the tweets to the API user
  } else {
    console.error(error); // error handling
    apitite.done('An error occurred!'); // report the failure to the API user
  }
});

A Gentle Introduction to Node.js Promises with Q: Part 1

Note: I use the terms "Node.js", "Node", and "JavaScript" somewhat interchangeably throughout this post.


I love Node.js. My first encounter with it was in 2012 while working in IBM Research. We used it, along with (at the time) some other recent and hip technologies, to build a web app. I loved how terse and flexible Node was, compared with Java. And I loved how JavaScript doesn't quite know what kind of a language it is (prototype-based? object-oriented? functional?).

Node.js can be described as JavaScript for the backend. But as anyone who has programmed anything serious in the language knows, this is not completely accurate. What really makes writing JavaScript for the backend different than writing JavaScript for the web browser is that, on the backend, nearly everything is asynchronous, non-blocking, and callback-based. For instance, to read a file, you would write something like:
var fs = require('fs');

fs.readFile('myfile.txt', 'utf8', function(err, contents) {
  if (err) {
    console.error('error: ' + err.message);
  } else {
    console.log(contents);
  }
});

Sometimes Node provides synchronous versions of asynchronous calls, but those are for the weak. A real man or woman embraces Node's asynchronous nature. But it isn't always easy, especially if you're first starting out. One of the first annoyances a new Node.js coder encounters is the infamous "pyramid of doom" that results from nested callbacks:
asyncFunc1(arg1, function(err, result) {
  if (err) {
    // Handle error
  }
  asyncFunc2(arg2, function(err, result) {
    if (err) {
      // Handle error
    }
    asyncFunc3(arg3, function(err, result) {
      if (err) {
        // Handle error
      }
      // Etc.
    });
  });
});

Before too long, you're indenting lines to half the width of your screen. Not only that, but you're having to write a million error handlers, most of which probably do the same thing, for every asynchronous call. And no one likes writing error handlers.

One oft-cited solution for this predicament is to create named callback functions at global scope. This works, but it isn't ideal. The most annoying thing about this solution is that you have to come up with names for all of these continuation functions. What do you call such functions? "everythingAfterReadingTheFirstFile"? "everythingAfterRequestingTheWebPage"? Additionally, these functions, declared at global scope, lose the benefit of closures.
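To make this concrete, here is a minimal sketch of the named-callback style, reusing the hypothetical asyncFunc1/2/3 from the pyramid above (implemented here as trivial synchronous stand-ins purely so the example is self-contained):

```javascript
// Synchronous stand-ins for real async calls (asyncFunc1/2/3 are
// hypothetical, as in the pyramid example above).
function asyncFunc1(arg, callback) { callback(null, arg + 1); }
function asyncFunc2(arg, callback) { callback(null, arg * 2); }
function asyncFunc3(arg, callback) { callback(null, arg - 3); }

var results = [];

// Named continuation functions at global scope: the pyramid flattens,
// but every step needs a name, and each function loses access to the
// enclosing local scope it would have had as a nested closure.
function afterFunc1(err, result) {
  if (err) return console.error(err.message);
  results.push(result);
  asyncFunc2(result, afterFunc2);
}

function afterFunc2(err, result) {
  if (err) return console.error(err.message);
  results.push(result);
  asyncFunc3(result, afterFunc3);
}

function afterFunc3(err, result) {
  if (err) return console.error(err.message);
  results.push(result);
}

asyncFunc1(1, afterFunc1); // results becomes [2, 4, 1]
```

The indentation problem is gone, but notice how the control flow is now scattered across three top-level functions.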

There is a much better solution, and it is called a promise. Promises are neither a new concept in programming nor are they specific to Node, but I had never used them before. There are several promise libraries out there, but one of the most popular -- and the one I use -- is called "Q". Like its Star Trek counterpart by the same name, "Q" is powerful and annoying. But it is only annoying if you don't understand it. I'm here to make it your friend.

Using Q

Everything I'm about to tell you can be learned from the Q documentation. But like man pages, the Q documentation can be a bit overwhelming, especially for someone who hasn't yet adjusted to its paradigm. So I'm going to walk you through the very basics of using Q in this first blog post. Subsequent posts will expand upon this one, until we've completely mastered the Q library and you're using it like a pro.

A promise is an object that represents the future return value -- or the thrown exception -- of an asynchronous call. There are two operations you need to know to use promises. The first is how to create a promise. The second is how to tell that promise what to do once it becomes "fulfilled" or "rejected". A promise is fulfilled when its asynchronous calls eventually return a value. A promise is rejected when an error is thrown.

Creating a Promise

There are many ways to create a promise. Here is the simplest (and least useful):
var promise = Q();
promise eventually becomes fulfilled with the value undefined. Not particularly useful. It is slightly more useful to pass in a value:

var promise = Q(7);

promise now eventually becomes fulfilled with the value 7. Apart from passing values to initialize a promise, you can also pass a synchronous function to be executed to the method fcall:

var promise = Q.fcall(function() {
  return 7;
});

Here, Q.fcall invokes the synchronous function and gives us a promise for its eventual return value, 7. At this point, you're wondering why you're even reading this blog post, as this seems completely useless. All right. Let's see a useful example. Let's see how we make a promise with fs.readFile.

fs.readFile is an asynchronous function that returns nothing (undefined) and takes a callback. We can make a new function that mimics fs.readFile, but does *not* take a callback, and instead returns a promise. This is how we do that:

var readFileQ = Q.denodeify(fs.readFile);

Q.denodeify takes in a function that expects a callback of the usual Node.js form (err, result) and produces a new function that takes no callback, and instead returns a promise. So we can now write:

var promise = readFileQ('myfile.txt', 'utf8');

When that line is executed, promise does not contain the contents of the file. After all, readFileQ is an asynchronous call. It instead contains an object that, at some point in the future, will contain the contents of the file (or an exception, if myfile.txt does not exist). So how do we get at those precious file contents? Do we wait around for a few seconds, perhaps with a setTimeout call? NO! Read on.
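Before moving on, it may help to see roughly what a denodeify-style helper does under the hood. This is a simplified sketch, not Q's actual implementation (I use the standard Promise constructor here to keep it self-contained; Q's real version handles additional edge cases):

```javascript
// A simplified, hypothetical version of what denodeify does: wrap a
// callback-taking function so it returns a promise instead.
function denodeify(nodeFunc) {
  return function() {
    var args = Array.prototype.slice.call(arguments);
    return new Promise(function(resolve, reject) {
      // Append a Node-style (err, result) callback that settles the promise.
      args.push(function(err, result) {
        if (err) reject(err);
        else resolve(result);
      });
      nodeFunc.apply(null, args);
    });
  };
}

// Usage with a toy async function of the usual Node.js shape:
function addAsync(a, b, callback) {
  setImmediate(function() { callback(null, a + b); });
}

var addQ = denodeify(addAsync);
addQ(2, 3).then(function(sum) {
  console.log(sum); // logs 5
});
```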

Telling a promise what to do once it becomes fulfilled or rejected

To tell a promise what to do once it becomes fulfilled, you pass in a fulfillment handler to the promise method then:

promise.then(function(contents) {
  console.log(contents);
});
To tell a promise what to do in the unfortunate circumstance that it is rejected, you pass in an error handler as the second argument to then:

promise.then(function(contents) {
  console.log(contents);
}, function(err) {
  console.error('error: ' + err.message);
});

What does the method then return? Another promise! So, you could have written:

promise = promise.then(function(contents) {
  console.log(contents);
}, function(err) {
  console.error('error: ' + err.message);
});
Now, promise represents the return value of either its fulfillment handler or rejection handler, depending on which was actually called. But neither handler actually returned a value. They both just outputted some message, so promise eventually represents the value undefined. While the handlers could have returned a value like 7, it would have been even more powerful if they'd returned another promise! In that case, promise would have "become" that new promise, representing its eventual return value. This is best illustrated with an example:

var promise = readFileQ('myfile.txt', 'utf8')
.then(function(myFileContents) {
    console.log(myFileContents);
    return readFileQ('anotherfile.txt', 'utf8');
}, function(err) {
    console.error('error: ' + err.message);
})
.then(function(anotherFileContents) {
    console.log(anotherFileContents);
}, function(err) {
    console.error('error: ' + err.message);
});

Now, if all went well, the program would have read in two files and outputted their contents. But there are two annoyances here. First, we had to write the same error handler twice -- once for the first call to readFileQ, and again for the second call to readFileQ. Couldn't we just write one error handler? And second, what if, after the first readFileQ call fails, we don't want to try to read the second file? The answer to both problems is another promise method, called fail:

readFileQ('myfile.txt', 'utf8')
.then(function(myFileContents) {
  console.log(myFileContents);
  return readFileQ('anotherfile.txt', 'utf8');
})
.then(function(anotherFileContents) {
  console.log(anotherFileContents);
})
.fail(function(err) {
  console.error('error: ' + err.message);
});

This code snippet illustrates how we can attach one error handler to the end of the "promise chain" that will be called in the event that either the original call to readFileQ or any of the handlers throws an exception. If an error occurs at any point in the chain, the rest of the chain is skipped, and the error handler passed to fail is executed.

Hopefully you see what I mean by the "promise chain". Promises can be chained together using the method then (and other methods I'll discuss in subsequent posts). The result of such a chain is one big promise that becomes fulfilled when the last of the asynchronous functions returns.

There is a lot more I have to say on the subject of promises and Q. I've shown you just a fraction of what can be done with them, and I've only just started to explain their countless benefits. But hopefully this whets your whistle for more.

For the next part of this tutorial, I'll go into more detail about how promises work, and explain how to do things like make a promise for multiple asynchronous calls executed in parallel. Stay tuned!


How Do I Create Code Endpoints?

Earlier today, we announced our newest and most powerful feature to date: code endpoints!

When you make an API call to a code endpoint, a block of user-defined code is run and returns a result. Code endpoints make it possible to create API endpoints with sophisticated functionality, without the necessary but time-consuming API legwork like security, rate-limiting, scalability, and server administration.

To create a code endpoint, all you need to do is enter source code directly into Apitite using your web browser and hit Save. That's it! No build system, no deploying, and no spinning up servers. And as usual, we take care of the security, rate-limiting, scalability,  and server administration of your API for you. Node.js is the first programming language we support for code endpoints. Eventually we will release support for additional languages like Python, Java, and .NET.

The apitite Object

To make things easy, we provide an object called apitite for performing endpoint actions like returning a result or getting a request parameter, as well as providing convenience methods for calling other existing Apitite endpoints. Here are all the methods in the apitite object: 

apitite.done(result)
// Returns the given result to the API endpoint caller.

apitite.param(name)
// Retrieves the value of the request parameter with the given name.

apitite.get(apiSlashEndpoint, params, callback)
// Convenience method for performing a GET request on an Apitite endpoint.

apitite.post(apiSlashEndpoint, params, callback)
// Convenience method for performing a POST request on an Apitite endpoint.

apitite.put(apiSlashEndpoint, params, callback)
// Convenience method for performing a PUT request on an Apitite endpoint.

apitite.delete(apiSlashEndpoint, params, callback)
// Convenience method for performing a DELETE request on an Apitite endpoint.

All callbacks have the typical Node.js signature of
    (err, result) -> undefined
If the apitite method failed for some reason, err will be an Error object with a message property explaining the error. If the apitite method succeeded, err will be null and the result of the operation will be stored as the second argument. The "-> undefined" part of the signature simply means that the callbacks you provide should not return a value. If any of this seems strange to you, please read up on how Node.js works -- it is a non-blocking, asynchronous implementation of JavaScript based mostly on callbacks.

Returning Data

To return data when your code endpoint is finished executing, you simply call apitite.done. Here is an example of calling apitite.done and passing a string: 

     apitite.done("the return value!"); // returning a string

You can pass any type of data to apitite.done, for example :

           apitite.done(1336);       // you can return a number
           apitite.done(false);      // or a boolean
           apitite.done(null);       // or null
           apitite.done(['foo', 'bar', 'baz']); // or an array
           apitite.done({            // or an object!
              success: true,
              result: "mo data, mo problems"
           });

Calling Existing APIs

A powerful feature of code endpoints is that you can easily call existing Apitite endpoints, making it easy to combine the results of multiple endpoints to create powerful API mashups. To make a request to an existing Apitite endpoint, simply use the apitite.get, apitite.post, apitite.put, or apitite.delete methods, and pass them the target endpoint's API and endpoint path without a leading slash. Here is an example of how to call an existing Apitite endpoint with the apitite object.

           apitite.get('sales-api/sales-endpoint', {
               startDate : "2015-01-01",
               endDate   : "2015-07-01"
           }, function(err, salesData) {
               if (err)
                 apitite.done('Error: ' + err.message);
               else
                 apitite.done(salesData); // return the sales data to the caller
           });

NOTE: All Apitite GET requests made in this manner return a JSON object, so you do not need to append /json to the end of the path, as you would if you were making a request in your own application.

Off-the-Shelf Functionality With npm Packages

npm is an amazing repository with tons of extremely useful software packages (libraries) that you can call from your code endpoint. By default, we've installed the most popular npm packages we could find. To import an npm package, simply require the desired package name, like you would in any Node.js app. Here's how to import the extremely useful underscore.js library:

                           var _ = require("underscore");

If there is an npm package that you need but is not listed on the manage code endpoint page, please email us. Note that certain npm packages may be rejected for security reasons.

Get Free Help Creating Your Code Endpoints

For a limited time we will help you set up your code endpoints for free! That's free engineering! Just enter your contact information and your API or data problem in the form below:

Introducing Node.js Code Endpoints: Create powerful endpoints without the work

Code Endpoints Are Here!

Today we are releasing our newest and most powerful feature to date: code endpoints. 

When an API call is made to a code endpoint, a block of user-defined code is run and returns a result. Code endpoints make it possible to create API endpoints with sophisticated functionality, without the necessary, but time-consuming, legwork like security, rate-limiting, scalability, and server administration. 

In short: if you can code it, it's possible on Apitite. For example, I created a code endpoint in 10 minutes that performs sentiment analysis on a given block of text to report how positive (i.e. happy) or negative (i.e. sad or angry) it is.

Apitite database endpoints are still great for quickly setting up endpoints that interact with a database, the backbone of any API. Code endpoints, on the other hand, allow you to define endpoints with complex functionality, as well as make API requests to existing Apitite endpoints. This means you can quickly and easily create API mashups by combining existing endpoints with a code endpoint. You can also create endpoints that make use of off-the-shelf software libraries.

How Do Code Endpoints Help My Business?

You might be asking yourself, "Why not just build my own API from scratch if I have to enter code anyways?" 

Apitite does the heavy lifting for you, so that you can get an enterprise-grade API in a fraction of the time. We handle the hosting of your API, which means you don't have to provision a server; we take care of security concerns like DDoS attacks and rate-limiting; and we handle the performance and scalability of your API. Moreover, you can modify and control your API directly from your Apitite dashboard, which means you never have to deploy code or worry about build systems.

What Can I Build With Code Endpoints?

The possibilities are endless! Here are a few examples of things you can create and do with code endpoints:
  • Combine your data with data from other APIs like Twitter, Facebook, Linkedin, etc.
  • Create API mashups
  • Build an authentication API
  • Build an analytics API
  • Create a text analytics service API

Get Free Help Creating Your Code Endpoints

For a limited time we will help you set up your code endpoints for free! That's free engineering! Just enter your contact information and your API or data problem in the form below:

How Do I Create Code Endpoints?

To learn how to build code endpoints, check out our follow-up blog post!

Press Release - Apitite Selected for MassChallenge 2015 Accelerator

Apitite Selected for MassChallenge 2015 Accelerator
Apitite to gain access to industry resources and expert mentors

Boston (May 20, 2015) – MassChallenge, a Boston-based startup accelerator, today announced that Apitite is among the list of high-potential startups accepted to its 2015 program, which kicks off this summer. The four-month long program includes access to resources that are vital for an early-stage company, including expert guidance, office space, and a network of hundreds of entrepreneurs, mentors, investors, and executives.

“This is an amazing recognition of all the hard work we’ve put in to making our technology as high-performing as possible,” said Apitite co-founder and CEO Tristan Ratchford. “At Apitite, we believe that an easy-to-use API can have immense business impact; it’s great to see that MassChallenge agrees with our vision. Tapping into the collective brain power of so many experts will help take Apitite to the next level and ensure that we’re delivering even more value to our customers.”

Founded in January 2015 by two former IBM engineers, Apitite currently serves customers of varying sizes across several vertical industries. Its pricing model offers options for both startups and enterprise-class organizations alike, and the solution itself can be put to many different uses – including creating an API from a MongoDB or using an API to power a Geckoboard.

“The 200+ startups that will be entering our 2015 accelerator include many of the highest-impact startups in the world,” said MassChallenge CEO John Harthorne. “From Boston to London to Israel and beyond, the global distribution of these early-stage companies highlights the universal need and opportunity to provide startups with access to the people and resources that will help them launch and succeed.”

Ratchford, along with Apitite co-founder and CTO Todd Soule, will move into MassChallenge’s Seaport headquarters in June. For additional information, including Apitite pricing details, please visit

About Apitite:
Named a “startup to watch” by Bostinno, Apitite’s mission is to help businesses leverage APIs effectively and efficiently. Its product lets you enjoy the benefits of an API – without all the work. With Apitite organizations can create a complete and secure API in minutes without having to write a single line of code, enabling companies of all sizes to share data easily and flexibly.

About MassChallenge:
MassChallenge runs startup accelerators designed to catalyze a global renaissance by connecting high-impact startups with the resources they need to launch and succeed. Anyone can apply to MassChallenge, with any early-stage startup, in any industry, from anywhere in the world. MassChallenge does not take equity or place any restrictions on the startups it supports.

With programs in Boston, Israel and the UK, MassChallenge provides entrepreneurs with mentorship, office space, education, access to a vast network, and other resources during four months of acceleration. MassChallenge awards over $1.75 million in non-dilutive grants to the startups demonstrating the highest impact and highest potential. A nonprofit organization, MassChallenge is funded by corporate, public and foundation partners. To date, the 617 MassChallenge alumni have raised over $950 million in funding, generated $486 million in revenue and created 5,104 jobs. For more information, visit

Top MassChallenge Boston partners include: Jamestown, Fidelity Investments, Verizon, the Richard and Susan Smith Family Foundation, CASIS, the Commonwealth of Massachusetts, The Deshpande Foundation, The Biscay Government, The Boston Foundation, Combined Jewish Philanthropies of Greater Boston, Bühler, American Airlines, Boehringer Ingelheim, Bose, EMC, Henkel, iNNpulsa Colombia, Microsoft, PepsiCo, Pfizer, Smith & Nephew, Thomson Reuters, Turnstone, WGBH Boston, UMass Amherst, and Zapopan.

MassChallenge UK partners include: NatWest, Unilever, Visa Europe, Satellite Application Catapult, Digital Catapult, Hult International Business School, Tech City UK, EMC, The Manchester Growth Company, GCHQ, Fried Frank, ViiV, UK Business Angels Association, Angels Den, Media City UK, The Landing, Taylor Wessing, The BIG Partnership, The Collective, Crowdcube and Tobacco Dock.

Apitite Media Contact
Tristan Ratchford

Boston Media Contact
Shannon Sullivan
+ 1 888 782 7820 x 726

UK Media Contact
Hailey Woldt
+ 44(0) 7872 629 960

Israel Media Contact
Moran Nir
+ 972 52 3314421

Recap : MongoDB and Apitite Webinar

Thank you for attending our webinar on How to Create an API for MongoDB in Minutes with Apitite! This was the first Webinar Apitite has ever done and we really appreciate people taking time out of their day to check it out.

Webinar Recording

In case you missed the webinar, or if you wanted to view it again, here is a full video recording.

Webinar Slides

If you don't like the sound of our voices and just want the slides you can download them here:

Webinar API and Sample Application

During the webinar we completed two tasks: creating an API endpoint in five minutes to get the latest orders from a MongoDB sales database, and building an interactive web application to compare orders and revenue from different years.

Task 1 - Creating an API in minutes for MongoDB

For the first task, we demonstrated how to connect a MongoDB datasource to Apitite and create an API endpoint that retrieves ten sales orders with pagination. Pagination means that a user of the endpoint can pass a "skip" parameter to retrieve a different page of results depending on how many records to skip. For example, if the user passed in a skip parameter of 0, the first ten records would be returned. If the user passed a skip parameter of 5, records 6-15 would be returned.
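The skip logic described above can be sketched in plain JavaScript (the actual endpoint achieved the same effect with MongoDB's .skip() and .limit() cursor methods; the page size of ten matches the webinar example):

```javascript
// Sketch of skip-based pagination on an in-memory array. The real endpoint
// ran the equivalent MongoDB query, roughly:
//   db.orders.find().skip(skip).limit(10)
var PAGE_SIZE = 10;

function getPage(records, skip) {
  // Skip the first `skip` records, then return the next page of ten.
  return records.slice(skip, skip + PAGE_SIZE);
}

// Toy data: 20 order labels
var orders = [];
for (var i = 1; i <= 20; i++) {
  orders.push('order ' + i);
}

getPage(orders, 0); // records 1-10
getPage(orders, 5); // records 6-15
```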

Task 2 - Visualize your data with an interactive, lightweight, client-side-only web app

A screenshot of the finished product. 
In the second task, we demonstrated how to create an API endpoint that used the MongoDB aggregation pipeline to filter, group, and sort sales data into a time series of orders per day for a given year that we could graph. The finished API endpoint can be viewed here and users can pass parameters to the endpoint to specify the date range of data to retrieve. Click below to check out a live working demo of the application we built.

Live Demo - Comparing Sales Data
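The filter, group, and sort steps from Task 2 can be sketched in plain JavaScript. This is only an illustration of the shape of the computation (the real endpoint used MongoDB's $match, $group, and $sort pipeline stages, and the field names below, such as date as a 'YYYY-MM-DD' string, are assumptions):

```javascript
// Plain-JS sketch of the filter/group/sort steps the MongoDB pipeline performed.
// Assumed input shape: [{ date: 'YYYY-MM-DD' }, ...]
function ordersPerDay(orders, year) {
  var counts = {};
  orders.forEach(function(order) {
    // Filter ($match): keep only orders from the requested year
    if (order.date.slice(0, 4) !== String(year)) return;
    // Group ($group): count the orders for each day
    counts[order.date] = (counts[order.date] || 0) + 1;
  });
  // Sort ($sort): emit the time series in date order
  return Object.keys(counts).sort().map(function(day) {
    return { day: day, orders: counts[day] };
  });
}

ordersPerDay([
  { date: '2015-01-02' },
  { date: '2015-01-01' },
  { date: '2015-01-01' },
  { date: '2014-06-30' }
], 2015);
// → [{ day: '2015-01-01', orders: 2 }, { day: '2015-01-02', orders: 1 }]
```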

Webinar Sample Application Source Code

Click the link below to get the source code of the sample application we built from GitHub.

Comparing Sales Data (Sample Application) GitHub Repository

Thanks again!

We wanted to thank everyone again for attending. If you have any questions or feedback, please send us an email.

Webinar: How to Create an API for MongoDB in Minutes

Join us for a free webinar on April 29th at 1:30pm (ET). The Apitite team will show you how to create an API for MongoDB in minutes, without having to write a single line of backend code!

This webinar is for you if...

You have loads of data in your MongoDB database, just waiting to be put to use. Maybe you want to visualize it. Maybe you want to build an application that lets you search, sort, and sift through it. Maybe you want to feed it into another system. Maybe you're not a technical person, and you just want to download some data to your computer, or share it with a colleague. The problem is:

  • You don't have the time to build a whole web application. You have a day job.
  • You may not have the technical expertise to do this.
  • You don't want to deal with deploying, hosting, and maintaining another application.
  • You don't want to deal with authenticating and authorizing users of your API.
  • You want to make rapid changes and tests to your API without having to rebuild and redeploy.
  • Even if you have the technical resources to build an API yourself, you would like to spend minutes rather than days doing it.
If you can relate to ANY of these points, this webinar is for you. We'll show you how to use Apitite to create and host an API for your MongoDB database, without having to worry about things like coding, hosting, or security.

The topics

1. Create an API in five minutes that allows someone to download your MongoDB data. Neither you nor the end user needs to write a single line of code.

2. Visualize your data in a lightweight, client-side-only web application.

3. Build an interactive web application to view your data dynamically.

The first topic does not require any coding whatsoever. The last two topics require knowledge of HTML and JavaScript, but no back-end coding is required.

The team

Tristan Ratchford
Co-founder and CEO of Apitite.
Todd Soule
Co-founder and CTO of Apitite.
Tristan and Todd worked at IBM Research prior to founding Apitite. They have a passion for creating great, easy-to-use products that make arduous tasks painless.

The details

The webinar will take place Wednesday, April 29th at 1:30pm (ET). There will be free giveaways and all material will be available for download, so you can create your own API to visualize your data on the same day! In the meantime, check out Apitite for yourself!
Click Here to Register

Tutorial: Using Apitite to Power your Geckoboard (30 mins)

Figure 1. A line chart in Geckoboard


In this tutorial, I will show you how to hook your database directly into Geckoboard using Apitite to create custom dashboards. It doesn't matter if your database is relational or NoSQL; Apitite makes it possible! (Time: ~30 mins)

Geckoboard - Build Powerful Custom Business Dashboards

I recently stumbled across Geckoboard, a powerful tool for creating custom dashboards for your business. In a nutshell, you build your dashboard by adding widgets that "transform your raw data into simple visualizations". Visualizations include gauges, meters, leaderboards, and charts of all kinds. I'm blown away by Geckoboard because of its flexibility and how sleek it looks. While working at IBM Research I spent two years developing analytics dashboards, so I understand the importance of having actionable insights at your fingertips.

A really powerful aspect about Geckoboard is how you can integrate existing services your business uses, such as Twitter, Google Analytics, and Hubspot, to create useful dashboard visualizations. If you browse Geckoboard's integrations page you will see there are more integration points than you can shake a stick at.

Visualizing Custom Data with Geckoboard

Geckoboard can also visualize custom analytics that you create as well. All you need to do is point your widget at a URL that returns some JSON or XML and you're ready to rock. However, suppose you want to create a custom visualization from data that is stored in your database. The natural answer would be to create a web app that accesses your database and serves up the data as an API endpoint. This approach requires coding up a web app, deploying it on a server, and maintaining this server.

Some databases also offer a REST interface to query them over HTTP. The issue with this approach is that your database layout must match the Geckoboard widget JSON structure exactly. Figure 2 shows the Geckoboard line chart documentation along with the JSON structure it expects.

Figure 2. 

A traditional relational database (e.g. MySQL or SQL Server) cannot output JSON like this on its own, because relational tables are organized as flat rows and columns, while JSON objects very often contain nested objects and arrays.

Using Apitite to Create Custom Analytics

Great news! You can use Apitite to create your custom analytics, and with the new Apitite JavaScript Transformation Code feature you can translate the data returned by an endpoint into any JSON format. This means you can create an API endpoint that hooks your database directly into Geckoboard without having to build an application to translate the data! The best part is that it doesn't matter if your database is relational (e.g. MySQL or PostgreSQL) or NoSQL (e.g. MongoDB).


In an older post, I created a Toronto Raptors API from a dataset of historical player totals. I will be using the same dataset and API for this tutorial to create a line chart that plots the average total player points per season. The dataset is stored in a MySQL database, which means out of the box the data returned from an endpoint will not be in the JSON format Geckoboard expects. Figure 3 shows some sample data from this dataset. As you can see, the data is stored in the standard relational format of rows and columns.

Figure 3. A sample of the Toronto Raptors player total dataset. 

Step 1 - Create a New Custom SQL Endpoint

To get started, I created a new Custom SQL Endpoint for my Toronto Raptors API called Average Player Points per Season and added some descriptive information about the endpoint. A Custom SQL Endpoint allows you to interact with your data source (e.g. select, insert, update, and delete data) by providing a simple SQL query.

Figure 4. Creating a new endpoint for my Geckoboard visualization
Figure 4 displays how I created my Custom SQL Endpoint. The important thing to note is the URL Root: this is the URL that Geckoboard will use to poll my API. You can view the actual endpoint here and play around with it.

Step 2 - Create Custom SQL Query

Figure 5. SQL query used to calculate average player totals per year
I want my Geckoboard line chart to plot the average player totals per season, so I created a SQL query (shown in Figure 5) that computes this and entered it in my endpoint's SQL Query text field.
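The actual query is the one in Figure 5; a query along these lines would compute the per-season average (the table and column names here are my guesses based on the sample data in Figure 3, so adjust them to your schema):

```sql
-- Hypothetical reconstruction of the Figure 5 query;
-- table and column names are assumptions.
SELECT season_end, AVG(points) AS avg_points
FROM player_totals
GROUP BY season_end
ORDER BY season_end;
```

The `GROUP BY season_end` collapses each season's player rows into a single averaged row, which is the flat row-per-season shape you will see returned in Figure 6.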

Figure 2 above shows the JSON layout the Geckoboard line chart expects. However, if you access your new endpoint at this time, the data will be flat because it is coming from a relational table, as shown in Figure 6. All the data is there, but it's frustratingly not in the correct JSON format. Apitite JavaScript Transformation Code to the rescue!

Figure 6. The data returned from my endpoint without data transformation

Step 3 - Transform your Data with the Apitite JavaScript Transformation Code

The JavaScript Transformation Code feature lets Apitite run a piece of JavaScript on the data returned from an endpoint, which is extremely handy for performing transformations on data (e.g. rounding values) or translating it into a particular JSON structure. To use this feature, go to the Manage Endpoint page for your Custom SQL Endpoint, scroll to the bottom, and twist open the JavaScript Transformation Code box. There you will find a text field with a JavaScript stub function and instructions on how to build your own transformation code. The parameter passed to the stub function is the JSON data returned from your endpoint.
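As a rough illustration, a transformation along these lines turns flat rows into the nested line chart payload. The function name, its signature, the row field names (`season_end`, `avg_points`), and the output key names are all assumptions for the sketch; Apitite generates the real stub for you, and the real code is shown in Figure 7.

```javascript
// Hypothetical sketch of a transformation function. The signature and
// the row/payload field names are assumptions, not Apitite's actual API.
function transform(rows) {
  var labels = [];
  var points = [];
  rows.forEach(function (row) {
    labels.push(String(row.season_end));     // line chart labels must be strings
    points.push(Math.round(row.avg_points)); // round the averages for display
  });
  return {
    item: points,
    settings: { axisx: labels, axisy: ["min", "max"] }
  };
}
```

For example, `transform([{ season_end: 2013, avg_points: 317.4 }])` yields string labels and whole-number points without touching the underlying database, which is the whole appeal of doing the reshaping at the API layer.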

Figure 7. Apitite JavaScript Sandbox code used to translate rows and columns into JSON Geckoboard can use.

Figure 7 shows the function I created to transform the data in Figure 6 to JSON that Geckoboard can recognize. Since the Geckoboard line chart expects Strings as labels, I had to convert the year returned from my endpoint into a string (see line 7 in Figure 7). Also, I wanted to make the average point stats nice and round, so I used the Math.round(...) function on line 8 of Figure 7. With the Apitite JavaScript Sandbox I was able to make these transformations without having to modify my underlying database. Now, if you look at Figure 8 below, the data returned from our endpoint is all ready for Geckoboard! Click here to check out the endpoint for yourself.
Figure 8. Data output from my endpoint after data transformation

Step 4 - Hooking Geckoboard to Apitite

We're almost done! Next, all I needed to do was log into Geckoboard and add a Line Chart widget.

Figure 9. Connecting my Geckoboard Line Chart to my Apitite endpoint
Figure 9 shows the options I entered for my line chart. The most important step is specifying your endpoint's URL in the URL data feed field and setting the line chart to Polling, which tells Geckoboard to periodically poll your endpoint for the data that populates the chart. One thing to note is that the endpoint URL you enter must have /json appended to it, because Geckoboard makes a programmatic request. Try it out! It works.

Step 5 - That's it! Kick back and enjoy your new visualization!

Navigate back to your dashboard and check out your new visualization! Figure 10 shows the finished product. Before you start commenting on why the Raptors barely scored anything in 2014: it turns out my dataset has an error in it. The season_end column is really the season_start column, and the 2014-2015 season is only half over, which explains why the average is only 224 points.

Figure 10. The finished product! Go Raps Go!


Geckoboard is great for creating custom dashboards, but you normally need to put in some legwork to feed Geckoboard custom analytics from your database, especially if your data is in a relational database. With Apitite, however, you can create an API that feeds Geckoboard directly, regardless of whether you are running a traditional relational or a NoSQL database.