Idea Tracker Tutorial Part 2
Introduction
This two-part tutorial will show you how to connect a custom app to the HubSpot CRM. The example app we’ll be using creates a simple customer idea forum with an interface where users can sign up, log in, and post ideas.
This is Part 2 of the tutorial—as a reminder, in Part 1 we connected the app to a HubSpot account via OAuth, created properties, and synced contacts and companies.
Concepts
Just like in Part 1, we’ll be introducing a few key concepts here:
- Deploying your application so it can be reached outside of your local environment
- Creating and using a task queue
Both of these are important for handling webhooks from HubSpot.
Technology
Part 2 has a few additions to the technology used in Part 1. However, keep in mind that you can use any technology that fits your needs. Here’s what we recommend:
- Kafka for the message queuing system
- Zookeeper for managing the Kafka configuration
- Google Compute Engine for hosting the app
The Tutorial
Part 2 picks up right where Part 1 left off. The directory structure is the same. The biggest change is the introduction of two new services in the `docker-compose.yml` file, Kafka and Zookeeper. To help interact with these services, the `.env.template` file has been updated with new environment variables that you can use.
Extending the CRM
You can display rich information about app activity in the HubSpot CRM using the Timeline API. Timeline events are structured data that describe things that happen at a particular point in time. Timeline events are immutable (they cannot be changed) because once something has happened, you can never go back in time to change that event. In this particular use case, a user can add an idea. They can later update or delete the idea, but that doesn’t change the fact that at some point they created it. If you want to capture those updates and deletes, you create new timeline events.
Each timeline event must have its structure defined before you can tell HubSpot about it. To define an event template, log into your developer account (the same one you used to get your OAuth credentials). In your app settings, create a Timeline Event Type and set the “Target Record Type” to Contact. In the Data tab, create two properties with the type `String`, named `idea_title` and `idea_detail`. Finally, you can either set the event header and detail templates to whatever you want, or use this example:
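The original template snippet isn’t reproduced here, but since HubSpot timeline templates use Handlebars-style token substitution, a minimal header template built from the `idea_title` token defined above could be as simple as:

```
New idea: {{idea_title}}
```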
With a body template of:
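Again as an illustrative sketch rather than the original snippet, the detail (body) template can simply render the `idea_detail` token:

```
{{idea_detail}}
```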
Be sure to take note of the `eventTypeId`, which you’ll find at the end of the URL where you’re editing the templates.
Now it’s time to actually start sending the events. First, you’ll need to trigger an API call from the `web_service` to the `hubspot_service` when a new idea is created. Do this by creating a function that takes the idea and sends it to the `hubspot_service`:
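Here’s a sketch of what that function could look like. The function name, route path, and `HUBSPOT_SERVICE_URL` environment variable are assumptions for illustration, not the tutorial’s actual code:

```js
// web_service: forward a newly created idea (plus a fresh access token)
// to the hubspot_service over the internal Docker network.
const axios = require('axios');

const sendIdeaToHubSpotService = async (idea, accessToken) => {
  try {
    await axios.post(`${process.env.HUBSPOT_SERVICE_URL}/timeline-event`, {
      idea,
      accessToken,
    });
  } catch (err) {
    // A failed timeline call shouldn't block idea creation
    console.error('Failed to notify hubspot_service:', err.message);
  }
};

module.exports = { sendIdeaToHubSpotService };
```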
Further down in the same file, modify the idea creation handler to actually call that function.
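As a sketch, assuming an Express router and a Mongoose-style `Idea` model (the model, field, and helper names below are illustrative, not the tutorial’s exact code):

```js
// web_service: idea creation route
const express = require('express');
const Idea = require('./models/Idea');                      // hypothetical model
const { getFreshAccessToken } = require('./oauth');         // hypothetical helper
const { sendIdeaToHubSpotService } = require('./hubspot');  // function from above

const router = express.Router();

router.post('/api/ideas', async (req, res) => {
  // Save the idea coming from the client
  const idea = await Idea.create({ ...req.body, author: req.user.id });

  // Populate the author so the timeline event shows a name, not just an ID
  const populatedIdea = await idea.populate('author');

  // Generate a new access token, then hand the idea off to the hubspot_service
  const accessToken = await getFreshAccessToken();
  await sendIdeaToHubSpotService(populatedIdea, accessToken);

  res.json(populatedIdea);
});

module.exports = router;
```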
This code receives the idea from the `client`, saves it to the database, and uses it to populate information about the author of the post. This gives you the actual name of the author rather than just the ID. From there, you’re generating a new access token and then passing the populated idea over to the `hubspot_service`.
Next, you’ll need to handle this API call and send it over to HubSpot.
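A sketch of that handler, assuming the v3 Timeline Events endpoint (`POST /crm/v3/timeline/events`); the route, env var names, and idea field names are assumptions, so check the Timeline API docs against your own template:

```js
// hubspot_service: receive the populated idea and create a timeline event
const express = require('express');
const axios = require('axios');

const router = express.Router();

router.post('/timeline-event', async (req, res) => {
  const { idea, accessToken } = req.body;

  await axios.post(
    'https://api.hubapi.com/crm/v3/timeline/events',
    {
      eventTemplateId: process.env.TIMELINE_EVENT_TYPE_ID, // the eventTypeId noted earlier
      email: idea.author.email, // maps the author to the contact for this event
      tokens: {
        idea_title: idea.title,
        idea_detail: idea.detail,
      },
    },
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );

  res.sendStatus(200);
});

module.exports = router;
```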
This code takes that populated idea and maps its author to the contact you want to associate with the timeline event. It also fills in information about the idea itself in the `tokens` object. These tokens will populate the template you created earlier. Your code should now look like this.
Hosting
Up until this point, your app has only been available on your local machine. Now you need to start receiving webhooks from HubSpot and your app needs to be available to the larger Internet. You’ll need a couple of things in order to do this:
- A hosting provider
- A domain
- An SSL certificate
For this tutorial, we chose Google Compute Engine as the hosting provider, as well as a domain and SSL certificate that work behind the HubSpot corporate firewall. It’s worth noting that the HubSpot platform uses both AWS and Google Compute. You can use the providers you feel most comfortable with. To host with Google Compute Engine, follow Google’s instructions. The only adjustment we made was to use a container-optimized image, and we recommend you do the same.
Once you have a compute instance running, you need to `ssh` into it using the tools provided in the Google Console or your favorite CLI. Next, you’ll need to get your files up to this compute instance. If you’re pushing your code to a public GitHub repository, the easiest way to do this is to clone the repository while ssh’d into the instance. If you’re not, you can use SCP (Secure Copy Protocol) from your terminal to transfer your files up to the compute instance. To optimize upload time, make sure you remove your `node_modules` folders before starting.
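For example, using the `scp` wrapper built into the `gcloud` CLI (the instance name and zone below are placeholders):

```sh
# Copy the project (with node_modules removed) up to the compute instance
gcloud compute scp --recurse . my-instance:~/idea-tracker --zone=us-central1-a
```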
You’ll also need an SSL certificate for your domain. For now, you can place the certificate and private key in a folder in the `web_service` directory. We’ll cover how to use them later.
Deploying your app
Up until this point, your app has been using a development server to provide hot reloading on the front end of the app. That setup is great for development because you don’t have to refresh the page to see changes to your code, but it’s not so great for production because it isn’t optimized for speed. For production, you need to move some of the processes that used to be carried out by the `client` Docker service into the `web_service`, and have Express serve the files instead of the development server from the `client` service. To do this, you will need to create two different files at the root of the project. First is the Dockerfile:
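The original Dockerfile isn’t reproduced here; a minimal multi-stage sketch along these lines, with assumed directory names and Node version, might look like:

```dockerfile
# Build the client bundle, then copy it into the web_service image
FROM node:18 AS client-build
WORKDIR /client
COPY client/package*.json ./
RUN npm install
COPY client/ ./
RUN npm run build

FROM node:18
WORKDIR /web_service
COPY web_service/package*.json ./
RUN npm install --production
COPY web_service/ ./
# Static files produced by the client build stage, served by Express
COPY --from=client-build /client/build ./build
CMD ["node", "src/index.js"]
```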
This is a new set of instructions that `docker-compose` will use to create a production-ready `web_service`. The second file you need to create is `docker-compose.prod.yml`:
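A sketch of what that file could contain; the service names follow the tutorial, but the images, ports, and environment values are assumptions:

```yaml
version: "3"
services:
  web_service:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "443:3000" # host port is an assumption; Express listens on 3000 inside the container
    env_file: .env
    depends_on:
      - kafka
  hubspot_service:
    build: ./hubspot_service
    env_file: .env
    depends_on:
      - kafka
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```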
This tells `docker-compose` how all the different services fit together once they’re built according to the specifications in their Dockerfiles. You can now run `docker-compose -f docker-compose.prod.yml up --build` while ssh’d into your compute instance and view your application on a live URL.
Receiving your first webhook
To receive webhooks for the `hubspot_service`, which isn’t publicly accessible, you need to proxy requests through the `web_service`.
To avoid repeating concepts you’ve already covered, the webhook router is already set up for you. The specifics of communicating with the Kafka service are also already set up in `./hubspot_service/src/webhook.js`.
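For reference, the proxying in the `web_service` amounts to something like this simplified sketch (not the repository’s exact router):

```js
// web_service: forward HubSpot webhooks to the internal hubspot_service
const express = require('express');
const axios = require('axios');

const router = express.Router();

router.post('/webhook/platform', async (req, res) => {
  // Pass the payload straight through; the hubspot_service owns the Kafka logic
  const response = await axios.post(
    `${process.env.HUBSPOT_SERVICE_URL}/webhook/platform`,
    req.body
  );
  res.sendStatus(response.status);
});

module.exports = router;
```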
Kafka is just one particular technology for accomplishing your goal of queuing up webhooks to process; the specifics aren’t important. Here’s the general idea of how Kafka fits in: the `hubspot_service` opens a connection to Kafka based on information coming from the `.env` file. From there, it sets up a topic that can be subscribed to elsewhere in the app. If needed, this lets other services know about webhooks coming from HubSpot. You can now start receiving webhooks from HubSpot.
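As a rough illustration of that setup, assuming the kafkajs client (the repository’s `webhook.js` may use a different library and variable names):

```js
// hubspot_service: connect to Kafka using values from the .env file
const { Kafka } = require('kafkajs');

const kafka = new Kafka({
  clientId: 'hubspot_service',
  brokers: [process.env.KAFKA_BROKER], // e.g. kafka:9092 on the Docker network
});

const producer = kafka.producer();
const WEBHOOK_TOPIC = process.env.KAFKA_WEBHOOK_TOPIC; // topic other services subscribe to

const connectProducer = async () => {
  await producer.connect();
};

module.exports = { producer, WEBHOOK_TOPIC, connectProducer };
```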
The easiest way to see this in action is to go into your developer account and set up subscriptions for contact property changes. Create subscriptions for `firstname`, `lastname`, `faction_rank`, and `email`. Go into any of these subscriptions and test it by entering the URL where you hosted the app plus the route `/webhook/platform`. When you click the “Test” button, you should see a “200 OK” response from the `hubspot_service`.
Processing webhooks
One of the keys to building a successful integration that uses webhooks is that you shouldn’t attempt to process them before sending the response back to HubSpot. In other words, the processing should be handled asynchronously. The only thing you need to do in the webhook handler is add the webhook payload to the Kafka queue.
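In sketch form, assuming the kafkajs producer from the setup above (names are illustrative):

```js
// hubspot_service: acknowledge the webhook immediately and queue the events
const express = require('express');
const { producer, WEBHOOK_TOPIC } = require('./webhook'); // hypothetical path

const router = express.Router();

router.post('/webhook/platform', async (req, res) => {
  // Each payload can contain several events, so map over the body
  const messages = req.body.map((event) => ({ value: JSON.stringify(event) }));

  await producer.send({ topic: WEBHOOK_TOPIC, messages });

  // Respond right away; processing happens asynchronously via Kafka
  res.sendStatus(200);
});

module.exports = router;
```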
While the code above is fairly straightforward, one thing worth noting is that you need to map over the request body to get the actual events the webhook is sending you. This is because each webhook payload can contain multiple events, which helps conserve resources on both ends.
Webhook events don’t always arrive in the order they were generated. Because of this, you can’t blindly apply updates you get from HubSpot; you first need to check the timestamp that comes with each event against your own database. There are many ways to handle this, and the right answer depends on your exact setup. In this case, you’re going to use a new schema called UserHistory, which will track the history of the properties you’ve set up webhook subscriptions for.
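A sketch of that schema, assuming a Mongoose model (the field names are assumptions; the key point is storing a per-property timestamp you can compare against incoming events):

```js
// web_service: track when each watched property last changed, per user
const mongoose = require('mongoose');

const userHistorySchema = new mongoose.Schema({
  userId: { type: mongoose.Schema.Types.ObjectId, ref: 'User' },
  propertyName: String, // e.g. firstname, lastname, faction_rank, email
  value: String,        // the most recent value stored for that property
  changedAt: Date,      // timestamp of the change that produced this value
});

module.exports = mongoose.model('UserHistory', userHistorySchema);
```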
In the `web_service`, you can now consume the Kafka messages.
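A sketch of that consumer, again assuming kafkajs and illustrative names:

```js
// web_service: consume webhook events from Kafka and hand them to userHandler
const { Kafka } = require('kafkajs');
const { userHandler } = require('./Users.webhook');

const kafka = new Kafka({
  clientId: 'web_service',
  brokers: [process.env.KAFKA_BROKER],
});

const consumer = kafka.consumer({ groupId: 'web_service' });

const run = async () => {
  await consumer.connect();
  await consumer.subscribe({ topics: [process.env.KAFKA_WEBHOOK_TOPIC] });
  await consumer.run({
    // Each Kafka message is one HubSpot event; pass it off to the handler
    eachMessage: async ({ message }) => {
      await userHandler(JSON.parse(message.value.toString()));
    },
  });
};

run().catch(console.error);
```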
This sets up a very simple Kafka client which passes each message off to a `userHandler` function defined in `./web_service/src/Users.webhook.js`. This handler determines whether the incoming message contains new or stale information.
First, the basic logic is to check if the change source of the webhook is API. If so, it was probably triggered by this app and can be ignored. You may want to handle this differently depending on how you view your app’s interactions with other applications.
Next, it checks whether the property that came through the webhook is one the app actually cares about. This guards against extra webhook subscriptions that may have been set up in the developer account. From there, it checks whether the value has changed. If it has, the handler checks the timestamp of that change and only saves it if the webhook carries more up-to-date information than the database.
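Putting those checks together, the handler might look roughly like this; the model names and HubSpot event fields (`changeSource`, `propertyName`, `propertyValue`, `occurredAt`, `objectId`) are assumptions based on the description above:

```js
// web_service/src/Users.webhook.js (sketch): decide whether a webhook event
// carries newer information than what's already in the database
const UserHistory = require('./models/UserHistory'); // hypothetical path
const User = require('./models/User');               // hypothetical path

const WATCHED_PROPERTIES = ['firstname', 'lastname', 'faction_rank', 'email'];

const userHandler = async (event) => {
  // Ignore changes this app made itself via the API
  if (event.changeSource === 'API') return;

  // Ignore properties the app doesn't care about
  if (!WATCHED_PROPERTIES.includes(event.propertyName)) return;

  const user = await User.findOne({ hubspotId: event.objectId });
  if (!user) return;

  // Skip if the value hasn't actually changed
  if (user[event.propertyName] === event.propertyValue) return;

  // Only apply the change if the webhook is newer than our last recorded change
  const lastChange = await UserHistory.findOne({
    userId: user._id,
    propertyName: event.propertyName,
  }).sort({ changedAt: -1 });

  if (lastChange && lastChange.changedAt >= new Date(event.occurredAt)) return;

  user[event.propertyName] = event.propertyValue;
  await user.save();

  await UserHistory.create({
    userId: user._id,
    propertyName: event.propertyName,
    value: event.propertyValue,
    changedAt: new Date(event.occurredAt),
  });
};

module.exports = { userHandler };
```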
To help keep things manageable, you will want to refactor your app with a utility file.
You now have a fully functioning app and source code that matches this branch.
Conclusion
You now have an app that uses several best practices when creating a two-way sync with HubSpot. You can use different languages and technologies, but the basic ideas should apply to any project. HubSpot has also created sample apps in other languages showing individual concepts in PHP, Node.js, Ruby and Python.