
Video topic extraction and transcription Posted 1 year ago

This is a quick tutorial on how to transcribe and extract topics from videos using the Yactraq Speech2Topic API in Mashape.

(Note: Yactraq is giving away $500 to the winner of the Yactraq category at the upcoming Mashape April Hackathon, Apr 6th, 2013.  We require a minimum number of teams to submit to this category to unlock the prize :))

This is particularly useful if you’re trying to extract context from a video, which can then be used to query ad services and pull relevant content.


Face Recognition using Javascript and Mashape Posted 1 year ago

This is a tutorial on how you can use the Face Recognition API in a web application using Javascript.  The Face Recognition API makes it ridiculously easy to add face recognition capabilities to your app, whether it’s web, mobile, etc.  We will use photobooth.js to take pictures from our webcam. (Make sure you have one on your machine or plug one in.)

There are 2 main things to remember when using the Face Recognition API.  You need to:

A.  “Train” the Face Recognition API, which means uploading a bunch of pictures that will constitute the “database” from which pictures will be recognized.

B.  “Recognize” a picture by uploading it to the Face Recognition API.

These will be the two main parts of this tutorial.  Let’s get started!

Train the Face Recognition API

1.  Get a Mashape account and key.  If you don’t have an account yet, you can sign up here.  You need a Mashape account because we will use Mashape’s Test Console to upload pictures to Face Recognition to build our database.  Once you have signed up, you will be taken to your Mashape dashboard, where you will have access to your Mashape keys, as in the picture below:


2.  Go to the Face Recognition API page.  This page shows the API endpoints on the left, and their corresponding documentation and test console to the right.  

What we’re interested in right now is creating an album where we can upload pictures to “train” the service.  You can create an album through the endpoint called “Create Album”.  You can click here to navigate directly to it on the page.

Think of a unique album name and hit “Test Endpoint”.  This will call the endpoint and return a response similar to below:


The response above indicates that we have successfully created an album in Face Recognition, and that we should keep the values album and albumkey, because we will be using them throughout the tutorial to refer to this album.  (Note: It’s probably obvious by now, but everything is hosted in the cloud - no need to maintain an on-premise setup).
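If you’re scripting this instead of using the Test Console, pulling those two values out of the JSON response is all there is to it.  A minimal sketch, assuming field names that mirror the album/albumkey parameters (the real response body may differ):

```javascript
// Hypothetical Create Album response body; the field names ("album",
// "albumkey") are assumptions mirroring the parameters used by later endpoints.
const responseText = '{"album": "my-unique-album", "albumkey": "abc123"}';

const res = JSON.parse(responseText);
const album = res.album;       // keep this...
const albumkey = res.albumkey; // ...and this -- every later call needs both

console.log(album, albumkey);
```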

3. Let’s now start “training” this album using the Train Album endpoint.  As you can see, one of the parameters in the Train Album endpoint requires us to upload pictures:


To take pictures with our webcam, we will use photobooth.js.  Click here to access the demo page where we will take our pictures.  We will also save the pictures to our drive so we can use them to upload/train the album later.

Note that photobooth.js will ask for our permission to use the webcam.  Just click “Allow”.


To take pictures, just click the camera icon to the right of the photo canvas.


It will take a picture every time you click.  Take three pictures of yourself and save them to your drive by right-clicking and choosing “Save Image As”.  Then take a picture of a buddy as well; we need a second person to differentiate the “entryid” values in our album, as required by the Train Album endpoint.

Once you’re done taking pictures, upload them to the Train Album endpoint by filling out the required parameters, changing the “entryid” field for each person we want to upload (e.g. entryid “Chris1” for Chris’ pictures and another entryid “Hazel1” for Hazel’s pictures).


You can only upload one picture per call, so repeat this step for each picture.

(Note: You can also provide URLs pointing to your images.  For this tutorial we’ll use fresh hot pictures instead.  The API works well if you have a variety of pictures from different people, so as to provide more contrast when recognizing pictures later.

Also, remember that everything you do in the Test Console can be programmatically accessed using Mashape’s different client libraries.)
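As a sketch of what that programmatic access looks like for a Train Album upload, here is how the multipart form could be assembled.  The field names are assumptions modeled on the Test Console parameters; check the API page for the real ones:

```javascript
// Assemble the multipart body for one Train Album upload.
// Field names ("album", "albumkey", "entryid", "files") are assumptions
// modeled on the Test Console parameters, not the documented schema.
function buildTrainForm(album, albumkey, entryid, fileBlob) {
  const fd = new FormData();
  fd.append("album", album);
  fd.append("albumkey", albumkey);
  fd.append("entryid", entryid); // one entryid per person, e.g. "Chris1"
  fd.append("files", fileBlob);  // one picture per call
  return fd;
}

const picture = new Blob(["fake-image-bytes"], { type: "image/jpeg" });
const form = buildTrainForm("my-unique-album", "abc123", "Chris1", picture);
console.log(form.get("entryid")); // -> "Chris1"
```

Call it once per picture, switching the entryid when you switch people.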

Here’s a response example when you upload a picture:


Once you’ve done this for all your pictures, it’s time to “Rebuild” the album.

4. The "Rebuild Album" endpoint only requires your album and albumkey.  It will also verify that you have uploaded enough pictures and entries to perform recognition.  Hit “Test Endpoint” to rebuild the album.  A successful response looks like this:
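Programmatically, the rebuild is just a POST carrying the same two values.  A hedged sketch - the endpoint URL and header name here are placeholders; take the real ones from the Rebuild Album page:

```javascript
// Build the query string for a Rebuild Album call.
function rebuildParams(album, albumkey) {
  return new URLSearchParams({ album, albumkey }).toString();
}

const qs = rebuildParams("my-unique-album", "abc123");

// A real call would look roughly like this; the URL is a placeholder and the
// request only fires when a key is actually configured.
if (typeof fetch === "function" && process.env.MASHAPE_KEY) {
  fetch("https://example-facerec.p.mashape.com/rebuild?" + qs, {
    method: "POST",
    headers: { "X-Mashape-Authorization": process.env.MASHAPE_KEY },
  }).then((r) => r.json()).then(console.log);
}

console.log(qs);
```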


Let’s now move on to the second main section of this tutorial: recognizing a picture based on the album we have created in Face Recognition.

Recognize a Face using the Face Recognition API

In this section we will refer to a web app we have put up and explain how it works.  You can access it here.


(Screenshot of the Face Recognition sample code)

The main parts of this section delve into 1) taking a webcam picture using photobooth.js, and 2) uploading the picture to the Recognize endpoint.  Let’s lay out the source code here so we can easily refer to each line of code when explaining the two steps above.
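Since the listing lives on jsfiddle, here is a reconstruction sketched from the line-by-line notes below; the photobooth.js event wiring and the endpoint URL are assumptions, so treat it as a guide rather than the exact sample (its line numbers won’t match the notes exactly):

```javascript
// Reconstruction sketch; the photobooth.js "image" event and the endpoint
// URL below are assumptions, not the exact jsfiddle sample.

// Convert the "data url" photobooth returns into a binary blob fit for uploading.
function dataURLtoBlob(dataUrl) {
  var parts = dataUrl.split(",");
  var mime = parts[0].match(/:(.*?);/)[1];
  var bytes = atob(parts[1]);
  var arr = new Uint8Array(bytes.length);
  for (var i = 0; i < bytes.length; i++) arr[i] = bytes.charCodeAt(i);
  return new Blob([arr], { type: mime });
}

function initApp() {
  $(function () {                                  // run when the page is "ready"
    $("#photo").photobooth().on("image", function (event, dataUrl) {
      var blob = dataURLtoBlob(dataUrl);           // data url -> binary blob
      if (!blob || blob.size === 0) return;        // make sure we got an image

      var fd = new FormData();                     // Recognize parameters
      fd.append("album", "my-unique-album");
      fd.append("albumkey", "abc123");
      fd.append("files", blob);

      $.ajax({                                     // call the Recognize endpoint
        url: "https://example-facerec.p.mashape.com/recognize", // placeholder URL
        type: "POST",
        data: fd,
        processData: false,                        // let the browser send FormData as-is
        contentType: false,
        headers: { "X-Mashape-Authorization": "YOUR-MASHAPE-KEY" },
        success: function (result) {               // display the response
          alert(JSON.stringify(result));
        },
      });
    });
  });
}

// jQuery and photobooth.js only exist in the browser.
if (typeof $ !== "undefined") initApp();
```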

Line 1: Use jQuery to initialize everything when the web page is “ready”.

Line 3:  We use the photobooth.js library to access the webcam with HTML5.  This single line instantiates the photobooth object on the “#photo” element and sets up the event handler that executes the code inside the anonymous function whenever the camera icon is clicked.

Lines 4 and 12:  photobooth returns a “data url” that represents the image we took with the camera.  We need to convert this into a binary blob, a format fit for uploading.

Lines 5-6:  We just make sure that we actually got an image and that its size is not zero.

Lines 7-28:  We upload the picture to the Recognize endpoint.

Lines 29-33:  This sets up a FormData object that holds the parameters needed by the Recognize endpoint.  These are the same parameters you see in the Test Console for the Recognize endpoint.

Lines 35-43:  We set up a jQuery “ajax” call to the Recognize endpoint with the required parameters:

- URL (the Face Recognition API endpoint in Mashape), type (HTTP POST), data (the form data parameters), headers (Mashape auth headers).

Lines 44-48:  When we get a response back from the endpoint, we display it with an alert.

The rest of the code required for this application (HTML, CSS) can be viewed in jsfiddle here.


The application is straightforward: you need to plug in your Mashape key and two other parameters from Face Recognition, album and album key.  Whenever you take a picture, it gets uploaded to the Face Recognition Recognize endpoint, and we display the result back.

A response would look similar to this:

As you can see, the response includes coordinates for the eyes, nose, and mouth, attributes such as gender and smile, and a recognition result based on the pictures that were “trained”.  Obviously, the more pictures you have (of you and other people), the better the recognition.
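A sketch of digging the best match out of such a response.  The JSON shape below is an assumption based on the fields described above (coordinates, attributes, recognized entries), not the API’s documented schema:

```javascript
// Hypothetical Recognize response; the structure is an assumption sketched
// from the fields the response is described as containing.
const sampleResponse = {
  photos: [{
    tags: [{
      eye_left: { x: 110, y: 92 },
      eye_right: { x: 150, y: 90 },
      attributes: { gender: { value: "male" }, smiling: { value: true } },
      uids: [
        { uid: "Chris1@my-unique-album", confidence: 61 },
        { uid: "Hazel1@my-unique-album", confidence: 12 },
      ],
    }],
  }],
};

// Pick the recognized entry with the highest confidence.
function bestMatch(response) {
  const uids = response.photos[0].tags[0].uids;
  return uids.reduce((a, b) => (b.confidence > a.confidence ? b : a));
}

console.log(bestMatch(sampleResponse).uid); // -> "Chris1@my-unique-album"
```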

We hope that this tutorial helps you understand the Face Recognition API better.  We’d love to see your own Face Recognition projects.  Send us an email!

Mashape Sample Code for executing an AJAX request using jQuery Posted 1 year ago

The code snippet below is CORS-enabled, so you don’t have to worry about cross-domain requests.
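A minimal sketch of such a call, with a placeholder API URL and key; the settings object is split out so you can reuse it across endpoints:

```javascript
// Build a reusable jQuery AJAX settings object for a Mashape API call.
// The URL and key are placeholders; plug in your own.
function mashapeRequest(url, key, onDone) {
  return {
    url: url,
    type: "GET",
    headers: { "X-Mashape-Authorization": key }, // Mashape auth header
    success: onDone,
  };
}

var settings = mashapeRequest(
  "https://example-api.p.mashape.com/status",
  "YOUR-MASHAPE-KEY",
  function (data) { console.log(data); }
);

// Because the API is CORS-enabled, this plain cross-domain call just works.
// (jQuery only exists in the browser, so the call is guarded here.)
if (typeof $ !== "undefined") $.ajax(settings);
```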

Calling Mashape APIs in Node.js through REST Posted 1 year ago

UPDATE (Oct 28, 2013): The post below is now outdated.  To consume Mashape APIs through Node.js, we recommend using Unirest for Node.js.

UPDATE (Feb 27, 2013):  Please note that we have improved the way authentication keys are handled in Mashape.  The header value below will be replaced by either the new Testing or Production Keys.  Read this post to learn more.

Hi guys, here’s a short example of how you can call Mashape APIs through Node.js / REST.  It’s pretty straightforward.  Just remember that you need to make a secure call over https://, generate a Mashape Authorization header, and plug it into the headers parameter.

(You can download this source code from Github)

Then run the script with node, and you would get a nice JSON reply that you can start parsing.

If you’re new to Node.js, you can check out their site, where you can get either the source code or binaries for Node.js.

We’d like to invite node.js developers out there to share with us any applications that you are planning or have already created using the APIs in Mashape.  There are tons of APIs to try!  You can head over to our Facebook page if you have questions.

Happy coding!