Watson Hands On Labs - Visual Recognition

During this lab, you will use the Visual Recognition service to train a classifier and recognize images.

You can see a version of this app that is already running here.

So let’s get started. The first thing to do is to build out the shell of our application in Bluemix.

Creating an IBM Bluemix Account

  1. Go to https://bluemix.net/
  2. Create a Bluemix account if required.
  3. Log in with your IBM ID (the ID used to create your Bluemix account).


Note: The confirmation email from Bluemix may take up to 1 hour to arrive.

Deploy this sample application in Bluemix

  1. Clone the repository into your computer and navigate to the new directory.

    git clone https://github.com/watson-developer-cloud/visual-recognition-nodejs.git
    cd visual-recognition-nodejs
    
  2. Sign up in Bluemix or use an existing account.

  3. If it is not already installed on your system, download and install the Cloud Foundry CLI tool.

  4. Edit the manifest.yml file in the folder that contains your code and replace visual-recognition with a unique name for your application. The name that you specify determines the application's URL, such as your-application-name.mybluemix.net. The relevant portion of the manifest.yml file looks like the following:

    applications:
    - name: visual-recognition-demo
      command: npm start
      path: .
      memory: 512M
      env:
        NODE_ENV: production
  5. Connect to Bluemix by running the following commands in a terminal window:

    cf api https://api.ng.bluemix.net
    cf login
  6. Create and retrieve service keys to access the Visual Recognition service by running the following commands:

    cf create-service watson_vision_combined free visual-recognition-service
    cf create-service-key visual-recognition-service myKey
    cf service-key visual-recognition-service myKey
  7. Provide the credentials from step 6 to the application by creating a .env file using this format (a sketch of how the app can read this key follows this list):

    VISUAL_RECOGNITION_API_KEY=<your-api-key>
  8. Install the dependencies your application needs:

    npm install
  9. Start the application by running:

    npm start
  10. Test your application locally by going to http://localhost:3000/
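
For reference, here is a minimal sketch of how a Node.js app like this one can pick up the key from the .env file. It assumes the dotenv package and the watson-developer-cloud Node SDK; the exact variable and file names in the starter app's own code may differ.

    // Sketch only: load .env into process.env and build a Visual Recognition client.
    // Assumes the dotenv package and the watson-developer-cloud Node SDK.
    require('dotenv').config();

    var VisualRecognitionV3 = require('watson-developer-cloud/visual-recognition/v3');

    var visualRecognition = new VisualRecognitionV3({
      api_key: process.env.VISUAL_RECOGNITION_API_KEY, // the key from your .env file
      version_date: '2016-05-20'
    });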

Deploying your application to Bluemix

  1. Push the updated application live by running the following command:

    cf push

After completing the steps above, you are ready to test your application. Start a browser and enter the URL of your application.

    <your-application-name>.mybluemix.net

You can also find your application name when you click on your application in Bluemix.

Classifying Images in the Starter Application

The application is composed of two sections: a "Try" section and a "Train" section. The Try section allows you to send an individual image to the Visual Recognition service to be classified.

Test out the existing service by selecting one of the provided images or pasting a URL for an image of your choice. You will see the service respond with a collection of recognized attributes about the image.
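
Behind the scenes, the Try panel calls the service's classify operation. As a rough illustration only (not the starter app's actual code), classifying an image by URL with the watson-developer-cloud Node SDK looks something like the sketch below, reusing the visualRecognition client from the earlier sketch; the image URL is a placeholder.

    // Sketch only: classify a publicly reachable image against the built-in classifiers.
    visualRecognition.classify({
      url: 'https://example.com/some-image.jpg' // placeholder URL
    }, function(err, result) {
      if (err) {
        console.error(err);
      } else {
        console.log(JSON.stringify(result, null, 2)); // classes with confidence scores
      }
    });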

Next, try running an image of a fruitbowl through the classifier by pasting its URL into the "Try" panel.

You'll see that the service recognizes some general attributes of the image, but we want it to specifically recognize the image as a fruitbowl. To do that, we will need to train a custom classifier.

Training a Custom Classifier in the Starter App

Navigate over to the "Train" window in the application.

Here, you will see a collection of training sets that have been provided for you. If you select any one of these, you will see that set expand to show a series of classes that will be trained, as well as negative examples of that group. For example, the Dog Breeds classifier contains 4 classes of dogs to be identified, as well as a negative example data set of Non-dogs.

To train the service to specifically classify a fruitbowl, we are going to use two collections of images to teach Watson what to recognize when classifying a fruitbowl. Click on the "Use your Own" box, and afterward a series of boxes will appear to allow you to upload .zip files for the classes.

Download and select the following .zip files for the classifier:

Once the two .zip files are uploaded, name the classifier "fruitbowl" and select the "Train your classifier" button.
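
For reference, the "Train your classifier" button corresponds to the service's create-classifier operation. A minimal sketch with the watson-developer-cloud Node SDK might look like the following; the .zip file names are placeholders, and the client is the one constructed in the earlier sketch.

    var fs = require('fs');

    // Sketch only: create a custom classifier named "fruitbowl" from positive
    // and negative example .zip files (file names are placeholders).
    visualRecognition.createClassifier({
      name: 'fruitbowl',
      fruitbowl_positive_examples: fs.createReadStream('./fruitbowl.zip'),
      negative_examples: fs.createReadStream('./not-fruitbowl.zip')
    }, function(err, classifier) {
      if (err) {
        console.error(err);
      } else {
        console.log(JSON.stringify(classifier, null, 2)); // includes the new classifier_id
      }
    });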

The classifier may take a couple of minutes to train. Once training is complete, the application will update to let you submit new images against that classifier. If you submit the original image through the new prompt on the "Train" window, you will see that it is now specifically classified based on your new training!
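
Once training finishes, the same classify call can be pointed at the new classifier, assuming the SDK accepts a classifier_ids list in the classify parameters; the classifier id below is a placeholder for the id returned by the training step.

    // Sketch only: classify an image against the custom classifier.
    visualRecognition.classify({
      url: 'https://example.com/fruitbowl.jpg',  // placeholder image URL
      classifier_ids: ['fruitbowl_1234567890']   // placeholder id from createClassifier
    }, function(err, result) {
      console.log(err || JSON.stringify(result, null, 2)); // should now include the "fruitbowl" class
    });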

Congratulations

You have completed the Visual Recognition Lab! :bowtie:
