Android client library to assist with using the Watson Developer Cloud services, a collection of REST APIs and SDKs that use cognitive computing to solve complex problems.
Add the SDK as a dependency in your `build.gradle`:

```gradle
compile 'com.ibm.watson.developer_cloud:android-sdk:0.1.0'
```
Download the jar with dependencies here.
Now, you are ready to see some examples.
The examples below assume that you already have service credentials. If not, you will have to create a service in Bluemix. You will need the `username` and `password` (`api_key` for AlchemyAPI) credentials for each service. Service credentials are different from your Bluemix account username and password.
To get your service credentials, follow these steps:
1. Log in to Bluemix at https://bluemix.net.
2. Create an instance of the service:
   - In the Bluemix Catalog, select the service you want to use.
   - Under **Add Service**, type a unique name for the service instance in the **Service name** field. For example, type `my-service-name`. Leave the default values for the other options.
   - Click **Create**.
3. Copy your credentials:
   - On the left side of the page, click **Service Credentials** to view your service credentials.
   - Copy `username` and `password` (`api_key` for AlchemyAPI).
If you are having difficulties using the APIs or have a question about the IBM Watson Services, please ask a question on dW Answers or Stack Overflow.
The java-sdk is quite large. If you encounter an error similar to `ClassNotFoundException: Didn't find class on path: DexPathList` or `Error converting bytecode to dex`, it's possible you have exceeded the dex method limit. To solve this, you can enable Multidex support. Information about enabling Multidex support can be found on the Android Developers site.
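As a sketch, enabling multidex typically starts with a single flag in the module's `build.gradle` (this is standard Android tooling, not part of this SDK; apps targeting older API levels also need the multidex support library):

```gradle
android {
    defaultConfig {
        // Allows the app to exceed the 64K method limit of a single dex file.
        multiDexEnabled true
    }
}
```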
This SDK is built for use with the java-sdk. The examples below are specific to Android, as they use the device's microphone and speaker.
Use the Speech to Text service to recognize the text from a .wav file.
```java
SpeechToText service = new SpeechToText();
service.setUsernameAndPassword("<username>", "<password>");

File audio = new File("src/test/resources/sample1.wav");
SpeechResults transcript = service.recognize(audio, HttpMediaType.AUDIO_WAV).execute();
System.out.println(transcript);
```
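The content type passed to `recognize` needs to match the audio file. As a quick sanity check, a small stdlib helper (hypothetical, not part of the SDK) can verify that a file really starts with the `RIFF`/`WAVE` magic of a .wav file:

```java
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;

public class WavCheck {
  // Returns true if the file starts with the "RIFF....WAVE" magic of a WAV file.
  static boolean looksLikeWav(File f) throws Exception {
    try (DataInputStream in = new DataInputStream(new FileInputStream(f))) {
      byte[] header = new byte[12];
      if (in.read(header) < 12) return false;
      String riff = new String(header, 0, 4, "US-ASCII");
      String wave = new String(header, 8, 4, "US-ASCII");
      return riff.equals("RIFF") && wave.equals("WAVE");
    }
  }

  public static void main(String[] args) throws Exception {
    // Write a minimal fake WAV header to a temp file to demonstrate the check.
    File f = File.createTempFile("sample", ".wav");
    try (FileOutputStream out = new FileOutputStream(f)) {
      out.write("RIFF".getBytes("US-ASCII"));
      out.write(new byte[]{0, 0, 0, 0}); // chunk size placeholder
      out.write("WAVE".getBytes("US-ASCII"));
    }
    System.out.println(looksLikeWav(f)); // prints "true"
    f.delete();
  }
}
```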
Speech to Text supports WebSocket connections; the URL is:

`wss://stream.watsonplatform.net/speech-to-text/api/v1/recognize`
```java
SpeechToText service = new SpeechToText();
service.setUsernameAndPassword("<username>", "<password>");

File audio = new File("src/test/resources/sample1.wav");

RecognizeOptions options = new RecognizeOptions();
options.continuous(true).interimResults(true).contentType(HttpMediaType.AUDIO_WAV);

service.recognizeUsingWebSocket(audio, options, new BaseRecognizeCallback() {
  @Override
  public void onTranscription(SpeechResults speechResults) {
    System.out.println(speechResults);
  }
});

// wait 20 seconds for the asynchronous response
Thread.sleep(20000);
```
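`Thread.sleep` is fine for a demo, but a more robust pattern is to wait on a latch that the callback releases when the final result arrives. A minimal sketch using only the JDK (the background thread below stands in for the Watson callback):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchDemo {
  public static void main(String[] args) throws InterruptedException {
    CountDownLatch done = new CountDownLatch(1);
    final String[] transcript = new String[1];

    // Simulates the asynchronous onTranscription callback delivering a result.
    new Thread(() -> {
      transcript[0] = "hello world";
      done.countDown();
    }).start();

    // Wait up to 20 seconds, but return as soon as the callback fires.
    if (done.await(20, TimeUnit.SECONDS)) {
      System.out.println(transcript[0]); // prints "hello world"
    }
  }
}
```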
##MicrophoneInputStream Convenience class for creating an `InputStream` from the device microphone.

```java
InputStream myInputStream = new MicrophoneInputStream();
```
An example using a Watson Developer Cloud service would look like:

```java
speechService.recognizeUsingWebSocket(new MicrophoneInputStream(),
    getRecognizeOptions(), new BaseRecognizeCallback() {
  @Override
  public void onTranscription(SpeechResults speechResults) {
    String text = speechResults.getResults().get(0).getAlternatives().get(0).getTranscript();
    System.out.println(text);
  }

  @Override
  public void onError(Exception e) {
  }

  @Override
  public void onDisconnected() {
  }
});
```
##StreamPlayer Provides the ability to directly play an `InputStream`:

```java
StreamPlayer player = new StreamPlayer();
player.playStream(yourInputStream);
```
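Because `playStream` consumes an `InputStream`, a producer can feed it audio incrementally as bytes arrive. The underlying piping pattern can be sketched with the JDK alone (the reader below just counts bytes where `StreamPlayer` would play them):

```java
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PipeDemo {
  public static void main(String[] args) throws Exception {
    PipedOutputStream producer = new PipedOutputStream();
    PipedInputStream consumer = new PipedInputStream(producer);

    // Producer thread: writes "audio" chunks as they become available.
    Thread writer = new Thread(() -> {
      try {
        for (int i = 0; i < 3; i++) {
          producer.write(new byte[1024]); // 1 KB chunk
        }
        producer.close(); // signals end of stream to the reader
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    });
    writer.start();

    // Consumer: reads until the stream ends (a player would consume it the same way).
    byte[] buf = new byte[512];
    int total = 0, n;
    while ((n = consumer.read(buf)) != -1) {
      total += n;
    }
    writer.join();
    System.out.println(total); // prints "3072"
  }
}
```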
Use the Text to Speech service to get the available voices to synthesize:

```java
TextToSpeech service = new TextToSpeech();
service.setUsernameAndPassword("<username>", "<password>");

List<Voice> voices = service.getVoices().execute();
System.out.println(voices);
```
##CameraHelper Provides simple camera access within an activity.

```java
CameraHelper cameraHelper = new CameraHelper(this);
cameraHelper.dispatchTakePictureIntent();
```

When the picture has been taken, retrieve the file in `onActivityResult`:

```java
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
  super.onActivityResult(requestCode, resultCode, data);
  if (requestCode == CameraHelper.REQUEST_IMAGE_CAPTURE) {
    System.out.println(cameraHelper.getFile(resultCode));
  }
}
```
###Visual Recognition Use the Visual Recognition service to recognize a picture:

```java
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
  super.onActivityResult(requestCode, resultCode, data);
  if (requestCode == CameraHelper.REQUEST_IMAGE_CAPTURE) {
    File image = cameraHelper.getFile(resultCode);
    VisualClassification result = visualRecognitionService.classify(image).execute();
    System.out.println(result);
  }
}
```
##GalleryHelper Like the CameraHelper, but allows for selection of images already on the device.

To open the gallery:

```java
GalleryHelper galleryHelper = new GalleryHelper(this);
galleryHelper.dispatchGalleryIntent();
```

When an image has been selected, retrieve the file in `onActivityResult`:

```java
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
  super.onActivityResult(requestCode, resultCode, data);
  if (requestCode == GalleryHelper.PICK_IMAGE_REQUEST) {
    System.out.println(galleryHelper.getFile(resultCode));
  }
}
```
To build and test the project, use Gradle (version 1.x):

```sh
$ cd android-sdk
$ gradle jar   # build jar file (build/libs/android-sdk-0.1.0.jar)
$ gradle test  # run tests
```
Find more open source projects on the IBM Github Page.
This library is licensed under Apache 2.0. Full license text is available in LICENSE.
See CONTRIBUTING.md.