Code Current: Watson Developer Cloud — App Development in the Cognitive Era

Chris Ackerson
4 min read · Mar 20, 2017

Over the weekend, I came across the video for a class I taught almost a year ago on Cognitive Computing. Wow, time flies.

App Development in the Cognitive Era

The goal of this introductory class was threefold: (1) define cognitive computing (trying to move away from the buzzword), (2) introduce machine learning and machine perception concepts to a non-technical audience, and (3) highlight machine learning best practices that we've learned through building apps with Watson Ecosystem partners, all wrapped in as many demos as possible.

The first thing I concluded is that I would have been better off breaking this into a bunch of 7–10 minute segments. An hour-long class is tough to get through on YouTube, and I think I lost steam myself towards the end :)

In addition, while much of the content is still relevant, I was surprised by how much the Watson Developer Cloud has evolved in just the past year. So I wanted to put some updates here:

What is a Cognitive System?

In the class, we defined a cognitive system as one that:

  • makes observations of the world
  • makes choices based on those observations
  • is evaluated based on those choices
  • and learns from experience

The key, I said, is to think about what we mean by "making choices based on observations." If by "observation" we mean an input and output of the system, then in a traditional programmatic system we start with an input and define a set of logic that produces a desired output. What makes cognitive computing unique is that we start with a set of inputs and desired outputs, and the cognitive system "learns" the logic itself so it can make choices on new inputs.
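To make that contrast concrete, here is a minimal sketch in Python. The ticket-routing task, the toy data, and the model choice are illustrative assumptions on my part, not something from the class:

```python
# Traditional programming: we write the logic that maps input to output.
def route_ticket(text):
    if "refund" in text.lower():
        return "billing"
    if "password" in text.lower():
        return "support"
    return "general"

# Cognitive approach: we supply (input, desired output) pairs and the
# system learns the mapping itself, then applies it to new inputs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

examples = ["I want my money back", "I can't log in to my account",
            "I was charged twice", "please reset my password"]
labels = ["billing", "support", "billing", "support"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(examples, labels)                      # "learn" the logic
print(model.predict(["refund my last order"]))   # choose on a new input
```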

I think this is still a good starting point for cognitive systems, but it’s a bit mechanical and lacking a key insight from implementing these systems in practice.

I saw David Ferrucci speak recently at an AI meetup in New York City. Dr. Ferrucci is a world-renowned AI expert and led development of the original Watson system; he has since left IBM. In his talk, he defined "Intelligent Systems," and while his definition looked similar to the one above, he added an additional criterion: the system must be explicative, meaning it is able to explain why it made its choice. And how do we judge this? If a human being is willing to take responsibility for the choice of the system, then and only then is it intelligent.

I really like this definition because of what we've seen implementing Watson technology in the real world. Take healthcare, for example: if a cognitive system can diagnose and recommend treatments with very high precision, it could be a fantastic clinical decision-support tool. But consider a case where the cognitive system makes a recommendation that a team of physicians did not expect. Without an explanation of how it came to its conclusion, the team could not reasonably accept the recommendation. They need to see the logical path through the patient's medical history, test results, medical literature, etc. that led to the conclusion. In other words, deep learning alone is not a panacea (no pun intended).

So, as Dr. Ferrucci explains, for a system to be truly "cognitive" or "intelligent," it needs to provide evidence that explains how it made its choice, and a human should be willing to accept responsibility for that choice as they integrate it into their ultimate judgment. The future is one where human experts work alongside cognitive systems, and deep trust needs to be established to maximize collective effectiveness. What better way to establish trust than explaining your "thought process"?

Application Frameworks

The huge change for Watson Developer Cloud over the past year has been the announcement and release of the Conversation and Discovery services. In my class I talked about frameworks as collections of Watson APIs (and other capabilities) orchestrated to deliver on common use cases. I highlighted four:

  • Engagement — Applications that can interact with humans in natural language
  • Discovery — Applications that extract signal from unstructured data
  • Exploration — Applications that organize unstructured data
  • Decision — Applications that support the decision-making process

While each of these use cases is important in its own right, there is definite overlap. As we continued to gather feedback from users to enhance Watson's capabilities, we learned two things:

  1. We can speed up development with Watson if we orchestrate these services ourselves and provide a UI to configure them
  2. Those four frameworks can be simplified to two if we provide the right product features and design

The result of that evolution is the Conversation and Discovery services. Conversation can be more than just a chat engagement interface — through natural language, Conversation supports decision making (walk me through purchasing my health insurance) and exploration of data (show me the top sales regions for widget x last quarter). The same goes for Discovery — by extracting meaningful signals with Watson Natural Language Understanding and exposing those signals through a query interface or visualization, Discovery applications can support decision making and exploration of large data sets. Even more exciting, over time the Discovery service will add advanced analytics that operate on those signals, plus additional enrichments for unstructured data types beyond text (think images, audio, and video files).
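To give a feel for that query interface, here is a minimal sketch of hitting the Discovery REST endpoint with Python's requests library. The credentials, environment and collection IDs, enrichment field path, and version date are placeholders and assumptions on my part; check the current API reference for exact names:

```python
# A minimal sketch of querying the Discovery service over REST.
# All IDs, credentials, and the version date below are placeholders.
import requests

DISCOVERY_URL = ("https://gateway.watsonplatform.net/discovery/api/v1"
                 "/environments/{env}/collections/{coll}/query")

resp = requests.get(
    DISCOVERY_URL.format(env="YOUR_ENVIRONMENT_ID",
                         coll="YOUR_COLLECTION_ID"),
    params={
        "version": "2016-12-01",
        # Discovery Query Language: filter on entities Watson extracted
        "query": 'enriched_text.entities.text:"widget x"',
        "count": 5,
    },
    auth=("YOUR_USERNAME", "YOUR_PASSWORD"),  # service credentials
)
for doc in resp.json().get("results", []):
    print(doc.get("title"), doc.get("score"))
```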

I gave an example of an Engagement application that was built from Natural Language Classifier, Alchemy Entity Extraction, and Dialog. Now a developer can build the same application using just the Conversation service, which orchestrates all of those features and more.

[Image: Watson Conversation UI]
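Concretely, a single message call to Conversation returns the intent classification, the extracted entities, and the dialog response in one round trip. Here is a minimal sketch against the REST endpoint; the workspace ID, credentials, and version date are placeholders:

```python
# A minimal sketch of one conversation turn over REST.
# Workspace ID, credentials, and the version date are placeholders.
import requests

CONVERSATION_URL = ("https://gateway.watsonplatform.net/conversation/api/v1"
                    "/workspaces/{ws}/message")

resp = requests.post(
    CONVERSATION_URL.format(ws="YOUR_WORKSPACE_ID"),
    params={"version": "2017-02-03"},
    json={"input": {"text": "I'd like to buy health insurance"}},
    auth=("YOUR_USERNAME", "YOUR_PASSWORD"),  # service credentials
)
body = resp.json()
print(body["intents"])         # intent classification (formerly NLC)
print(body["entities"])        # entity extraction (formerly Alchemy)
print(body["output"]["text"])  # dialog response (formerly Dialog)
```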

I hope you'll experiment with Conversation and Discovery, and I'm looking forward to writing more tutorials leveraging those services.

Thanks!
