Changes in configuring distributed setup from API Manager 1.9.0 onwards

This post explains the changes that happened in the communication flows with the decoupling of the KeyManager. If you have been confused about why certain configurations are done differently from 1.9.0 onwards, this post should help clear those doubts.

Changes in Renewing Tokens

Before 1.9.0, when renewing access tokens, the Store would call the renewAccessToken operation on the KeyManager, which in turn called the /revoke and /token APIs to revoke the existing token and obtain a new one. The locations of the token and revoke endpoints are defined in the APIKeyManager section. The revoke endpoint location has an element of its own, but the token endpoint location is constructed by taking the ServerURL of the APIKeyManager section, removing the services part, and appending /oauth2/token (or the value specified in TokenEndPointName); for example, a ServerURL of https://localhost:9443/services/ yields a token endpoint of https://localhost:9443/oauth2/token. This is why, when creating a distributed setup, RevokeAPIURL in the KM profile should point to the GW profile. You might wonder why the KM has to call the GW, because all the revoke API in the GW does is route the request back to the KM; so why the extra hop? That's because when the revoke request goes through the RevokeAPI, it performs certain checks and removes the revoked token from the cache.

From 1.9.0 onwards, instead of invoking a service hosted on the KM profile, the Store directly calls the Revoke and Token APIs. So in the new releases, RevokeAPIURL should be correctly configured in the Store profile.
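For illustration, the relevant entry in the Store profile's api-manager.xml would point at the Gateway; the host and port below are placeholders for your own Gateway, and the enclosing section name may differ slightly between releases:

```xml
<!-- Store profile: RevokeAPIURL points at the Gateway's revoke API -->
<RevokeAPIURL>https://localhost:8243/revoke</RevokeAPIURL>
```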

Before KM decoupling

(diagram: Key Renewal Before)

After KM decoupling

(diagram: Key Renewal After)


One drastic change you might notice in the configs while going from 1.8.0 to 1.9.0 is the addition of a new config section called APIKeyValidator. What is this, and why has it been introduced? If you are not going to use a third-party KeyManager, then you don't need to worry about this; you can think of it as a renaming of the APIKeyManager section. The KeyValidator is not an entirely new thing that came up with the KeyManager decoupling. It was there from the beginning, but because APIM was tightly bound to one component which performed all token-related activities, there wasn't a need to make a distinction between the KeyManager and the KeyValidator. But now a new term is needed to avoid confusion, hence the name APIKeyValidator.

Conceptually, a KeyManager is responsible for managing OAuth clients, issuing/managing tokens, responding to validity inquiries about tokens, and so on. It's purely involved in performing token-related tasks. But API Manager needs more than a pure KeyManager to authorize an invocation. Since APIs are grouped under Applications and there's a notion of subscriptions, apart from the validity of a token (whether a token is active or not), the validity of a subscription (whether the application to which the token is issued is subscribed to the API being invoked) also needs to be assessed. But determining the validity of a subscription is something specific to API Manager; to do that, certain API Manager tables must be accessed. Even though the token/OAuth client managing part can be delegated to a different server, validating subscriptions cannot be. That's why a separate server profile is needed to perform the API Manager specific validations. The APIKeyValidator's responsibilities are to check whether the Application is subscribed to the API, check the authorization level of the token against that of the resource (e.g. whether the token is of type Application and the security level of the resource is Application or lower), validate the scopes of the resource, and generate the JWT.

During an API invocation, the Gateway talks only to the KeyValidator. It gives all the details about the invocation and asks the KeyValidator about its validity. The KeyValidator talks to the KeyManager (which can be either the in-built one or a third-party one) to determine the validity of the token and to get the consumer key of the OAuth client. Once the KeyValidator gets the consumer key, it performs all the other API Manager specific validations.

So if you are using a different KeyManager (OAuth provider), while distributing the components it is sufficient to correctly configure the APIKeyValidator section in the Gateway. The KeyValidator has to talk to the KeyManager, which is why you have to uncomment and configure the APIKeyManager section in the KeyValidator profile.

Another node that needs to talk to KeyManager is the Store node. While creating OAuth clients, Store would directly consume APIs exposed by KeyManager. If ResourceRegistration is to be used, then Publisher too would have to know the location of KeyManager.

Before KeyManager decoupling

(diagram: KeyValidation before)

After KeyManager decoupling

(diagram: KeyValidation after)


Does the KeyValidator need to run on its own node/JVM?

No, it doesn't. Often you don't need to worry about the existence of a KeyValidator. When you don't use a third-party KeyManager (OAuth provider), the in-built KeyManager performs the tasks of a KeyValidator.

Even when you are using a different KeyManager, you can have the Gateway perform the KeyValidator's tasks, provided the Gateway accessing the DBs is not a concern. Certain organisational policies may require you to deploy the GW in a highly secured network from which the DBs can't be accessed. Only such a scenario requires the KeyValidator to run on a separate node/JVM. In such a deployment, the KeyValidator sits inside a secured network that is permitted to access the DBs, and the Gateway makes a service call to the KeyValidator whenever it needs to authorize a call.





Invoking backends secured with Digest Auth

With API Manager 1.10, you get the ability to invoke Backends secured with Digest Authentication.

Unlike HTTP Basic Authentication, Digest Authentication doesn't require the username and password to be passed with each request, but achieves this at the expense of an extra call. When a client tries to access a resource secured with Digest Auth, the server returns a 401 Unauthorized response with the details needed (nonce, qop, realm) to make an authorized request. After receiving the first 401 response, the client calculates the digest using the values provided and makes a second request. Because of the way the digest is calculated, both the client and the server need access to the password.
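To make that calculation concrete, here is a minimal sketch of the digest computation for the common MD5/qop=auth case. The class and method names are mine, and the values in main come from the worked example in RFC 2617, not from the Tomcat setup below:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestCalc {

    // Hex-encoded MD5 of a string, as used throughout the digest scheme
    static String md5Hex(String s) throws Exception {
        byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : d) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // response = MD5(HA1 ":" nonce ":" nc ":" cnonce ":" qop ":" HA2), where
    // HA1 = MD5(username ":" realm ":" password) and HA2 = MD5(method ":" uri)
    static String digestResponse(String user, String realm, String pass, String method, String uri,
                                 String nonce, String nc, String cnonce, String qop) throws Exception {
        String ha1 = md5Hex(user + ":" + realm + ":" + pass);
        String ha2 = md5Hex(method + ":" + uri);
        return md5Hex(ha1 + ":" + nonce + ":" + nc + ":" + cnonce + ":" + qop + ":" + ha2);
    }

    public static void main(String[] args) throws Exception {
        // Inputs from the worked example in RFC 2617;
        // expected response per the RFC: 6629fae49393a05397450978507c4ef1
        System.out.println(digestResponse("Mufasa", "testrealm@host.com", "Circle Of Life",
                "GET", "/dir/index.html",
                "dcd98b7102dd2f0e8b11d0f600bfb0c093", "00000001", "0a4f113b", "auth"));
    }
}
```

Note how HA1 is the only place the password appears, which is why both sides need it but it never travels on the wire.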

Configuration is very much similar to how you configure for Basic Auth.

Before trying out this capability, let's first create a backend secured with Digest Auth. For this I'm going to use a webapp deployed on Tomcat.

  1. Download the webapp located here and deploy  it on a tomcat server (I’ve been using tomcat7)
  2. If you take a look at the web.xml you may notice that I have added some entries to  enable Digest Auth.
        <!-- Define servlets that are included in the example application -->
        <!-- Security roles referenced by this web application -->
        <!-- The role that is required to access the HelloWorldServlet: digestrole -->
  3. Since it's a role named digestrole that I have associated with the resource, the same role name should be registered as a valid role and assigned to a user. This can be done by editing tomcat-users.xml.
      <user username="digestadmin" password="digestpass" roles="digestrole"/>
  4. Restart the server and try to access the webapp through the browser (the URL would be http://localhost:8080/mytomcat-helloworld/servlet/MyHelloWorldServlet). The browser will ask you for authentication before rendering the page.

    Even for resources secured with Digest Auth, what the user sees is no different: the browser pops up a dialog box to get the username and the password. But if you need to dig deeper, you can intercept the conversation through TCPMon and see what's happening. This is what I captured…

    Request 1:
    GET /mytomcat-helloworld/servlet/MyHelloWorldServlet HTTP/1.1
    Connection: keep-alive
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
    Upgrade-Insecure-Requests: 1
    User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36
    Accept-Encoding: gzip, deflate, sdch
    Accept-Language: en-US,en;q=0.8
    Cookie: JSESSIONID=9E313ADA769E037A4F1BA0B8A51ADF58
    Response 1:
    HTTP/1.1 401 Unauthorized
    Server: Apache-Coyote/1.1
    Cache-Control: private
    Expires: Thu, 01 Jan 1970 01:00:00 GMT
    WWW-Authenticate: Digest realm="Digest Authentication", qop="auth", nonce="1456578781882:c34ec1f0e8dbdc6adfb9a7f8336bed86", opaque="5EB1ED8E74EB53F568F01D186C545708"
    Content-Type: text/html;charset=utf-8
    Content-Length: 954
    Date: Sat, 27 Feb 2016 13:13:01 GMT
    Request 2:
    GET /mytomcat-helloworld/servlet/MyHelloWorldServlet HTTP/1.1
    Host: localhost:9090
    Connection: keep-alive
    Authorization: Digest username="admin", realm="Digest Authentication", nonce="1456578781882:c34ec1f0e8dbdc6adfb9a7f8336bed86", uri="/mytomcat-helloworld/servlet/MyHelloWorldServlet", response="4a1a871606771ba4717c00719c509575", opaque="5EB1ED8E74EB53F568F01D186C545708", qop=auth, nc=00000001, cnonce="be913a092c5d2606"
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
    Upgrade-Insecure-Requests: 1
    User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36
    Accept-Encoding: gzip, deflate, sdch
    Accept-Language: en-US,en;q=0.8
    Cookie: JSESSIONID=9E313ADA769E037A4F1BA0B8A51ADF58
    Response 2:
    HTTP/1.1 200 OK
    Server: Apache-Coyote/1.1
    Accept-Ranges: bytes
    ETag: W/"21630-1314632856000"
    Last-Modified: Mon, 29 Aug 2011 15:47:36 GMT
    Content-Type: image/x-icon
    Content-Length: 21630
    Date: Sat, 27 Feb 2016 13:13:07 GMT
  5. Once you have verified that authentication is happening through Digest Auth, let's start up an API Manager instance and create an API. Download API Manager 1.10 from here, and create an API.

    When you specify the endpoint details in the Manage view, click on More options, mark the endpoint as a secured one, and then select Digest Authentication from the options available.

  6. Subscribe to the API, invoke it, and see how it works.



How does it work?

When you enable Digest Auth, the synapse API that gets created is somewhat different from that of a normal API. Normally an API would have a single send mediator, but if you open the synapse API definition, you will see something like the following.

    <!-- Some Properties -->
    <enrich>
        <source type="body" clone="true"/>
        <target type="property" property="MessageBody"/>
    </enrich>
    <call>
        <endpoint name="admin--HelloAPI_APIproductionEndpoint_0">
            <http uri-template="http://localhost:8080/mytomcat-helloworld/servlet/MyHelloWorldServlet"/>
        </endpoint>
    </call>
    <class name="org.wso2.carbon.apimgt.gateway.mediators.DigestAuthMediator"/>
    <!-- Some Properties -->
    <enrich>
        <source type="property" clone="true" property="MessageBody"/>
        <target type="body"/>
    </enrich>
    <send>
        <endpoint name="admin--HelloAPI_APIproductionEndpoint_0">
            <http uri-template="http://localhost:8080/mytomcat-helloworld/servlet/MyHelloWorldServlet"/>
        </endpoint>
    </send>

Before the send mediator, a call mediator is used to call the same backend!

The first call is done to check whether the backend supports Digest Auth and to get the WWW-Authenticate header from the backend. These values are processed by the DigestAuthMediator, which creates the Authorization header needed for the second call. When the same resource is requested subsequently, a browser would normally increase the nonce count without computing the message digest again. But to keep things simple, in this version API Manager simply makes two requests and computes the digest, without considering whether it has called the backend previously.

Please note that even though I blogged about this feature to explain its usage, it was completely implemented by Tharika during her internship. Thank you Tharika for your excellent contribution.

Sending custom values from KeyManager to Backend

Using WSO2 API Manager, an organisation can secure their APIs, throttle them based on pre-defined policies, and monitor API calls. APIs are secured using OAuth2, and when a client invokes an API using an OAuth token, the Gateway sends the token to the KM to be validated; based on the result given by the KM, a decision is taken on whether to pass the call to the backend or not.

Usually the token is dropped before passing the call through, the reason being that the token itself doesn't carry any information useful to the backend.

But a typical backend might need details about the invoker to do its processing. If the content a backend produces contains details pertaining to a user, then it certainly needs the identity of the user. In such scenarios, a JWT is used to send user details.

With the extension points available in 1.8.0, it was possible to customise the JWT by adding details related to a user. But there can be instances where some additional details related to a token are needed by the backend; for example, the list of scopes. Since these are not user related, such details cannot be added to the JWT just by extending the JWTGenerator.

But using a JWTGenerator combined with a custom KeyValidationHandler (an extension point provided with 1.9.0), any attribute returned while validating the token can easily be embedded in the JWT.

The following example walks you through adding the scopes of a token as a claim in the JWT.

  1. Download the CustomJWTGenerator located at this link.
  2. Download WSO2 API Manager 1.9.1 from here.
  3. Build the jar and place it under wso2am-1.9.1/repository/components/lib folder.
  4. Open api-manager.xml and
    • Uncomment and set EnableTokenGeneration to true.
    • Set TokenGeneratorImpl to org.wso2.demo.jwtgenerator.CustomJWTGenerator.
    • As the KeyValidationHandler (specified by the KeyValidationHandlerClassName element), give org.wso2.demo.jwtgenerator.CustomKeyValidationHandler.
  5. Enable wire logs; you can do this either by changing the log level from the Management Console or by uncommenting the wire-log line in the log4j configuration file.
  6. Start the API Manager.
  7. Create an API, Create an Application, Subscribe the API to the Application and Generate Keys for the Application.
  8. Then invoke the API using the token.
  9. If you take a look at the logs, you may see a log statement like this;
    [2016-01-16 22:35:27,637] DEBUG - wire >> "assertion:

    That is the JWT.

  10. A JWT consists of three parts, each delimited by a period. Take the middle part and decode it (Base64) using one of the online tools available.
  11. When the middle part is decoded, we get the following claims:

      {
        "iss": "wso2.org/products/am",
        "exp": 1452964827473,
        "http://wso2.org/claims/subscriber": "admin",
        "http://wso2.org/claims/applicationid": "1",
        "http://wso2.org/claims/applicationname": "DefaultApplication",
        "http://wso2.org/claims/applicationtier": "Unlimited",
        "http://wso2.org/claims/apicontext": "/test/1.0.0",
        "http://wso2.org/claims/version": "1.0.0",
        "http://wso2.org/claims/tier": "Gold",
        "http://wso2.org/claims/keytype": "PRODUCTION",
        "http://wso2.org/claims/usertype": "APPLICATION",
        "http://wso2.org/claims/enduser": "admin@carbon.super",
        "http://wso2.org/claims/enduserTenantId": "-1234",
        "http://wso2.org/claims/scopes": "am_application_scope , default "
      }

Note that we have a new claim, http://wso2.org/claims/scopes, which keeps the scopes.
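Instead of an online tool, the middle part can be decoded with a few lines of Java. The class name is mine, and the JWT built in main is a synthetic placeholder, not the token from the logs:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtDecode {

    // Returns the decoded payload (middle part) of a JWT
    public static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");
        // JWTs use the URL-safe Base64 alphabet, without padding
        return new String(Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
    }

    private static String b64(String s) {
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(s.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // header.payload.signature, with an empty signature for the demo
        String jwt = b64("{\"alg\":\"none\"}") + "." + b64("{\"iss\":\"wso2.org/products/am\"}") + ".";
        System.out.println(decodePayload(jwt));
    }
}
```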

Now let’s see how this works…

With 1.9.0, the ability to extend the key validation flow was introduced. The KeyValidationHandler runs on the KeyManager and gets executed once a token is passed to the KM to be validated. More information on the KeyValidationHandler can be found in this post.

Let’s take a look at the customised KeyValidationHandler

public boolean generateConsumerToken(org.wso2.carbon.apimgt.keymgt.service.TokenValidationContext validationContext)
            throws org.wso2.carbon.apimgt.keymgt.APIKeyMgtException {
        String[] tokenScopes = validationContext.getTokenInfo().getScopes();
        StringBuilder builder = new StringBuilder();
        for (String scope : tokenScopes) {
            builder.append(scope).append(" , ");
        }
        builder.delete(builder.length() - 3, builder.length() - 1);
        log.debug("Created Scopes String : " + builder.toString());

        MessageContext newMessageContext;
        try {
            newMessageContext = new MessageContext();
            newMessageContext.setConfigurationContext(ConfigurationContextFactory.createEmptyConfigurationContext());
        } catch (AxisFault axisFault) {
            return false;
        }

        // Creating a message context and setting it if one doesn't exist.
        // Setting scopes through the message context is one way of making them available to the JWTGenerator.
        // But the same can be achieved through other means, like using a ThreadLocal.
        MessageContext messageContext = MessageContext.getCurrentMessageContext();
        if (messageContext == null) {
            messageContext = newMessageContext;
            MessageContext.setCurrentMessageContext(messageContext);
        }

        messageContext.setProperty("scope_list", builder.toString());

        return super.generateConsumerToken(validationContext);
    }
At the point where JWT is generated, we are getting the list of scopes from TokenValidationContext and set it as a property in the MessageContext.

When calling

MessageContext messageContext = MessageContext.getCurrentMessageContext();

the MessageContext saved in a ThreadLocal is obtained. So a property saved in the MessageContext can be accessed from another method executed on the same thread.
And if you go through the code of the CustomJWTGenerator, it looks like below;

    public java.util.Map<String, String> populateCustomClaims(APIKeyValidationInfoDTO keyValidationInfoDTO,
                                                              String apiContext, String version, String accessToken)
            throws org.wso2.carbon.apimgt.api.APIManagementException {

        Map<String, String> claims = super.populateCustomClaims(keyValidationInfoDTO, apiContext, version, accessToken);
        MessageContext messageContext = MessageContext.getCurrentMessageContext();
        if (messageContext != null) {
            String scopeList = (String) messageContext.getProperty("scope_list");
            if (claims == null) {
                claims = new HashMap<String, String>();
            }
            claims.put(ClaimsRetriever.DEFAULT_DIALECT_URI + "/scopes", scopeList);
        }
        return claims;
    }
you may notice that we get the scope list from the MessageContext and set it as a claim.
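The thread-affinity this whole trick relies on can be illustrated with a plain ThreadLocal; the names here are illustrative stand-ins, not API Manager code:

```java
public class ThreadLocalDemo {

    // A value stored per-thread, analogous to Axis2's current MessageContext
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    static void keyValidationStep() {
        // Runs first on the worker thread, like generateConsumerToken above
        CURRENT.set("am_application_scope default");
    }

    static String jwtGenerationStep() {
        // Runs later on the same thread, like populateCustomClaims
        return CURRENT.get();
    }

    public static void main(String[] args) {
        keyValidationStep();
        System.out.println(jwtGenerationStep()); // prints the scopes set earlier on this thread
    }
}
```

If the two steps ran on different threads, the second would see null, which is why this approach only works because both methods execute within the same validation call.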


Integrating API Manager with Mitre-ID Connect

In a previous post, we looked at how surf-oauth can be used as an OAuth provider. There, for all the demonstrations, we simply used REST APIs, and nothing was done through the UI. One of the main points highlighted in that post was the ability to integrate an OAuth provider that uses non-standardized interfaces.

In this post another OAuth provider is introduced, one which offers much more functionality and exposes everything through standard interfaces. In this new demo, rather than stopping at invoking a handful of APIs, we've gone to the extent of customising the UI, to give an idea of how the OAuth provider extension framework can be fully utilised.

Without further ado let’s see how to run this demo.

All the content for this demo is located at

Deploy Authorization Server

  1. Download the mitre id connect server located at and deploy it on a Tomcat server.
  2. Access the URL (http://localhost:8080/openid-connect-server-webapp/) and see if you can successfully log into the management console. Use admin/password as the credentials for logging in.
  3. Click on Manage Clients link and see if you can see a client with name Test Client.


Configure API Manager.

  1. There's a pre-configured pack available at this location. You can download it and start using it. Otherwise, the following steps will guide you through configuring it from scratch.
  2. Download the API Manager 1.9.1 distribution from this link.
  3. In this demo, several customizations have been done on the Store UI. You can get those customisations by downloading the zip located at and putting its contents inside wso2am-1.9.1/repository/deployment/server/jaggeryapps/store.
  4. The KeyManager implementation needed to call Mitre-ID Connect is located at . You have to clone this and build it.
    git clone
    mvn clean install
  5. Copy the built jar into wso2am-1.9.1/repository/components/lib
  6. For keeping certain configuration information about OAuth clients, an intermediate DB is used. If it's a MySQL DB you are using, put the MySQL connector jar (the particular jar I've used was mysql-connector-java-5.1.30-bin.jar) inside the wso2am-1.9.1/repository/components/lib folder.
  7. Create a mysql DB
    create database mitre_clients;
  8. Then create the table CLIENT_INFO with the following definition;
    CREATE TABLE CLIENT_INFO (
      `CLIENT_ID` varchar(256) NOT NULL,
      `CONSUMER_KEY` varchar(2048) DEFAULT NULL,
      `PAYLOAD` blob,
      `MAPPING_ID` varchar(255) DEFAULT NULL,
      `CLIENT_NAME` varchar(255) DEFAULT NULL,
      `REDIRECT_URI` varchar(255) DEFAULT NULL,
      `CLIENT_TYPE` varchar(255) DEFAULT NULL
    );
  9. Open master-datasources.xml located inside  (wso2am-1.9.1/repository/conf/datasources) and add the following configuration element.
                 <datasource>
                     <description>The datasource used by the Authorization server</description>
                     <definition type="RDBMS">
                         <configuration>
                             <!-- url, username, password and driverClassName for the mitre_clients DB -->
                             <validationQuery>SELECT 1</validationQuery>
                         </configuration>
                     </definition>
                 </datasource>
  10. Next is editing api-manager.xml. Open api-manager.xml and add the following config element.
  11. Put the following config block in api-manager.xml. This section contains all the configuration details needed to operate with Mitre-ID Connect.
  12. Start up the API Manager and log into store.
  13. Go to the My Subscriptions page and select an existing Application. Once you click on the Generate Keys button, if it changes to the following UI, you can conclude that the customisations on the Store UI have been applied successfully.

UI shown when the Generate Keys button is clicked



  1. Click on the Generate Keys button.
  2. In the new UI, provide a Callback URL and hit Save.
  3. Now the call will be sent to the OAuth Provider and Keys will be returned.


    Generated Keys

  4. Now let's see how we can generate a token. For this I'll be using the OAuth2 Playground webapp. Deploy it on your Tomcat server and access the URL http://localhost:8080/playground2/oauth2client.
  5. If you registered a different callback URL the first time, click on Update and add http://localhost:8080/playground2/oauth2client as a redirect URL. Hit Save.
  6. In the playground2 app, select Authorization Code as the grant type and fill in the other details. The values should be;
    • Client ID is the Consumer Key specified in My Subscriptions page
    • Callback URL should be http://localhost:8080/playground2/oauth2client
    • Put email as the scope (or you can use one of the scopes provisioned while creating the client)
    • Authorization Endpoint should be http://localhost:8080/openid-connect-server-webapp/authorize
  7. When you click the Authorize button, you'll be taken to the login page. Use admin/password to sign in.
  8. Next is the consent screen. Click Authorize to continue.
  9. Provide the Consumer Secret and Callback URL.
  10. Now you'll get the Access Token.
  11. You'll be able to invoke an API using the Access Token just created. In the pre-configured pack, there's an API already created.
  12. You'll get a response similar to the following when you invoke it.
    curl -H "Authorization : Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6InJzYTEifQ.eyJleHAiOjE0NTEzODczNTYsImlzcyI6Imh0dHA6XC9cL2xvY2FsaG9zdDo4MDgwXC9vcGVuaWQtY29ubmVjdC1zZXJ2ZXItd2ViYXBwXC8iLCJhdWQiOiI2MmU4NjVkMi03NDczLTQyNDItOTdmNS0xMjY1YzA5OTNmZDkiLCJqdGkiOiI2NTg2ZjM0Ny01ODIyLTQ3MDctYTVlMi1hYzJlZmVkYTJhNmIiLCJpYXQiOjE0NTEzODM3NTZ9.TROHD02NctQt9w-a__Upc8Hbb7FjDD0_CKRP0SS7i5CJ4ZgAJd2Ffui8JXRPEn0k0DTj8j3Ht6HYZFLvY8yUyoOivxYiIxRKmT52B2x6yxvZcOoaxY5_kaKwCUM3ULNATPumQCwAmviIKePc1ySLjFJ_G12RWkPwPv_8EDM-k_BhrQl3mvOOXFKQhXzE6UT-7cTd9KtxkAzxXCKhKMPJVXbRhKwb2S5rHDVxDAOqEoCldSP0Du7qlLv04w2JO5hwkllptX3muub8BMNONHXp-FVuIhbCpN4_WQUa6uCwDrzXM80M8L1KUoTRM42sHKhBBkdeizpBr4YBVdapQPQfoQ" http://localhost:8280/greetapi/1.0.0
    {"Greeting":"Hello ....."}

Specifying Hard Throttling Limits for APIs

WSO2 API Manager 1.10 allows you to specify a limit on the number of requests a particular API can make on a backend.

With WSO2 API Manager requests can be throttled at three different levels.

  1. Throttle Requests by API Context.
  2. Throttle Requests by Application.
  3. Throttle by API Resource.

At the time of creating an API, Throttling policies available for the API and policies for each resource can be specified. With each new token generated for the API, the requester gets a quota specified by the Tier. For example, if the policy for the context allows 50 Requests per minute, then each new user would be able to make 50 calls on that API within a minute. Policy defined for the resource level behaves in a similar way. Throttling at the Application level ensures, that the total number of requests made by all the APIs coming under the Application doesn’t exceed the limit specified in the Application level policy.

All the existing throttling limits specify the quota the invoker gets, not the limit the backend can handle. Even after applying all these throttling mechanisms, it's still possible to overuse the backend, since there isn't a single place to specify the total number of calls API Manager is allowed to make on the backend. The latest API Manager release addresses this issue by providing the capability to specify a hard throttling limit.

The hard limit is the total number of requests that are allowed to be made on that API. This feature can be used to specify the number of requests API Manager can make on a particular backend.

How to enable

Once you reach the Manage page while creating an API, you'll see an option to enable hard throttling limits.


Enabling Hard Throttling Limits

Specify limits for the Production & Sandbox endpoints. Since the two endpoints can be on two servers with different capacities, the option is provided to specify different throttling limits for each.

Save and publish the API. If you take a look at the synapse config, you might notice that several additional properties are now defined under the APIThrottleHandler.

      <handler class="">
         <property name="apiImplementationType" value="ENDPOINT"/>
      <handler class=""/>
      <handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.APIThrottleHandler">
         <property name="id" value="A"/>
         <property name="productionMaxCount" value="800"/>
         <property name="sandboxMaxCount" value="500"/>
         <property name="policyKey" value="gov:/apimgt/applicationdata/tiers.xml"/>
      <handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageHandler"/>

There are two properties to keep Production TPS and Sandbox TPS.

Changing Unit time

Normally, hard limits are counted over a duration of 1 second (since it's a TPS we are specifying). But the need may arise to apply throttling over a larger time window, like 1 minute. The time window can be defined using the properties productionUnitTime and sandboxUnitTime.

      <handler class="">
         <property name="apiImplementationType" value="ENDPOINT"/>
      <handler class=""/>
      <handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.APIThrottleHandler">
         <property name="id" value="A"/>
         <property name="productionMaxCount" value="600"/>
         <property name="productionUnitTime" value="60000"/>
         <property name="policyKey" value="gov:/apimgt/applicationdata/tiers.xml"/>
      <handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageHandler"/>

This configuration will allow 600 requests within a duration of 1 minute. The unit time can only be changed by directly editing the synapse config.
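I haven't dug into API Manager's exact counting algorithm, but conceptually a productionMaxCount over a productionUnitTime behaves like a fixed-window counter. A minimal illustrative sketch (class name and logic are mine, not API Manager code):

```java
public class FixedWindowLimiter {

    private final int maxCount;    // e.g. productionMaxCount = 600
    private final long unitTimeMs; // e.g. productionUnitTime = 60000
    private long windowStart;
    private int count;

    public FixedWindowLimiter(int maxCount, long unitTimeMs) {
        this.maxCount = maxCount;
        this.unitTimeMs = unitTimeMs;
    }

    // Returns true if a request at time nowMs is within the hard limit
    public synchronized boolean allow(long nowMs) {
        if (nowMs - windowStart >= unitTimeMs) { // a new window begins: reset the counter
            windowStart = nowMs;
            count = 0;
        }
        return ++count <= maxCount;
    }

    public static void main(String[] args) {
        FixedWindowLimiter limiter = new FixedWindowLimiter(5, 60_000);
        for (int i = 0; i < 6; i++) {
            System.out.println(limiter.allow(0L)); // first 5 allowed, 6th rejected
        }
        System.out.println(limiter.allow(60_000L)); // next window: allowed again
    }
}
```

The counter is shared across all consumers of the API, which is exactly what distinguishes the hard limit from the per-token quotas discussed earlier.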


1. Create an API, giving the Production Limit as 5. While selecting throttling tiers, make sure you have selected the Bronze and Silver plans as well.

2. For the purpose of testing, let's edit the synapse config and change the unit time to 1 minute.

      <handler class="">
         <property name="apiImplementationType" value="ENDPOINT"/>
      <handler class=""/>
      <handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.APIThrottleHandler">
         <property name="id" value="A"/>
         <property name="productionMaxCount" value="5"/>
         <property name="productionUnitTime" value="60000"/>
         <property name="policyKey" value="gov:/apimgt/applicationdata/tiers.xml"/>
      <handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageHandler"/>

3. Log into the Store, create an Application, and then subscribe to the API. While subscribing, make sure you select a tier that allows more than 5 requests/min (Silver allows 10).

4. Generate the keys, get the access token, and invoke the API more than 5 times. If you are using a curl command, make sure you include the -v option, since it shows the request and response headers.

Once you exceed the limit, you’ll get an error like this;

< HTTP/1.1 503 Service Unavailable
< Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: GET
< Content-Type: application/xml; charset=UTF-8
< Accept: */*
< Date: Sun, 01 Nov 2015 00:00:48 GMT
< Transfer-Encoding: chunked
< Connection: Close
* Closing connection 0
<amt:fault xmlns:amt=""><amt:code>900801</amt:code><amt:message>API Limit Reached</amt:message><amt:description>API not accepting requests</amt:description></amt:fault>

Note that the response gives 503 Service Unavailable, with a message saying the API is not accepting requests anymore.

5. To see how this differs from soft throttling, delete the subscription and re-subscribe to the API with the Bronze tier. This time, when the throttling limit is exceeded, the following response is given instead;

< HTTP/1.1 429
< Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: GET
< Content-Type: application/xml; charset=UTF-8
< Accept: */*
< Date: Sun, 01 Nov 2015 00:02:23 GMT
< Transfer-Encoding: chunked
* Connection #0 to host left intact
<amt:fault xmlns:amt=""><amt:code>900800</amt:code><amt:message>Message throttled out</amt:message><amt:description>You have exceeded your quota</amt:description></amt:fault>

Setting up a two node Spark cluster

Spark's official website introduces Spark as a general engine for large-scale data processing. Spark is becoming increasingly popular among data mining practitioners due to the support it provides for creating distributed data mining/processing applications.

In this post I'm going to describe how to set up a two-node Spark cluster on two separate machines. To play around and experiment with Spark, I'll also be using IPython Notebook, which will act as the driver program.

Spark will be sitting on two machines, one acting as the master and the other as a slave. If you need to add more slave nodes, you can do so by following the steps in the section below on each machine you need to run a Spark slave on.

Installing spark

Since Spark runs on Java, before starting, make sure that a JDK is installed and JAVA_HOME is properly set.

  1. Download and extract Spark. You can download the latest version from the downloads page. At the time of writing, the latest version available was 1.4.1.
  2. In your home directory create a directory named spark and copy spark distribution in to that folder.
    mkdir spark
    mv spark-1.4.1-bin-hadoop2.6.tgz spark
  3. Extract contents into the folder.
    tar -zxvf spark-1.4.1-bin-hadoop2.6.tgz
  4. Set SPARK_HOME property on both machines
    • Add the following entry to ~/.bashrc or ~/.bash_profile (make sure you run source ~/.bashrc afterwards, or log out and log back into the remote machine.)
export SPARK_HOME=/home/ubuntu/spark/spark-1.4.1-bin-hadoop2.6

You have to do this on both machines. (In case you don’t have two machines, you can try this out on a single machine.)

Configuring SSH

For cluster-related communications, the Spark master should be able to open passwordless SSH logins to the Spark slave. This is how you enable it.

  1. Generate a new key pair

    ssh-keygen -t rsa

This creates the key pair and saves it in the ~/.ssh directory. You have to do the same on the other machine.

2. Copy the public key of the master node and add it as an authorized key on the slave node

On the master node, print the public key:

cat ~/.ssh/id_rsa.pub

On the slave node, open authorized_keys and paste the public key:

vi ~/.ssh/authorized_keys

3. Alternatively, run ssh-copy-id from the master node, which copies the key over in a single step

ssh-copy-id ubuntu@slave-node

ubuntu is the name of the user on the slave machine.

Now from the master node, try to log into the slave node

ssh ubuntu@slave-node

If keys were added successfully, you should be able to log into the machine.

You should have an SSH server running on the slave node to try this. To install an SSH server, you can run

sudo apt-get install openssh-server

Then start the server using the following command.

sudo service ssh start

Starting Spark

  1. On the master node run

./sbin/start-master.sh

When Spark starts, details of the node get written to a log file, which includes the Spark URL of the master node. It usually looks like this: spark://<host-name>:7077.

2. Start the slave nodes, giving the URL of the master.

./sbin/start-slave.sh spark://<host-name>:7077

Go to the web console of the Spark master and check the status of the cluster. If the slave node starts up correctly and joins the cluster, you should see the details of the worker node under the Workers section.
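If you prefer checking this programmatically, the standalone master also serves a JSON status document (commonly at http://<master-host>:8080/json; treat the endpoint and field names as assumptions to verify against your Spark version). A small helper that counts live workers from that payload:

```python
def count_alive_workers(master_status):
    """Count workers reporting state ALIVE in the master's JSON status."""
    return sum(1 for w in master_status.get('workers', [])
               if w.get('state') == 'ALIVE')

# Example payload, trimmed to just the fields used above.
sample_status = {'workers': [{'host': 'slave-node', 'state': 'ALIVE'},
                             {'host': 'retired-node', 'state': 'DEAD'}]}
```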


Setting up IPython Notebook

IPython Notebook is widely used by data scientists to log and present their findings in a reproducible and interactive way. Since I was running Spark on a remote machine, what I was looking for was a driver program running as a server, on the same network as Spark, which I could access remotely to submit new jobs. For this, IPython Notebook suited very well.

  1. Log into the master node

2. Download and install IPython Notebook.

If you already have pip installed, you can do this by:

pip install "ipython[notebook]"

For more installation options you can refer to their official website.

3. Create a new profile for spark in IPython Notebook

ipython profile create spark
[ProfileCreate] Generating default config file: u'/home/ubuntu/.ipython/profile_spark/'
[ProfileCreate] Generating default config file: u'/home/ubuntu/.ipython/profile_spark/'

4. Create a Python file under ~/.ipython/profile_spark/startup/ (any .py file in that directory runs at startup; 00-pyspark-setup.py, say) and add the following

Following code snippet was taken from

import os
import sys

# Configure the environment
if 'SPARK_HOME' not in os.environ:
    os.environ['SPARK_HOME'] = '/home/ubuntu/spark/spark-1.4.1-bin-hadoop2.6'

# Create a variable for our root path
SPARK_HOME = os.environ['SPARK_HOME']

# Add the PySpark/py4j to the Python Path
sys.path.insert(0, os.path.join(SPARK_HOME, "python", "build"))
sys.path.insert(0, os.path.join(SPARK_HOME, "python"))

5. Start the IPython Notebook

ipython notebook --profile spark

6. You can access the IPython Notebook UI at http://spark.master:8888.
From the main page, create a new Notebook and then add the following lines.

from pyspark import SparkContext
# Getting spark context by connecting to an existing cluster
sc = SparkContext('spark://apim-gwm:7077', 'pyspark')

7. After executing the above code, if you go to the Spark Web Console, you may see pyspark listed as a Running Application.


Customizing Key Validation flow

Since we have already covered how WSO2 API Manager 1.9.0 allows you to create OAuth clients on your desired OAuth provider, let’s move on and see how we can delegate the token validation part to the OAuth provider.

As part of the OAuth Provider Extension Framework, the capability to extend the key validation flow was provided. This was done by introducing a handler that executes when the APIKeyValidationService is called. Before explaining the new handler, it would be good to have a clear idea of how key validation works.

How keys are validated

Once you call an API providing an access token, the execution flows through five handlers specified in the API. (For the curious, you can take a look at these handlers by opening the API's XML file, located under ./repository/deployment/server/synapse-configs/default/api.)

It is the second handler, APIAuthenticationHandler, that captures our attention. This is the handler that extracts the token from the header and calls APIKeyValidationService (running on the KeyValidator node) to get the token validated. Upon validating the token, the Gateway receives an APIKeyValidationInfoDTO as the response, using which all the remaining operations are performed. Before the decoupling was done, the entire key validation process happened inside one single method (validateKey). The validateKey operation performed all of the following checks by running a single query:

  1. Checking token validity.
  2. Checking whether the application to which the token was issued has subscribed to the API being invoked.
  3. Checking whether the token has the necessary permission to access the resource being accessed (if the resource is protected with a user token, then the token has to be of type APPLICATION_USER).

If the query evaluates the token as valid, then further down in the validateKey operation, we check whether the token has the scope required to access the resource.
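The checks above can be sketched as a single flow (an illustrative Python model of the logic, not the actual implementation; all field and parameter names here are assumptions):

```python
def validate_key(token, subscriptions, api_name, required_auth_type, resource_scope):
    """Run the three checks plus the scope check, in the order described."""
    # 1. Token validity
    if token['expired'] or token['revoked']:
        return False
    # 2. Subscription: the application the token was issued to must be
    #    subscribed to the API being invoked.
    if api_name not in subscriptions.get(token['application'], set()):
        return False
    # 3. Resource permission: a user-protected resource demands a token of
    #    type APPLICATION_USER.
    if required_auth_type == 'APPLICATION_USER' and token['type'] != 'APPLICATION_USER':
        return False
    # 4. Scope: the token must carry the scope bound to the resource.
    if resource_scope is not None and resource_scope not in token['scopes']:
        return False
    return True

token = {'expired': False, 'revoked': False, 'application': 'app1',
         'type': 'APPLICATION', 'scopes': {'read'}}
subscriptions = {'app1': {'PizzaShackAPI'}}
```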

What is changed

What the new framework does is break this one big code block into smaller parts and provide a way to extend each step. With this change, we have introduced a handler, KeyValidationHandler, which runs inside the validateKey operation and has four operations representing each smaller task previously done inside validateKey.


Now, once the server starts up, APIKeyValidationService instantiates the handler implementing KeyValidationHandler (the class name of the implementation can be provided in api-manager.xml) and calls each method of the handler in the order specified above.
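As a rough model of that contract, here is a Python sketch, not the actual Java interface shipped with API Manager; the method names mirror the four steps described, and the toy implementation simply records the order in which they run:

```python
class KeyValidationHandler:
    """Sketch of the four-step contract run inside validateKey."""
    def validate_token(self, ctx):
        raise NotImplementedError
    def validate_subscription(self, ctx):
        raise NotImplementedError
    def validate_scopes(self, ctx):
        raise NotImplementedError
    def generate_consumer_token(self, ctx):
        raise NotImplementedError

def run_validation(handler, ctx):
    """Invoke the handler's methods in order, stopping at the first failure."""
    steps = (handler.validate_token, handler.validate_subscription,
             handler.validate_scopes, handler.generate_consumer_token)
    return all(step(ctx) for step in steps)

class RecordingHandler(KeyValidationHandler):
    """Toy implementation that records the order in which steps run."""
    def __init__(self):
        self.calls = []
    def validate_token(self, ctx):
        self.calls.append('validate_token')
        return True
    def validate_subscription(self, ctx):
        self.calls.append('validate_subscription')
        return True
    def validate_scopes(self, ctx):
        self.calls.append('validate_scopes')
        return True
    def generate_consumer_token(self, ctx):
        self.calls.append('generate_consumer_token')
        return True

handler = RecordingHandler()
ok = run_validation(handler, {})
```

Because all() short-circuits, a step returning a failure stops the chain, which mirrors how a failed check ends validation early.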

The default implementation shipped with the product does the following.

Inside the validateToken method, it calls getTokenMetaData on the KeyManager interface, which calls the introspection endpoint of the OAuth provider and returns the following details associated with the token:

  1. Validity status
  2. Consumer key
  3. Issued time & validity period
  4. Scopes issued for the token
  5. End user name

Returning the last two attributes is optional.
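To make that mapping concrete, here is an illustrative Python sketch that maps an RFC 7662-style introspection response onto the five fields above (the field and claim names are assumptions, not API Manager's exact DTO):

```python
def to_access_token_info(resp):
    """Map an OAuth introspection response onto the fields listed above."""
    iat, exp = resp.get('iat'), resp.get('exp')
    return {
        'valid': resp.get('active', False),       # 1. validity status
        'consumer_key': resp.get('client_id'),    # 2. consumer key
        'issued_time': iat,                       # 3. issued time and
        'validity_period': (exp - iat) if iat is not None and exp is not None else None,
        'scopes': resp.get('scope', '').split(),  # 4. scopes issued for the token
        'end_user_name': resp.get('username'),    # 5. end user name (optional)
    }

sample = {'active': True, 'client_id': 'abc123', 'iat': 100, 'exp': 3700,
          'scope': 'read write', 'username': 'alice'}
info = to_access_token_info(sample)
```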

If a developer wishes to use any additional fields for validation, the framework supports that as well. If token validation fails and the developer would like to propagate the particular error associated with the failure, an error code can be set in AccessTokenInfo. Please remember that the error code can only take values defined in APIConstants.KeyValidationStatus.

Within the same validateToken method, a new APIKeyValidationInfoDTO will be created and populated with the details returned by the getTokenMetaData method.

After executing each method mentioned above, APIKeyValidationInfoDTO will have following details populated.

[Diagram: Key Validation flow]

For clarity, only the fields that change at each step are shown in the diagram above.

After going through this chain, the APIKeyValidationInfoDTO is sent back to the Gateway as the response. The Gateway performs all subsequent operations using only the values populated in this DTO. If KeyValidationHandler is extended, it is crucial that proper values are set for the above fields, since all the functionalities (throttling, statistics, picking the correct endpoint) depend upon them.

Do we really need to extend KeyValidationHandler?

In most cases, no. The default implementation was written in such a way that, by only extending the getTokenMetaData method (in the KeyManager interface), you should be able to complete the entire key validation flow.

So when do we really need to extend KeyValidationHandler?

Suppose that you need to skip some functionality provided in API Manager, for example domain validation (when creating a key via the Store, subscribers can specify which domains are allowed to make calls using a token generated against a particular consumer key). If this validation doesn’t add any value, such trivial steps can be ignored and skipped by extending KeyValidationHandler.

There can be instances where the default scope validation doesn’t suit certain use cases perfectly. In the default scope validation, we first get the scope assigned to the resource being accessed, and then check whether the issued token has that scope. Invocation is allowed only if the two match. Suppose someone doesn’t need to go to the level of detail of whether a scope is assigned to a particular resource, but only needs to verify that the token used for access has at least one of the scopes defined for that API. Extending the validateScope method would be a good option to cater to such requirements.
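The two scope-checking strategies could be contrasted like this (illustrative Python, not the actual validateScope implementation):

```python
def default_scope_check(token_scopes, resource_scope):
    """Default: the token must carry the exact scope bound to the resource."""
    return resource_scope is None or resource_scope in token_scopes

def relaxed_scope_check(token_scopes, api_scopes):
    """Relaxed: any one of the API's scopes on the token is enough."""
    return not api_scopes or bool(set(token_scopes) & set(api_scopes))
```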

As the method of passing details of an API invocation to the backend, a JWT is used. If someone needs to send a different type of token, the generateConsumerToken method can be extended.
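For illustration, this is roughly what assembling such a JWT involves (a minimal HS256 builder in Python; the claim names and secret are hypothetical, not the exact claim set or signing setup API Manager uses):

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b'=').decode('ascii')

def build_jwt(claims, secret):
    """Assemble a signed HS256 JWT of the form header.payload.signature."""
    header = b64url(json.dumps({'alg': 'HS256', 'typ': 'JWT'}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = (header + '.' + payload).encode('ascii')
    signature = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return header + '.' + payload + '.' + signature

# Hypothetical invocation details a backend might receive.
token = build_jwt({'sub': 'alice', 'applicationname': 'DefaultApplication'}, b'secret')
```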