Wednesday, August 26, 2015

OAuth as a Service

WSO2 API Cloud brings you OAuth as a Service. If you:
  • Have a service/API behind a firewall that needs to be opened up to the public
  • Have the ability to introduce a firewall rule
then this blog post explains what you need to do.

In addition to OAuth protection, here is what you get from WSO2 API Cloud:
  1. Advertise your API to the public in your own portal
  2. External developers can get an OAuth consumer key/secret from the portal and call your API
  3. Monitor API statistics, such as the number of calls per service and who is calling the service
  4. Throttle the API as required
  5. Eventually, charge for API usage once that feature is added
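For item 2, a developer typically exchanges the consumer key/secret for an access token using the OAuth2 client-credentials grant, then calls the API with a Bearer token. Here is a minimal sketch of the headers involved; the key, secret, and token values are placeholders:

```python
import base64

def token_request_headers(consumer_key: str, consumer_secret: str) -> dict:
    """Headers a developer's app sends to the token endpoint to exchange
    its consumer key/secret for an OAuth access token. The token request
    body would be "grant_type=client_credentials"."""
    credentials = f"{consumer_key}:{consumer_secret}".encode("utf-8")
    return {
        "Authorization": "Basic " + base64.b64encode(credentials).decode("ascii"),
        "Content-Type": "application/x-www-form-urlencoded",
    }

def api_call_headers(access_token: str) -> dict:
    """Headers for the actual API call, once a token has been issued."""
    return {"Authorization": f"Bearer {access_token}"}

# "my-key" / "my-secret" are placeholder credentials from the portal.
headers = token_request_headers("my-key", "my-secret")
```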

High level architecture


Now you can instantly enable OAuth for your API. Steps:

1) First, protect your API. If it is a JAX-RS service, protect it using HTTP Basic Auth. If it is a SOAP service, protect it using UsernameToken. Step 3 explains why this protection is needed.
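As a sketch of what step 1 means on the backend side, here is a minimal HTTP Basic Auth check the service could perform on the incoming Authorization header; the username and password here are placeholders:

```python
import base64
from hmac import compare_digest

# Placeholder credentials; in practice these come from secure configuration.
EXPECTED_USER = "gateway"
EXPECTED_PASS = "s3cret"

def is_authorized(authorization_header: str) -> bool:
    """Validate an incoming HTTP Basic Auth header against the expected
    username/password. Returns False for missing or malformed headers."""
    if not authorization_header.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(authorization_header[len("Basic "):]).decode("utf-8")
        user, _, password = decoded.partition(":")
    except Exception:
        return False
    # compare_digest avoids leaking information through timing differences.
    return compare_digest(user, EXPECTED_USER) and compare_digest(password, EXPECTED_PASS)
```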

2) Then get an account in WSO2 API Cloud and protect the API using OAuth. Here is the tutorial on how to do it.

Now you have the API in a public portal.

3) Now contact the WSO2 Cloud team by email to get an IP address, and add a firewall rule that allows this IP to talk to your service. The service itself still needs to be protected with a username/password to prevent unauthorised access; otherwise, an unauthorised party could bypass the gateway and call the service directly.

Now you have implemented the architecture mentioned above.

Tuesday, August 18, 2015

Nginx for URL hash based routing

I wanted Nginx to do sticky-URL routing. A good way to do this is to take a hash of the URL and use it to route requests among the available nodes.

There are several ways to do sticky-URL routing. Here is the load balancer configuration that I have tested. It uses Lua, which is a very capable language.

  upstream wso2servers {
    hash $urlpart consistent;
    server backend1.example.com;  # backend hosts are placeholders
    server backend2.example.com;
  }

  server {
    listen 8080;
    server_name router.wso2;
    location / {
       # Extract the "/t/<tenant>/" part of the URI (requires the lua-nginx-module)
       set_by_lua $urlpart "return string.match(ngx.arg[1],\"/t/.-/\")" $request_uri;
       proxy_set_header X-Forwarded-Host $host;
       proxy_set_header X-Forwarded-Server $host;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header Host $http_host;
       proxy_read_timeout 5m;
       proxy_send_timeout 5m;
       proxy_pass http://wso2servers;
    }
  }
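To make the routing logic concrete, here is a small Python sketch of what the configuration above does: extract the "/t/<tenant>/" part of the URI (the equivalent of the Lua `string.match(uri, "/t/.-/")`) and hash it to pick a backend. The node names are placeholders:

```python
import hashlib
import re

NODES = ["wso2-node-1", "wso2-node-2"]  # placeholder backend names

def url_part(request_uri: str) -> str:
    """Grab the shortest "/t/<tenant>/" segment of the URI, or "" if absent
    (a Python equivalent of the Lua pattern "/t/.-/")."""
    match = re.search(r"/t/.*?/", request_uri)
    return match.group(0) if match else ""

def pick_node(request_uri: str) -> str:
    """Hash the tenant part of the URL so every request for the same
    tenant lands on the same backend (what `hash $urlpart` achieves)."""
    key = url_part(request_uri)
    # md5 gives a stable hash across processes, unlike Python's hash().
    digest = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return NODES[digest % len(NODES)]

# All URLs for tenant "acme" map to the same node:
assert pick_node("/t/acme/orders") == pick_node("/t/acme/invoices?id=1")
```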

Now, the more interesting question: why would someone need sticky URLs? There can be many reasons, but mine was to achieve the most basic form of tenant partitioning. Tenant partitioning is explained here. As long as a string representing the tenant is present in the URL, we can do tenant-aware load balancing to achieve tenant partitioning.

Thursday, August 6, 2015

Tenant Aware Load Balancing

Cloud service providers develop their software as SaaS applications to support multiple tenants within the same instance. In a public cloud scenario, thousands of tenants must be supported across a large number of clustered instances. In such a situation, tenant allocation per instance must be managed properly. It is not feasible to load all tenants into all clusters at random; this would drive up resource utilization. Tenants need to be properly partitioned into different clusters to achieve optimal results. The following diagram shows an un-managed deployment vs. a tenant-partitioned cluster.

Load balancing is a key function in a tenant-partitioned deployment, because each request must be routed to the correct cluster; hence the term "tenant-aware" load balancing. Extensive research has been done on tenant-aware load balancing, and the research paper [1] presents a reference architecture.
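As a sketch of how tenants could be partitioned across clusters, here is a minimal consistent-hash ring (the same idea behind the `hash ... consistent` directive in the Nginx post above): adding or removing a cluster only remaps the tenants near its points on the ring, rather than reshuffling everything. The cluster names are placeholders:

```python
import bisect
import hashlib

def _h(value: str) -> int:
    # Stable hash; Python's built-in hash() is randomized per process.
    return int(hashlib.md5(value.encode("utf-8")).hexdigest(), 16)

class TenantRing:
    """Minimal consistent-hash ring: each cluster gets several virtual
    points on the ring, and a tenant maps to the next point clockwise,
    so changing the cluster set only remaps nearby tenants."""

    def __init__(self, clusters, replicas=100):
        points = sorted((_h(f"{c}#{i}"), c)
                        for c in clusters for i in range(replicas))
        self._keys = [k for k, _ in points]
        self._clusters = [c for _, c in points]

    def cluster_for(self, tenant: str) -> str:
        idx = bisect.bisect(self._keys, _h(tenant)) % len(self._keys)
        return self._clusters[idx]

ring = TenantRing(["cluster-a", "cluster-b", "cluster-c"])
# The same tenant always resolves to the same cluster:
assert ring.cluster_for("acme") == ring.cluster_for("acme")
```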