Friday, December 16, 2016

APIs that Consume APIs - JWT Bearer Grant Type

The OAuth protocol has provision for designing custom grant types in addition to the four grant types mentioned in the specification. The SAML grant type is one of the first grant types that came out as an extension, and many IdPs support it. Many IdPs also allow writing plugins that enable other custom grant types. Among the most popular are,
  • Biometrics - Here biometrics such as your fingerprint or retina scan are used to obtain an access token. This is pretty useful for mobile apps.
  • JWT - Here a service that holds a JWT may use it to obtain an access token to access another API
Here is a scenario for the JWT grant type.

You are writing a service that allows employees to report the time they spend on customer work. Your service has received a JWT, so it can perform authorization checks. One of the methods in the service is as below,

public void reportTimeForCustomer(String clientId, String ticketId, int durationInMinutes) {}

First of all, authorization checks happen based on the JWT, and the next step is to validate clientId and ticketId before proceeding. There are a CustomerAPI and a TicketAPI that can perform these validations.


But both of these APIs are OAuth protected. How do you proceed? How would you obtain an access token to access them? You have a couple of options,
  • You can obtain an application access token (client credentials grant)
  • You can obtain an access token by presenting the JWT as mentioned above - see the sketch below
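A minimal sketch of the second option in Java, assuming the authorization server supports the RFC 7523 JWT bearer grant. The token endpoint URL and client credentials below are hypothetical placeholders.

 import java.net.URI;
 import java.net.URLEncoder;
 import java.net.http.HttpClient;
 import java.net.http.HttpRequest;
 import java.net.http.HttpResponse;
 import java.nio.charset.StandardCharsets;
 import java.util.Base64;

 public class JwtBearerTokenClient {

     // Hypothetical values - replace with your authorization server's details.
     private static final String TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token";
     private static final String CLIENT_ID = "timeReportingService";
     private static final String CLIENT_SECRET = "secret";

     public static String exchangeJwtForAccessToken(String jwtAssertion) throws Exception {
         // RFC 7523: the JWT is presented as the 'assertion' form parameter.
         String body = "grant_type=" + URLEncoder.encode(
                 "urn:ietf:params:oauth:grant-type:jwt-bearer", StandardCharsets.UTF_8)
                 + "&assertion=" + URLEncoder.encode(jwtAssertion, StandardCharsets.UTF_8);

         String basicAuth = Base64.getEncoder().encodeToString(
                 (CLIENT_ID + ":" + CLIENT_SECRET).getBytes(StandardCharsets.UTF_8));

         HttpRequest request = HttpRequest.newBuilder()
                 .uri(URI.create(TOKEN_ENDPOINT))
                 .header("Authorization", "Basic " + basicAuth)
                 .header("Content-Type", "application/x-www-form-urlencoded")
                 .POST(HttpRequest.BodyPublishers.ofString(body))
                 .build();

         HttpResponse<String> response = HttpClient.newHttpClient()
                 .send(request, HttpResponse.BodyHandlers.ofString());

         // The response body is a JSON document carrying the access_token field.
         return response.body();
     }
 }

The service can then call the CustomerAPI and TicketAPI with the returned access token in an Authorization: Bearer header.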

Monday, December 5, 2016

Applications Accessing OAuth Protected APIs

When implementing applications that access OAuth2.0 protected APIs, there are a few recurrent questions asked by different parties. Before diving into those questions, let's brush up on the four different OAuth2.0 grant types and their usages.
  • Authorization Code Grant - Typically used by 3rd party web applications. They can obtain an authorization code that can be used to get an access token.
  • Implicit Grant - Typically used by mobile phones and single page applications to obtain an access token.
  • Username/Password Grant - Typically used by trusted applications. For example, applications within the same organization as the API provider can use this approach to obtain an end-user access token.
  • Client Credential Grant - Used to get an application access token.

You can read more about these at https://tools.ietf.org/html/rfc6749

Now let's look at the common questions raised by application developers.

Q1 - My application must do SSO with SAML2.0. It should also access APIs without requiring the end users to log in again.

In addition to the standard grant types, OAuth2.0 gives the freedom to design and implement new ones. One of the first such extension grant types is the OAuth2.0 SAML2.0 bearer grant. It allows an application to obtain an access token by presenting a SAML token, so this answers Q1: the same SAML token obtained during SSO can be exchanged for an OAuth access token without prompting the user again.





Q2 - My web application should access 3rd party APIs as the end user. How can I obtain an access token to call the APIs?
When 3rd party APIs need to be accessed, redirect the end user to the 3rd party authorization server to get an authorization code. This means the user has to first authenticate to your app, and then authenticate to the 3rd party authorization server. Alternatively, you can obtain an application token using the client credentials grant type, but in that case the application is not accessing the APIs as the end user. A sketch of the authorization code flow follows.
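A minimal sketch of the authorization code flow in Java; the authorization server endpoints and client registration details below are hypothetical placeholders.

 import java.net.URI;
 import java.net.URLEncoder;
 import java.net.http.HttpClient;
 import java.net.http.HttpRequest;
 import java.net.http.HttpResponse;
 import java.nio.charset.StandardCharsets;

 public class AuthorizationCodeClient {

     // Hypothetical 3rd party authorization server and client registration.
     private static final String AUTHZ_ENDPOINT = "https://auth.example.com/oauth2/authorize";
     private static final String TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token";
     private static final String CLIENT_ID = "myWebApp";
     private static final String CLIENT_SECRET = "secret";
     private static final String REDIRECT_URI = "https://myapp.example.com/callback";

     // Step 1 - redirect the end user's browser to this URL.
     public static String buildAuthorizationUrl() {
         return AUTHZ_ENDPOINT + "?response_type=code"
                 + "&client_id=" + enc(CLIENT_ID)
                 + "&redirect_uri=" + enc(REDIRECT_URI);
     }

     // Step 2 - exchange the code delivered to the redirect URI for an access token.
     public static String exchangeCode(String code) throws Exception {
         String body = "grant_type=authorization_code"
                 + "&code=" + enc(code)
                 + "&redirect_uri=" + enc(REDIRECT_URI)
                 + "&client_id=" + enc(CLIENT_ID)
                 + "&client_secret=" + enc(CLIENT_SECRET);

         HttpRequest request = HttpRequest.newBuilder()
                 .uri(URI.create(TOKEN_ENDPOINT))
                 .header("Content-Type", "application/x-www-form-urlencoded")
                 .POST(HttpRequest.BodyPublishers.ofString(body))
                 .build();

         // The JSON response carries the access_token issued on behalf of the end user.
         return HttpClient.newHttpClient()
                 .send(request, HttpResponse.BodyHandlers.ofString())
                 .body();
     }

     private static String enc(String value) {
         return URLEncoder.encode(value, StandardCharsets.UTF_8);
     }
 }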

Q3 - I don't want my application to maintain multiple OAuth access tokens for the same API provider. Is this possible?

For the same provider you can access multiple APIs using the same OAuth2.0 token. Most API provider platforms support this out of the box; access to individual APIs is typically controlled through scopes attached to the token.



Wednesday, October 12, 2016

WSO2 APIM: Publishing APIs to External/Internal Parties

What if you want to have two API gateways - one for external facing APIs and the other for internal facing APIs? This could be achieved with two API Manager deployments, but from WSO2 APIM 2.0.0 onwards it is an inherent feature of the product, so there is no need for two deployments. The solution is based on the multi-gateway feature, which allows one publisher to push APIs to different gateway environments selectively; a sample configuration follows.
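Gateway environments are declared in the APIGateway section of api-manager.xml. A hedged sketch with placeholder hosts and credentials - verify the exact element names against your APIM version - could look like this,

 <APIGateway>
    <Environments>
       <Environment type="production" api-console="true">
          <Name>External Gateway</Name>
          <Description>Gateway exposed to outside parties</Description>
          <!-- Admin service URL and credentials of the gateway node -->
          <ServerURL>https://ext-gw.example.com:9443/services/</ServerURL>
          <Username>admin</Username>
          <Password>admin</Password>
          <!-- Endpoints that API consumers invoke -->
          <GatewayEndpoint>http://ext-gw.example.com:8280,https://ext-gw.example.com:8243</GatewayEndpoint>
       </Environment>
       <Environment type="production" api-console="true">
          <Name>Internal Gateway</Name>
          <Description>Gateway visible only to the internal network</Description>
          <ServerURL>https://int-gw.example.com:9443/services/</ServerURL>
          <Username>admin</Username>
          <Password>admin</Password>
          <GatewayEndpoint>http://int-gw.example.com:8280,https://int-gw.example.com:8243</GatewayEndpoint>
       </Environment>
    </Environments>
 </APIGateway>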





At the time of publishing an API, all available environments are listed in the publisher so that the API publisher can pick the correct one. In this case, the publisher will see both environments: external and internal.





This allows the publisher to push APIs to the external or internal gateway selectively. You can choose to expose an API on either the external or internal gateway, or on both gateways at the same time.

RBAC Store

In a real-world scenario, internal users should be able to see only internal APIs and external users should be able to see only external APIs. This can be achieved via user roles: roles can be defined for external and internal users. However, this is not always scalable and can be troublesome, as you have to assign users to specific roles.

Why would two stores be great?

What if we could deploy two stores, one for internal users and the other for external users? This would match the deployment expectation as well. The external store can be in the DMZ (or otherwise accessible to the outside world) and the internal store can be in the internal network only.


This has been added as a new feature to the APIM road-map.

What about OAuth Keys and Throttling?

OAuth keys work transparently across the underlying gateways/environments. Irrespective of whether an API is exposed on a single gateway or on multiple gateways, the traffic manager considers the cumulative number of API calls when enforcing throttling.


Friday, September 9, 2016

5 Top Technical Reasons to use WSO2 API Manager


Not Just a Gateway - WSO2 API Manager is more than an API gateway. It is a complete platform for API management, which includes an API gateway, security, a developer portal, a publisher portal, API lifecycle management, analytics and much more. You can read about its comprehensive feature list here.

Cloud and On-premise Options - WSO2 API Manager is available as a public cloud offering from here. Or it can be downloaded and installed on premise with different deployment options. Your organization can take an iterative approach: start small and grow into an API-centric organization.

Flexible Deployment - This is one of the strongest reasons to go for WSO2 API Manager. It offers flexible deployment patterns, as each organization has different network and governance policies. You can start WSO2 API Manager in different modes (profiles) to act as a gateway, developer portal or publisher portal. You can deploy different profiles in different network zones (e.g. the DMZ), adhering to the organization's network and deployment policies.

Part of a Platform - Like all WSO2 products, the API Manager is built on a comprehensive platform. That means whenever additional functionality is required, one can extend into areas such as security, integration, real-time analytics and mobile device management, and they all work together seamlessly with near-zero effort. What if you don't want any more WSO2 products? No problem, just jump to the next point.

Openness, Modularity, Extensibility - WSO2 API Manager is 100% open source and distributed under the Apache License 2.0. It is built on open standards such as OAuth 2.0 and has all of its functionality available as APIs. It is modular and extensible, allowing it to plug into external identity providers. All of this leads to zero vendor lock-in.



Sunday, September 4, 2016

An Iterative Approach to Transform your API Strategy using WSO2 APIM

An iterative approach is the choice for most IT-related projects today. In the same way, API management at your enterprise can follow an iterative approach that eventually leads to digital transformation.

If you have no problem using cloud services, just get a demo account and you are on your journey.

API management can be up and running within a few hours using WSO2 APIM. You can download the latest pack from http://wso2.com/api-management/, then deploy and configure it as a production-ready instance within a few hours. The deployment is as follows.




Pros
1 - The cost is for a single instance and you get 24*7 WSO2 production support
2 - Deployment is up and running within hours
3 - Minimal hardware/cloud infrastructure requirements (only one node)
4 - Suitable for starters

Cons
1 - No HA
2 - Not network friendly. Where are you going to run this instance? Not in the DMZ, as it needs database connectivity.
3 - The supported load really depends on your use-case

What if you want HA? This is the next level. You need high availability - the system should be up and running 99.99% of the time. Then you will need another node.





Pros
1 - The system is highly available
2 - The cost is for a single instance and you get 24*7 WSO2 production support
3 - Deployment is up and running within hours

Cons
1 - Not network friendly. Where are you going to run these instances? Not in the DMZ, as they need database connectivity.
2 - The supported load really depends on your use-case

What if your load grows? You can make the passive node a traffic-serving node. This means the production subscription changes from 1 instance to 2 instances.


Pros
1 - The system is highly available
2 - The cost is for two instances and you get 24*7 WSO2 production support
3 - Deployment is up and running within hours

Cons
1 - Not network friendly. Where are you going to run these instances? Not in the DMZ, as they need database connectivity.
2 - The supported load really depends on your use-case


What if you want to support more TPS, or complex throttling, or adhere to standard network patterns where untrusted connections are throttled out at the gateway itself? Then you need a distributed deployment, which looks like the one below. This deployment allows the different functional components of API management to scale independently - and these components scale in different proportions.



For this type of deployment you can get solution architecture help from the WSO2 team. This is an API empire that needs to be planned precisely.

Saturday, September 3, 2016

WSO2 ESB - Serving Multiple Modern Apps from a Slow Legacy System


The story of serving thousands of requests sent by multiple modern applications when the downstream system serves only 2 requests per second - basically, how to get more bang from the existing buck!


Scenario - The Vehicle Licensing Department of the Liliput Kingdom is going through digital transformation. As a step in this process, they are going to introduce multiple applications around their manual processes. The apps include,
  • A mobile app running on vehicle owners' devices to check their vehicle licenses
  • A customer service web app to check the status of vehicle licences
  • EverGreen - a third-party organization that checks vehicle emissions. It needs to get details of licenses and vehicles.
  • The vehicle license printing service, which needs to be automated
All the vehicle license information lives in a mainframe (MF), and this MF has one method for listing all the vehicle licenses. Calls to this method cannot exceed 2 per second. The goals of the architecture,
  • Implement a solution by retrieving records from the MF
  • Expose REST services for the mobile app, web apps and third parties
  • Expose a SOAP API that the printing service is capable of calling
  • Calls to the downstream system must not exceed 2 calls per second
  • Calls to the MF should be done over CICS

The solution proposed for the Liliput VLD is based on WSO2 ESB. It helped the VLD remove point-to-point connections and perform mediation and transformation in minimal time.






Highlights of the architecture,
  • Create an intermediate database between the MF and the new system - let's call it IDB
  • Periodically retrieve the delta from the MF and update the IDB (see the sketch after this list)
  • Expose the data in the IDB as a REST service
  • Integrate data using the ESB to create composite interfaces
  • Expose all APIs over APIM to be consumed by the applications. This provides throttling, security and API analytics
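A minimal sketch of the periodic delta retrieval in Java. The MainframeClient and IntermediateDb interfaces are hypothetical stand-ins for the real CICS client and the IDB access layer.

 import java.time.Instant;
 import java.util.List;
 import java.util.concurrent.Executors;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.TimeUnit;

 public class MainframeDeltaPoller {

     // Hypothetical abstractions over the CICS call and the IDB.
     interface MainframeClient { List<String> listLicensesChangedSince(Instant since); }
     interface IntermediateDb { void upsertLicenses(List<String> records); }

     private final MainframeClient mf;
     private final IntermediateDb idb;
     private volatile Instant lastSync = Instant.EPOCH;

     public MainframeDeltaPoller(MainframeClient mf, IntermediateDb idb) {
         this.mf = mf;
         this.idb = idb;
     }

     public void start() {
         // A single scheduled thread guarantees at most one in-flight MF call,
         // keeping the integration well under the 2-calls-per-second limit.
         ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
         scheduler.scheduleWithFixedDelay(() -> {
             Instant now = Instant.now();
             idb.upsertLicenses(mf.listLicensesChangedSince(lastSync));
             lastSync = now;
         }, 0, 5, TimeUnit.MINUTES);
     }
 }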

Thanks to this architecture, the Vehicle Licensing Department of the Liliput Kingdom achieved digital transformation much faster and at a lower cost.



Friday, August 26, 2016

Enterprise SOA Governance with WSO2 Greg - Evaluate, Direct, Control

SOA governance is an extension of IT governance. IT governance ensures that enterprise IT is in line with business strategy, architectural best practices, government regulations and laws, and contributes to the value-addition process of the organization. SOA governance ensures these factors for the SOA efforts in the organization. Governance is best implemented in an organization as a three-step procedure - evaluate, direct and control. This blog post shows how WSO2 Greg assists governing bodies in each of these steps.

Evaluate - The distributed nature of SOA means service data is scattered by hosted location, ownership and domain, making it a challenge to evaluate SOA efforts. The primary task of a governance product is to build a centralized database of SOA information. This is the basis of SOA governance. WSO2 Greg provides capabilities to build an asset store. It ships with a default service asset type that can be modified or used as-is, and building a service catalog using the registry is easy. However, if the service catalog is populated manually, it is hard to keep it in sync with the live services. Populating services can be automated, as WSO2 Greg has a REST API: any service hosting platform can publish service information by calling the WSO2 Greg APIs, as sketched below.
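A minimal sketch of such automation in Java. The resource path, payload shape and credentials below are hypothetical placeholders; check the G-Reg REST API documentation of your version for the exact contract.

 import java.net.URI;
 import java.net.http.HttpClient;
 import java.net.http.HttpRequest;
 import java.net.http.HttpResponse;
 import java.nio.charset.StandardCharsets;
 import java.util.Base64;

 public class GregServicePublisher {

     // Hypothetical G-Reg host and asset resource path.
     private static final String GREG_ENDPOINT =
             "https://greg.example.com:9443/governance/restservices";

     public static void publishService(String name, String version, String endpoint)
             throws Exception {
         // Hypothetical payload shape for a service asset.
         String json = String.format(
                 "{\"name\":\"%s\",\"version\":\"%s\",\"endpoint\":\"%s\"}",
                 name, version, endpoint);

         String basicAuth = Base64.getEncoder()
                 .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));

         HttpRequest request = HttpRequest.newBuilder()
                 .uri(URI.create(GREG_ENDPOINT))
                 .header("Authorization", "Basic " + basicAuth)
                 .header("Content-Type", "application/json")
                 .POST(HttpRequest.BodyPublishers.ofString(json))
                 .build();

         HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
     }
 }

A deployment hook in the service hosting platform can call publishService after each successful deployment, keeping the catalog in sync automatically.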







Direct - Guiding people and technology in SOA efforts requires a lot of information. High-level business decisions come from the business analytics/intelligence layer, which provides direct insight into what is going on in the business. Once the business goals are formed, next comes the SOA-based implementation strategy. Here governance can play a significant role, as it can provide information such as,
  • A service catalog with usage, interface and owner for each service
  • How much a service is being reused
  • Service usage and dependency graphs
  • Data to deduce service maturity - versions and creation times
  • Different versions and their life-cycle movements
  • Social features such as service ratings and comments
WSO2 Governance Registry is designed to provide all of this information,





Control - What needs to be controlled in an SOA effort? Some of the things that need to be controlled are service replication, services with bad APIs, bad code, the existence of unused services and so on.

How much control is too much control is a good question to ask of each control step the governance body is going to enforce. Any control step put in place that hinders the innovation of developers and devops is too much control. If a developer is not allowed to create a service without approval, that would hinder innovation. However, lifecycle management is an ideal place for enforcing control: it does not hinder innovation and still ensures the quality of the SOA architecture. For this purpose WSO2 Greg provides an extensible, customizable lifecycle management feature.




Monday, June 13, 2016

Continuous Integration for WSO2 Artifacts

This blog walks you through a set of best practices, guidelines, tips and tricks to set up an efficient software delivery lifecycle for the WSO2 middleware platform. It explains how to apply generic software development best practices such as managing configurations, continuous integration, continuous deployment and build-once-deploy-everywhere concepts to the WSO2 platform. The result is an efficient software delivery process that integrates into the organization's software development toolset and lifecycle.

The WSO2 middleware platform has a comprehensive development, deployment and delivery story. It ships with Developer Studio - an Eclipse-based graphical editor to create artifacts such as services and integrations, and to manage links and dependencies between the artifacts. When Developer Studio is integrated with continuous integration and deployment, the story becomes very comprehensive.


Development Time Best Practices


All development efforts universally go through at minimum two stages - a dev and a production environment. Ideally, in a large organization there are well-established environments on which artifacts need to be tested before going live. When the WSO2 platform is introduced into such an environment, it needs to plug into the existing software development lifecycle toolset seamlessly. An existing software delivery process can be as follows.





WSO2 artifacts can plug into the above CI seamlessly, provided that your project is structured properly.
Organizing your Project

Applications/artifacts on the WSO2 platform connect to systems and databases, which change depending on the environment. Mostly, we have observed that the following vary between environments. We are going to call these “external references” in the rest of the article.
  • Endpoints and credentials to external systems - In ESB artifacts
  • Database connection details - ESB artifacts, Services, Web apps and Data Services


These external references keep changing per environment, so how do we change the values between environments? What if the code embeds external references in the integration logic itself? That is a big NO. You would have to modify the deployable artifact at each stage, which is error-prone and a very primitive way of development. It is not at all deploy-friendly or CI/CD-friendly. The key is to separate the main logic from the external references: external references and configurations keep changing with the environment, but the main logic stays the same throughout.

Use Developer Studio for development and separate the main logic from environment-specific configurations. You can follow the steps here [1]. A sample workspace configured for the Dev/Test environments is as follows.
A separate Java project has been added to write tests for the CRM integration project. Any technology that can send HTTP calls would serve the purpose, including JMeter tests. The MainESBConfig project holds the main logic of the project. A minimal test could look like the sketch below.
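For example, a minimal JUnit test that exercises a hypothetical CRM endpoint in the dev environment could look like this,

 import static org.junit.Assert.assertEquals;

 import java.net.URI;
 import java.net.http.HttpClient;
 import java.net.http.HttpRequest;
 import java.net.http.HttpResponse;

 import org.junit.Test;

 public class CrmIntegrationTest {

     // Hypothetical dev-environment endpoint of the CRM integration flow.
     private static final String DEV_ENDPOINT = "http://dev-esb.example.com:8280/crm/contacts";

     @Test
     public void contactLookupReturnsOk() throws Exception {
         HttpRequest request = HttpRequest.newBuilder()
                 .uri(URI.create(DEV_ENDPOINT))
                 .GET()
                 .build();
         HttpResponse<String> response = HttpClient.newHttpClient()
                 .send(request, HttpResponse.BodyHandlers.ofString());
         // The mediation flow should answer with HTTP 200 when the backend is reachable.
         assertEquals(200, response.statusCode());
     }
 }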

Setting up CI/CD for Environments

When a project workspace is organized as above, it can be configured in a CI system to do continuous build and deployment. The typical steps of a CI include,
  • Build the CRM integration project each time a commit happens
  • Deploy the artifacts each time the build is successful
  • Run tests against the development environment
  • Then, periodically at a predefined time, deploy the CRM integration project to the PreProd environment if all the tests are successful

When all of the above actions are configured in a CI, they create a build pipeline that delivers software to the PreProd stage. Let’s see how to configure that using Jenkins.

Step 1 - Configure CI System to build the CRM integration project each time a commit happens

Deploying artifacts can be handled by the maven-car-deploy plugin. It can handle the following artifacts successfully.
  • ESB artifacts
  • AS artifacts
  • DSS artifacts
  • DAS artifacts
  • BPS artifacts

Create a CAR file per environment and add the maven-car-deploy plugin to the pom.xml of each CAR project. By default, Developer Studio adds the maven-car-deploy plugin to every CAR project.

For example, the dev and test environment CAR projects each carry the CAR deployer plugin configuration in their pom.xml, as sketched below.
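A hedged sketch of that plugin configuration with placeholder server details - the element names follow what Developer Studio generates, so verify them against your generated pom,

 <plugin>
    <groupId>org.wso2.maven</groupId>
    <artifactId>maven-car-deploy-plugin</artifactId>
    <!-- Use the version generated by Developer Studio -->
    <extensions>true</extensions>
    <configuration>
       <carbonServers>
          <CarbonServer>
             <trustStorePath>${basedir}/src/main/resources/security/wso2carbon.jks</trustStorePath>
             <trustStorePassword>wso2carbon</trustStorePassword>
             <trustStoreType>JKS</trustStoreType>
             <!-- Admin service URL and credentials of the dev ESB node -->
             <serverUrl>https://dev-esb.example.com:9443</serverUrl>
             <userName>admin</userName>
             <password>admin</password>
             <operation>deploy</operation>
          </CarbonServer>
       </carbonServers>
    </configuration>
 </plugin>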



Now configure the dev CAR file to be deployed to the dev environment using the CI system. First, build the Maven multi-module project each time a developer commits.



Step 2 - Configure CI to deploy the artifacts each time the build is successful.


When deploying BPS artifacts such as BPEL processes, a redeployment deletes all of the old process instances. If the preferred behavior is that old instances keep using the old BPEL process while newly initiated instances use the new one, then the BPS artifacts need to be deployed as a *.zip archive separately. This can be done using the Ant scp task inside a maven-antrun-plugin, as sketched below.
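A hedged sketch of that approach, with a hypothetical archive name and BPS host. The Ant scp task additionally requires the ant-jsch dependency,

 <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <executions>
       <execution>
          <phase>install</phase>
          <goals><goal>run</goal></goals>
          <configuration>
             <target>
                <!-- Copy the BPEL archive straight into the BPS deployment directory -->
                <scp file="${project.build.directory}/CRMProcess.zip"
                     todir="wso2user@bps.example.com:/opt/wso2bps/repository/deployment/server/bpel"
                     keyfile="${user.home}/.ssh/id_rsa" trust="true"/>
             </target>
          </configuration>
       </execution>
    </executions>
    <dependencies>
       <dependency>
          <groupId>org.apache.ant</groupId>
          <artifactId>ant-jsch</artifactId>
          <version>1.9.7</version>
       </dependency>
    </dependencies>
 </plugin>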

Step 3 - Configure CI to run the tests against the development environment

Configure the automated tests as a downstream build job of the development build job defined above. Each time a deployment happens, the tests run against it.



Step 4 - Configure CI to deploy the CRM integration project to the PreProd environment periodically at a predefined time, if all the tests are successful

If the above build is stable, then move the artifacts to the PreProd environment by adding a periodic build that runs only if the CRMTest build job is stable. A build is stable only if all of its tests are passing.



Conclusion

It is possible to set up CI/CD for WSO2 platform artifacts very easily and create an efficient software delivery process with development best practices. In this way you can automate deployment to production as well, but the rollback procedure that must kick in if something goes wrong has to be planned along with it.

Friday, April 8, 2016

SaaS Developer Guide - Part 3 - Achieving Economy of Scale

How did Cloud pick up an age-old term such as “economy of scale”? Well, it fits perfectly with the scenario Cloud service providers are trying to achieve. Achieving economy of scale is all about doing a super-optimized deployment where the addition of users/organizations causes minimal or zero incremental cost to the SaaS provider. This installment of the SaaS Developer Guide blog series introduces a list of technologies that are available to realize economy of scale for SaaS.

Why is "economy of scale" so important?

Let's start small. As the Cloud Learner SaaS provider, imagine the initial deployment is done. What if it can only support one lecturer/institute? That is not going to be useful. What if a single deployment of 1000 nodes is put in place to support everybody? That is not very useful either. The deployment should elastically scale during peak hours of use, and we should be able to support the maximum number of users with the minimal amount of hardware resources. That is how we achieve economy of scale.

There is a list of technologies available for SaaS providers to do a deployment that serves thousands of users based on different levels of resource sharing. This is a quotation from IBM developerWorks that explains resource sharing and isolation.

"The more resources that are shared, the higher the density. Higher density lowers the provider's costs. At the same time, increased sharing reduces the level of isolation between tenants — the individual systems or services that are being delivered. Isolation is the degree to which one tenant can affect the activity and data of other tenants."[1]

This blog will concentrate on two of the most popular resource sharing mechanisms.

Containers

What is the difference between VMs and containers? VMs run on virtualized hardware while containers share the host operating system. During recent years, virtualization technologies have evolved massively, and containerization is one technology that took a leap during the past couple of years. Because containers share the operating system instead of emulating hardware, it is possible to run five to six times more servers with containers than with VMs. In a container-based SaaS deployment, each subscriber gets a single instance of the SaaS application running in a container. This helps ensure the SLA to the end user, as we can guarantee CPU for each running instance. Docker is the leading lightweight container technology today due to the following reasons.
  • Easy to use compared to other container technologies
  • Docker’s libcontainer is the de-facto standard for Linux containers. It is open source.
  • Supported by Google and Redhat
  • Docker can be used to pack and ship software
Modern PaaSes such as Kubernetes, GCloud or Cloud Foundry can run Docker images. The world is moving to containers as we pass the technology-hype curve for Docker. Docker can be very effective, as a tenant's container can be spun up on the first request, killed after an idle period, and the same infrastructure resources used over and over again to serve thousands of tenants.

Multi-tenancy

The highest form of density is multi-tenancy: the same application instance is used by several tenants. In this mode isolation is minimal, and the activity of one tenant can affect others. This is a good option for SaaS if the user activity on the application is predictable and does not need heavy processing. There are several tenancy models.
  • Each tenant has its own database
  • Each tenant has a schema in the database
  • Each tenant is distinguished by an ID in a shared schema
Some SaaS providers use the multi-tenant mode to cater to the demo period with a cut-down set of features. With careful monitoring and distribution of tenants it can provide a large economy of scale, especially if tenants share the same schema and the tenant ID distinctly identifies which data belongs to which tenant, as in the sketch below.
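A minimal sketch of the shared-schema model in Java, assuming a hypothetical courses table that carries a tenant_id column; every query filters on the tenant ID so one tenant's data never leaks to another.

 import java.sql.Connection;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;

 public class CourseDao {

     private final Connection connection;

     public CourseDao(Connection connection) {
         this.connection = connection;
     }

     // Shared schema: every row carries tenant_id, and every query filters on it.
     public ResultSet listCourses(long tenantId) throws SQLException {
         PreparedStatement stmt = connection.prepareStatement(
                 "SELECT id, title FROM courses WHERE tenant_id = ?");
         stmt.setLong(1, tenantId);
         return stmt.executeQuery();
     }
 }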

Cloud Learner

Applications can support both multi-tenancy and container-based deployment. The Cloud Learner application is implemented using this hybrid approach, which allows it to be deployed in different modes depending on aspects such as subscription tiers. For example, the Platinum subscription tier can have an instance of the Cloud Learner application to itself, deployed on containers, while demo-mode tenants share the same running instance.
 
In a future post we'll look at how the dockerized Cloud Learner application is deployed in GCloud.


Sunday, April 3, 2016

WSO2 Identity Server - Supporting Binary Claims such as Windows SID and objectGUID

Configuring a binary attribute (such as the Windows SID or objectGUID) as a claim in WSO2 Identity Server and sending it over an XML token such as SAML requires additional configuration.

Problem Identification

 [2016-03-17 16:48:48,203] @nextlabs.com [1] [IS]ERROR {org.wso2.carbon.identity.sso.saml.processors.SPInitSSOAuthnRequestProcessor} - Error processing the authentication request  
 org.wso2.carbon.identity.base.IdentityException: Error Serializing the SAML Response  
     at org.wso2.carbon.identity.base.IdentityException.error(IdentityException.java:162)  
     at org.wso2.carbon.identity.sso.saml.util.SAMLSSOUtil.marshall(SAMLSSOUtil.java:352)  
     at org.wso2.carbon.identity.sso.saml.processors.SPInitSSOAuthnRequestProcessor.process(SPInitSSOAuthnRequestProcessor.java:161)  
     at org.wso2.carbon.identity.sso.saml.SAMLSSOService.authenticate(SAMLSSOService.java:164)  
     at org.wso2.carbon.identity.sso.saml.servlet.SAMLSSOProviderServlet.handleAuthenticationReponseFromFramework(SAMLSSOProviderServlet.java:691)  
 .....  
 .....  
 .....  
 Caused by: org.w3c.dom.ls.LSException: The character '☼' is an invalid XML character  
     at org.apache.xml.serialize.DOMSerializerImpl.write(Unknown Source)  
     at org.wso2.carbon.identity.sso.saml.util.SAMLSSOUtil.marshall(SAMLSSOUtil.java:348)  
     ... 55 more  
 Caused by: java.io.IOException: The character '☼' is an invalid XML character  
     at org.apache.xml.serialize.BaseMarkupSerializer.fatalError(Unknown Source)  
     at org.apache.xml.serialize.BaseMarkupSerializer.surrogates(Unknown Source)  


Solution

Add the following property in <carbon_home>/repository/conf/user-mgt.xml, under the LDAP user store manager configuration. List each binary attribute that you wish to send over XML; multiple attribute names are separated by spaces.

 <Property name="java.naming.ldap.attributes.binary">objectGUID</Property>  

Saturday, April 2, 2016

SaaS Developer Guide - Part 2 - Scalability and Resiliency

This blog post is divided into two sections,
  • Background - Touches on definitions and calculations
  • Current Technology and Execution - Available technology, their features and Cloud Learner deployment
Background 

Distributed systems have been around in the software world for quite a while - since the 1970s. They have been evolving, getting new terms, and becoming a key ingredient of Cloud-based systems. Formally, a distributed system is defined as a system consisting of components communicating over a network. Today all highly scalable SaaS systems are designed as a horizontally scaling "set of services". A service is an evolution of a distributed component: if a well-defined interface is given to a component in a distributed system, it becomes a service.

Why Services?
  • Services can be developed, deployed and upgraded independently
  • Services can be scaled and made highly available independently - more replication for important services, more nodes for services with higher load
  • Different services can use different technologies
  • Strong boundaries enforce modularity

In the Cloud Learner application, there are going to be several services,
  • Class CRUD service
  • UserMgt service
  • Subscription service
  • Course CRUD service
  • Front end component


Load balanced Nodes

When the load goes high on the "Class CRUD service", we can run two instances of it to serve the requests. The load balancer will distribute the requests between these nodes.



"Elasticity" or "auto-scaling", is the primary scalable ingredient of  a SaaS. It means the system scales up as the load goes high and it scales down as the load becomes low. This allows a SaaS to exploit the full potential of the underlying IaaS/PaaS, which has the ability to provision infrastructure on-demand. Elasticity leads to optimized resource consumption that results in economy of scale. We'll be discussing about "economy of scale" in length in a future blog.

Resiliency and scalability go hand in hand in a large distributed system. For example, one of the primary methods of providing resiliency is detecting a faulty node, removing it from the cluster, and creating a replacement instance. If one instance of component "A" goes down, the capacity of that module degrades from n to n-1 nodes, where n is the number of nodes in the module, but the system will continue to function. This is a key feature of a resilient architecture.

              Remaining capacity of a cluster = (n - f)/n * 100%
       where,
              n is the total number of nodes
              f is the number of faulty nodes

When n is large, the impact of losing a single node is a small performance hit; for example, with n = 10 and f = 1, the cluster still retains 90% of its capacity. But what happens when n is small? For example, if the number of nodes is 2, then 1 node malfunctioning removes half the capacity - a critical factor.




Everybody starts small. So as a starting SaaS provider with a cluster of 2 nodes that are fully utilized by the load, adding an additional node in active mode gives an n+1 resiliency factor. For the Cloud Learner application, only one node is enough to manage subscriptions, but another node is added for resilience.




Current Technology and Execution

Once the load is predicted and tested and the calculations are done, it is important to glance at the current cloud landscape to understand the "execution" aspects of achieving scalability and resilience. The line between PaaS and IaaS is blurring as IaaS providers keep adding PaaS-level features. The following public cloud features can be leveraged for scalability and resiliency.
  • Load balancing
  • Health check
  • Rule based auto scaling
  • Rule based routing
  • Availability zones 
For example, look at the functionalities provided by GCloud for auto-scaling.

Now that is the difference of "going cloud all the way". The sample application can be deployed with scalability and resilience on a PaaS as follows. There are two FE components, and all nodes have an n+1 resiliency factor. Here the ClassCRUD service is deployed separately because its load factor is higher than that of the other services. The ClassCRUD service is the most important service in the Cloud Learner application, as it serves 90% of the user actions. Therefore it has 3 nodes.