I wanted Nginx to do sticky-URL routing. A good way to do that is to take a hash of (part of) the URL and use it to pick one of the available nodes, so requests for the same URL always land on the same node.
There are several ways to do sticky-URL routing. Below is the load balancer configuration I have tested with. It uses a small piece of Lua (a very capable language for this kind of thing) to build the hash key.
upstream wso2servers {
    # distribute requests by a consistent hash of the tenant part of the URL
    hash $urlpart consistent;
    server 192.168.1.2:9763;
    server 192.168.1.2:9764;
}

server {
    listen 8080;
    server_name router.wso2;

    location / {
        # extract the "/t/<tenant>/" segment of the request URI into $urlpart
        # (needs the lua-nginx-module); URIs without such a segment yield an
        # empty key and will all map to one backend
        set_by_lua $urlpart "return string.match(ngx.arg[1],\"/t/.-/\")" $request_uri;

        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_read_timeout 5m;
        proxy_send_timeout 5m;
        proxy_pass http://wso2servers;
    }
}
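To see what that set_by_lua line actually computes, here is a small standalone Lua sketch of the same string.match pattern; the sample URIs are invented for illustration:

-- Standalone illustration of the pattern used in set_by_lua above.
-- string.match(uri, "/t/.-/") returns the shortest "/t/<tenant>/" segment, or nil.
local uris = {
    "/t/foo.com/services/echo",   -- hypothetical tenant URI
    "/t/bar.org/carbon/admin",    -- hypothetical tenant URI
    "/services/version",          -- no tenant segment in the URI
}

for _, uri in ipairs(uris) do
    local tenant = string.match(uri, "/t/.-/")
    print(uri, "->", tenant or "(no match, empty hash key)")
end

Requests for the first URI always carry the key "/t/foo.com/", so the hash directive sends them to the same backend every time.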
Now, the more interesting question: why would someone need sticky URLs? There can be many reasons, but mine was to achieve the most basic form of tenant partitioning. Tenant partitioning is explained here. As long as a string representing the tenant is in the URL, we can do tenant-aware load balancing and thereby achieve tenant partitioning.
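To make the partitioning idea concrete, here is a toy Lua sketch that maps a tenant key to one of the two backends from the upstream block. It uses a simple hash-modulo scheme purely for illustration; nginx's "hash ... consistent" actually uses ketama consistent hashing, so real node assignments will differ, but the property is the same: the same tenant string always selects the same node.

-- Toy sketch only: shows how a tenant string pins requests to one node.
-- Not nginx's algorithm (the "consistent" parameter uses ketama hashing).
local nodes = { "192.168.1.2:9763", "192.168.1.2:9764" }  -- from the upstream block

local function pick_node(tenant_key)
    local h = 0
    for i = 1, #tenant_key do
        h = (h * 31 + string.byte(tenant_key, i)) % 2^31
    end
    return nodes[(h % #nodes) + 1]
end

print(pick_node("/t/foo.com/"))  -- every request for foo.com gets this node
print(pick_node("/t/bar.org/"))  -- bar.org may map to the other node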