0. Set environment-dependent configuration.
As the software development lifecycle includes such steps as QA, production deployment, and development itself, we should have a separate environment for each of these steps. And your application's configuration parameters may vary depending on the particular environment you run your software in. Take a database connection: the database credentials are definitely different for the production and QA platforms. This is the case for toSeemo.org: there we have a database, and we also want to keep the port for the Cowboy web server in our configuration files. Here is part of our configuration:

{env, [
  {cowboy, [{port, 8888}]},
  {mongo, [{host, "127.0.0.1"}, {port, 27017}, {dbname, toseemo}]}
]}

Initially this is part of the toseemo.app.src file, and all of these values can be read with the application:get_env/1,2,3 functions. But it's not a good idea to keep these parameters in the code, as every config change will lead to the application being recompiled. That's not really flexible. Fortunately, Erlang allows storing these parameters separately in config files and passing such a config to the erl command line application as a flag:
$ erl -config PATH_TO_YOUR_CONFIG_FILE

This config file has a specific format; in the general case it looks like this:
[{Application1, [{Par11, Val11}, ..]},
 ..
 {ApplicationN, [{ParN1, ValN1}, ..]}].

And here is toseemo.config:
[
 {toseemo, [
   {cowboy, [{port, 8888}]},
   {mongo, [{host, "127.0.0.1"}, {port, 27017}, {dbname, toseemo}]}
 ]}
].

Now let's change rel/files/vm.args to make our releases use the config; the addition is the -config entry at the top:
## Environment vars config
-config /etc/toseemo/toseemo.config

## Name of the node
-name toseemo@127.0.0.1

## Cookie for distributed erlang
-setcookie toseemo

## Heartbeat management; auto-restarts VM if it dies or becomes unresponsive
## (Disabled by default..use with caution!)
##-heart

## Enable kernel poll and a few async threads
##+K true
##+A 5

## Increase number of concurrent ports/sockets
##-env ERL_MAX_PORTS 4096

## Tweak GC to run more often
##-env ERL_FULLSWEEP_AFTER 10

That's it. We just need to create the /etc/toseemo/toseemo.config file to make our releases use environment-specific configuration, which is still accessible with the application:get_env/1,2,3 functions. See the Erlang documentation on configuration files for more info.
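With the file in place, the values can be read at runtime just as before. A minimal sketch, run in an Erlang shell, assuming the toseemo application has been loaded with the config above:

```erlang
%% Fetch the cowboy section of toseemo's application environment.
{ok, CowboyOpts} = application:get_env(toseemo, cowboy),
%% Pick the port out of the property list; with the config above it is 8888.
Port = proplists:get_value(port, CowboyOpts).
```

Because the release was started with -config, these calls now return the values from /etc/toseemo/toseemo.config rather than the ones baked into toseemo.app.src.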
1. Configuration for a distributed node.
As we are going to run the toseemo app on distributed nodes, our configuration should reflect that. This means that the node name and the security cookie should be defined. Both have defaults, but you might want a more secure cookie. And in the case of toseemo we added some "magic" to node naming just to automate deployment of the release a bit. We use DigitalOcean to host toseemo. Each toseemo node (a DO droplet) has two network interfaces: eth0 and eth1. We would like to use the IP address attached to eth1 for node naming, so our node names look like toseemo@10.X.Y.Z.

Setting such an "IP-based" name for an Erlang node is a pretty simple task if you start the Erlang VM from the console using the erl command:
$ erl -name toseemo@`ip addr list eth1 |grep "inet " |cut -d' ' -f6|cut -d/ -f1`
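The backticked pipeline simply extracts the first IPv4 address of eth1 from the ip output. A small sketch of what it does, run against a captured sample line instead of a live interface (the address here is made up):

```shell
# A sample "inet" line as printed by `ip addr list eth1`.
LINE='    inet 10.0.0.5/24 brd 10.0.0.255 scope global eth1'

# grep keeps the IPv4 line; the first cut takes the 6th space-separated
# field (the four leading spaces count as empty fields), and the second
# cut strips the /24 prefix length.
IP=$(echo "$LINE" | grep "inet " | cut -d' ' -f6 | cut -d/ -f1)
echo "$IP"
# → 10.0.0.5
```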
But this will not work if you set it in vm.args: vm.args is not processed by a shell, so the backticks are never expanded. Our solution is the -eval argument. Here is our rel/files/vm.args; the changed entries are -name, -setcookie and the new -eval line:
## Environment vars config
-config /etc/toseemo/toseemo.config

## Name of the node
-name toseemo@`ip addr list eth1 |grep "inet " |cut -d' ' -f6|cut -d/ -f1`

## Cookie for distributed erlang
-setcookie super_duper_secured_cookie

## Name of the node
-eval "{ok,[{addr, Ip}]} = inet:ifget(\"eth1\", [addr]), IpStr = inet_parse:ntoa(Ip), net_kernel:start([list_to_atom(lists:concat(['toseemo@', IpStr])), longnames])."

## Heartbeat management; auto-restarts VM if it dies or becomes unresponsive
## (Disabled by default..use with caution!)
##-heart

## Enable kernel poll and a few async threads
##+K true
##+A 5

## Increase number of concurrent ports/sockets
##-env ERL_MAX_PORTS 4096

## Tweak GC to run more often
##-env ERL_FULLSWEEP_AFTER 10

All the "magic" here is done by the -eval code snippet. You can try running it in an Erlang console, but remember that the node must not already have a name:
{ok, [{addr, Ip}]} = inet:ifget("eth1", [addr]),
IpStr = inet_parse:ntoa(Ip),
net_kernel:start([list_to_atom(lists:concat(['toseemo@', IpStr])), longnames]).

Meanwhile, the -name attribute is also present in the config, and to make it work we need to change the application management script a bit (rel/files/YOUR_NODE.sh; in our case it's rel/files/toseemo.sh). Defining the -eval attribute is enough to start the release with the required node name, but it is not enough for other actions like attach, ping, getpid, stop, etc. That's why we left -name in our config. The only thing left to do is to change the management script so that it executes our "node-naming" command from the -name attribute. Just open rel/files/YOUR_NODE.sh and make the following changes:
Location in file | Old Version | New Version |
function ping_node() | $NODETOOL ping < /dev/null | eval $NODETOOL ping < /dev/null |
function get_pid() | PID=`$NODETOOL getpid < /dev/null` | PID=`eval $NODETOOL getpid < /dev/null` |
stop action handler | $NODETOOL stop | eval $NODETOOL stop |
Now we can start our nodes with dynamic names.
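Why eval? In the script, $NODETOOL expands to a command line whose -name argument still contains the backquoted pipeline from vm.args; plain expansion leaves those backticks literal, while eval re-parses the line so the substitution actually runs. A toy sketch (the command and address here are made up for illustration):

```shell
# A variable whose value contains a backquoted command, like the -name
# argument read from vm.args.
CMD='echo toseemo@`echo 10.0.0.5`'

# Plain expansion: the backticks stay literal.
$CMD
# → toseemo@`echo 10.0.0.5`

# eval re-parses the expanded line, so the inner command runs.
eval $CMD
# → toseemo@10.0.0.5
```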
2. Attach a node to the cluster.
By default, the rebar-generated release management script has no option to attach the node to a cluster, but it would be useful to have one. So let's teach our node to do it. Basically, we have almost everything we need. There is one more rebar-generated file, rel/files/nodetool, which can also be found in the erts folder after release generation. If you open it (just open it, we don't need to change it) you will see handling for the rpc command, so let's use it.

The first thing we need to do is add a small module to the application. In the case of toseemo it's toseemo_ctrl.erl:
-module(toseemo_ctrl).
-author("konstantin.shamko@gmail.com").

%% API
-export([connect_node/1]).

connect_node([Node]) ->
    net_kernel:connect_node(list_to_atom(Node)),
    ok.

We have just one function, which connects the current node to Node. The function should return the atom ok. In the general case, when you connect a node to another node already attached to a cluster, all the other cluster nodes will learn about their new friend: connections are transitive by default, so Node in the function above is just the long name of any node in the cluster.

On the next step, let's edit rel/files/YOUR_NODE.sh (in our case it's rel/files/toseemo.sh) again. We just need to add an option to attach the node to a cluster. Add this code to the main "case" of the script:
cluster_add)
    # Make sure a node is running
    ping_node
    ES=$?
    if [ "$ES" -ne 0 ]; then
        echo "Node is not running!"
        exit $ES
    fi
    eval $NODETOOL rpc toseemo_ctrl connect_node "$2"
    ;;

That's it. Just run the following command on your newly started node:
$ bin/toseemo cluster_add toseemo@10.X.Y.Z

That's all for this post. Next time we will talk about hot code swapping.