An Erlang server design proof
I had an idea for a game server in which authentication and authorization are accomplished with SSH pubkeys. SSH is a standard, proven, secure technology, and many people already have an SSH keypair, which can identify a user uniquely and enable secure communications. Using a pubkey for auth gives us these nice properties:
- User may be able to re-use an existing pubkey
- User can later safely use the same pubkey for other games by other authors
- No password to store, hash, validate, change via e-mail confirmation, etc.
The tarball linked above is a working proof, and there's actually nothing game-specific about it other than that it's made for a thick client to use (not for a browser).
- Summary of design
- Server: persistence layer: mnesia (but not committedly so)
- Server: transport layer: ssh
- Server and client wire protocol: asn.1
- Client: single portable binary (escript)
Persistence layer
The gs_data app houses the persistence layer. The API is exposed through the gs_data_client module, which makes gen_server calls to a driver module for whatever database is configured.
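As a rough sketch (the real module is in the tarball; the registered-name dispatch shown here is an assumption), the client-facing API might look like this:

%% Sketch of the persistence API: calls are forwarded to whichever driver
%% is configured, assumed here to be a locally registered gen_server.
-module(gs_data_client_sketch).
-export([set_username/2, add_score/2, is_user_valid/2]).

driver() ->
    %% e.g. driver_mnesia; supporting a new database means writing one new driver module
    application:get_env(gs_data, driver, driver_mnesia).

set_username(UserId, Name) ->
    gen_server:call(driver(), {set_username, UserId, Name}).

add_score(UserId, Value) ->
    gen_server:call(driver(), {add_score, UserId, Value}).

is_user_valid(UserId, SshPubkey) ->
    gen_server:call(driver(), {is_user_valid, UserId, SshPubkey}).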
Only driver_mnesia is currently implemented. If you don't know about mnesia, it's a surprisingly good distributed key-value store that ships with the Erlang runtime. It offers transactions, secondary indexes, and a bevy of other features you would expect from a production-ready database system. It is sometimes dismissed, I think, because it stores only Erlang terms and can only be accessed via Erlang code, and so you'll never see, for example, Python and Ruby bindings. If those are limitations you can deal with, you'll probably find Mnesia more than capable for your app.
If not, the use of a driver architecture means you can implement a new database driver in a single file.
A migration-ready schema
A call is available, driver_mnesia:migrate_to_latest_schema/0, that future-proofs the schema in the case of, say, deleting a field. By storing the current schema version in a schema_version table, it's easy to bring the schema up to date from any previous version.
Rails users are used to the idea behind migrations, since that framework includes migrations as a first-class deployment action. I've been surprised by how many devs are unfamiliar with the idea or resist it entirely.
migrate_to_latest_schema() ->
    Replicas = application:get_env(gs_data, replicas, [node()]),
    case mnesia:create_table(schema_version, [{attributes, [key, vsn]},
                                              {disc_copies, Replicas}]) of
        {atomic, ok} ->
            {atomic, ok} = mnesia:create_table(user, [
                {attributes, [id, ssh_pubkey, username]},
                {disc_copies, Replicas}
            ]),
            {atomic, ok} = mnesia:create_table(score, [
                {attributes, [userplusdate, date, value]},
                {disc_copies, Replicas},
                {index, [date, value]}
            ]),
            {atomic, _} = mnesia:transaction(fun() ->
                ok = mnesia:write({schema_version, global, 1})
            end);
        {aborted, {already_exists, _}} ->
            ok
    end.
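To illustrate the future-proofing, here's what a hypothetical version-2 migration might look like (not in the tarball), say if the username field were dropped from user:

%% Hypothetical v1 -> v2 migration: drop the username field from user.
%% Sketch only; the tarball currently defines just schema version 1.
maybe_migrate_to_v2() ->
    {atomic, [{schema_version, global, Vsn}]} =
        mnesia:transaction(fun() -> mnesia:read(schema_version, global) end),
    case Vsn of
        1 ->
            {atomic, ok} = mnesia:transform_table(user,
                fun({user, Id, SshPubkey, _Username}) -> {user, Id, SshPubkey} end,
                [id, ssh_pubkey]),
            {atomic, ok} = mnesia:transaction(fun() ->
                mnesia:write({schema_version, global, 2})
            end);
        _ ->
            ok
    end.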
Beyond the schema_version table, our toy game server has only two tables: user (UUID key and fields for username and SSH pubkey) and score ({UUID,timestamp} key and a field for the game score).
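In record form, the table attributes above imply shapes like these (the actual header in the tarball may differ in detail):

%% Record shapes implied by the table definitions above.
-record(user,  {id, ssh_pubkey, username}).      %% id is the UUID
-record(score, {userplusdate, date, value}).     %% userplusdate is {UUID, timestamp}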
Transport layer
Erlang's ssh lib makes it very easy to implement a server that speaks over SSH. While there are some shortcuts that allow you to quickly create an interactive shell, it seemed more appropriate here to go a little further and implement an SSH subsystem (a la sftp). This requires implementing the ssh_daemon_channel behaviour, which gives us handle_ssh_msg/2. In this toy app, the messages go only from client to server. To send replies back to the client, the client would need to implement the ssh_channel behaviour.
-module(gs_ssh_daemon_channel).
-behaviour(ssh_daemon_channel).

-include("../../gs_proto/include/Protocol.hrl").

-export([init/1, terminate/2, handle_msg/2, handle_ssh_msg/2]).

init(_Args) ->
    State = [],
    {ok, State}.

terminate(_Reason, _State) ->
    ok.

handle_msg(Msg, State) ->
    io:format("Our channel subsystem got a message: ~p~n", [Msg]),
    {ok, State}.

handle_ssh_msg({ssh_cm, Conn, {data, _, _, SshMsg}}, State) ->
    io:format("Our channel subsystem got an _ssh_ message: ~p~n", [SshMsg]),
    {ok, Req} = 'Protocol':decode('Request', SshMsg),
    io:format("Decoded: ~p~n", [Req]),
    [{user, UserIdStr}] = ssh:connection_info(Conn, [user]),
    UserId = key_handler:uuidstr_to_binary(UserIdStr),
    handle_req(Req, UserId),
    {ok, State}.

handle_req({setUsername, Obj}, UserId) ->
    io:format("Handling set user name message ~p for user ~p~n", [Obj, UserId]),
    gs_data_client:set_username(UserId, Obj#'Request_setUsername'.name);
handle_req({addScore, Obj}, UserId) ->
    io:format("Handling add score message ~p for user ~p~n", [Obj, UserId]),
    gs_data_client:add_score(UserId, Obj#'Request_addScore'.value).
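The channel module by itself doesn't listen for anything; somewhere the daemon has to be started with the subsystem and key callback registered. A minimal sketch (the subsystem name, port, and system_dir are assumptions, not taken from the tarball):

%% Sketch of starting the SSH daemon with our subsystem registered.
start_listener() ->
    ssh:start(),
    {ok, _DaemonRef} = ssh:daemon(9022, [
        {subsystems, [{"gameserver", {gs_ssh_daemon_channel, []}}]},
        {key_cb, key_handler},              %% implements ssh_server_key_api (discussed next)
        {system_dir, "/home/gamesrv/ssh"}   %% the dedicated host key lives here
    ]).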
Erlang handles the key-exchange parts, and our daemon decides whether a user is authorized by implementing the ssh_server_key_api behaviour. This is powerful because it lets us implement an auth policy such as "Let's allow anyone to log in who provides any UUID and a valid SSH pubkey". In other words, a user who logs in with an unrecognized UUID/pubkey simply becomes a new user in the system. It's possible to implement a user blacklist or whitelist later, but until then the system is beautifully simple for users and developers.
By recording these credentials into mnesia (instead of the traditional authorized_keys file), we have the start of a system that scales beyond one machine.
is_auth_key(Key, User, _DaemonOptions) ->
    UserBin = uuidstr_to_binary(User),
    gs_data_client:is_user_valid(UserBin, Key).

The driver then authorizes known users (or unknown users, after saving them):
handle_call({is_user_valid, UserId, SshPubkey}, _From, State) ->
    {atomic, UserReply} = mnesia:transaction(fun() ->
        ExistingUserSet = mnesia:read(user, UserId),
        io:format("ExistingUserSet is ~p~n", [ExistingUserSet]),
        case ExistingUserSet of
            [ExistingUser] ->
                SshPubkey == ExistingUser#user.ssh_pubkey;
            [] ->
                NewUser = #user{id=UserId, ssh_pubkey=SshPubkey, username=""},
                ok = mnesia:write(NewUser),
                true
        end
    end),
    {reply, UserReply, State};
I mentioned a UUID. The client must provide an SSH username when it logs in. Traditionally this would be a Unix user on the target SSH server, but the flexibility of the APIs means that a user can be anything we want. A UUID is as good as anything; it only needs to be unique. On first login, the client generates a new UUID and stores it in ~/.gameconfig for subsequent logins. Don't lose your ~/.gameconfig!
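A sketch of that client-side bookkeeping (helper names and file format are assumptions, not lifted from the tarball):

%% Reuse the UUID stored in ~/.gameconfig, or create and save a new one.
load_or_create_uuid() ->
    Path = filename:join(os:getenv("HOME"), ".gameconfig"),
    case file:read_file(Path) of
        {ok, Bin} ->
            string:strip(binary_to_list(Bin), both, $\n);
        {error, enoent} ->
            Uuid = random_uuid_string(),
            ok = file:write_file(Path, Uuid),
            Uuid
    end.

%% 128 random bits formatted like a UUID (a real v4 UUID would also set the
%% version/variant bits; uniqueness is all we need here).
random_uuid_string() ->
    <<A:32, B:16, C:16, D:16, E:48>> = crypto:strong_rand_bytes(16),
    lists:flatten(io_lib:format("~8.16.0b-~4.16.0b-~4.16.0b-~4.16.0b-~12.16.0b",
                                [A, B, C, D, E])).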
Wire protocol
The subsystem API sends and receives an Erlang binary. For a simple protocol, it would have been enough to invent a text protocol (e.g. "SCORE 789"). However, since I'm trying to prove a scalable system, I opted for ASN.1 as a serialization framework, using BER encoding.
The toy protocol is simple:
Protocol DEFINITIONS ::= BEGIN
    Request ::= CHOICE {
        setUsername [0] SEQUENCE {
            name  [0] UTF8String
        },
        addScore    [1] SEQUENCE {
            value [0] INTEGER
        }
    }
END
I learned that a top-level CHOICE is the way to specify many message types, whose actual type is determined at decode time.
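A quick round trip through the generated module shows the CHOICE alternative coming back as a tagged tuple:

%% BER round trip; the decoded value is tagged with its CHOICE alternative.
{ok, Bin} = 'Protocol':encode('Request', {addScore, #'Request_addScore'{value = 789}}),
{ok, {addScore, _Score}} = 'Protocol':decode('Request', Bin).
%% _Score = #'Request_addScore'{value = 789}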
The protocol gets its own Erlang app, since it must be available to both client (at build-time) and server (at run-time).
Client
The toy client simply connects, sends a couple of test messages, and exits.
The client binary is not an OTP application, but rather a single escript binary. This is a good time to mention erlang.mk, the excellent build tool from Nine Nines. It's the only third-party lib I've used in this project, and it made escript-izing the client trivial.
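Roughly what the client does, sketched below (the host name and subsystem name are assumptions, load_or_create_uuid/0 is the hypothetical helper from earlier, and the Protocol.hrl records must be included):

%% Sketch of the escript entry point: connect as the UUID user, request the
%% subsystem, send one encoded Request, and exit.
main(_Args) ->
    ssh:start(),
    UserId = load_or_create_uuid(),                    %% from ~/.gameconfig
    {ok, Conn} = ssh:connect("game.example.com", 9022, [
        {user, UserId},
        {user_dir, filename:join(os:getenv("HOME"), ".ssh")},
        {silently_accept_hosts, true}                  %% host-key checking disabled (see deployment notes)
    ]),
    {ok, Chan} = ssh_connection:session_channel(Conn, 5000),
    success = ssh_connection:subsystem(Conn, Chan, "gameserver", 5000),
    {ok, Bin} = 'Protocol':encode('Request', {addScore, #'Request_addScore'{value = 789}}),
    ok = ssh_connection:send(Conn, Chan, Bin),
    ok = ssh:close(Conn).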
Clustered deployment
I deployed a server cluster on two small Gentoo nodes at Rackspace. A deployment script is in the tarball under gs_transport/deploy. Some deployment problems include:
- The stock 15.x Erlang failed to build the server, but 17.3 was available by unmasking.
- Distributed Erlang requires that hostnames in a cluster are resolvable to their communication address (10.x.x.x)
- Distributed Erlang requires all nodes to share a common ~/.erlang.cookie
- I wanted to leave the system sshd running on port 22, and run the game server as a nonprivileged user on port 9022.
- This requires generating a dedicated SSH host key. I ended up disabling host key checking in the client, but an alternative is to pregenerate the host key, hard-code the public half into the client and copy the private key to each server node. This would make security work in the other direction too: the client would know it is connecting to a valid server.
- .app files must be adjusted to have production configuration values
- Starting the whole app requires a phased approach since certain actions require that _all_ nodes be in a certain state.
- Mnesia data must be stored in a fixed directory
The deployment script works as a "poor man's" Chef or Puppet: expose it somewhere via HTTP ('python -m SimpleHTTPServer' is good for this), then run it on each node with 'curl http://10.1.2.3:8000/deploy | sh'.
The deploy script writes out some extra scripts for launching the app in phases. For example, 'mnesia:create_schema(['datanode@cloud-server-01','datanode@cloud-server-02'])' needs to be called on only one node of the cluster, and only after all nodes have been started and connected. So the 'startup0.sh' script gets run on each node to start the empty cluster, then 'startup1.sh' creates the schema. Likewise, starting the game server on an instance requires that mnesia is running on all instances, so those are later-phase scripts.
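Sketched as Erlang calls, my reconstruction of those phases looks like this (the authoritative split lives in the generated startup scripts):

%% Phase 0 -- every node: start an empty node (erl -sname datanode, shared cookie).

%% Phase 1 -- ONE node, once all nodes are up and connected:
create_cluster_schema() ->
    ok = mnesia:create_schema(['datanode@cloud-server-01',
                               'datanode@cloud-server-02']).

%% Phase 2 -- every node: start mnesia so replicated tables can be created.
start_mnesia() ->
    ok = mnesia:start().

%% Phase 3 -- every node, after mnesia is up everywhere: start the apps
%% (this is what the erl_call line below feeds to each node).
start_apps() ->
    ok = application:start(gs_data),
    ok = ssh:start(),
    ok = application:start(gs_transport).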
I struggled with how to execute code on an existing distributed cluster, then encountered erl_call. So, code can be executed like this:
echo 'application:start(gs_data),ssh:start(),application:start(gs_transport).' | erl_call -sname datanode -e
Conclusion
I spent a couple weeks of free-time coding sessions working on this proof, but I'm glad I saw it through. The system has a lot of nice properties:
- The use of standards-based technologies (SSH and ASN.1) means that the client could be implemented in any language, not just Erlang.
- Communications are secure.
- The system authorizes users with almost no effort from users or the developer.
- Compact: about 600 SLOC
- Scaling is free: mnesia is clustered, and the transport layer is stateless (provide N nodes; a user can connect to any one of them)
- Database migrations are built in
- Since new connections get new Erlang processes, all of your machine's cores will be used with no developer effort.
- Hot deployments are possible (upgrading an application is a first-class action in Erlang), although the shared database means that new code may have to grok both old and new database schemas until the schema upgrade can occur.
Thanks for reading!