Because Linux can only open a certain number of files
If you’ve been around the tech industry, you may have heard someone mention the term “scale.” Even outside of tech, you may have observed businesses “scaling” their production methods to meet demand. In essence, scaling means increasing your means of production; in tech, it means increasing your capacity as a service provider. There are two common approaches to scaling in tech: vertical scaling and horizontal scaling. In this post, I’m going to talk about horizontal scaling and how it can be applied to a service that operates over websockets. For reference, horizontal scaling involves adding additional servers (or nodes) to increase capacity.
While I was developing multiplayer games, my preferred pattern for handling communication was to have the client send only input to the server. The server would then be responsible for controlling the client’s state, which guaranteed that all players saw the exact same thing. Another way to frame this: I let the server decide the client’s state. For this post, the client will be the websocket server, and the server controlling state will be dBus. Here is a diagram to better illustrate this:
Let’s say I wanted to broadcast a message to all the websocket connections on server 1. I’d push the message to my IPC (in this case, dBus) and wait for dBus to dispatch the actual broadcast on server 1. Now to implement this with code.
For this post, I’ll build an echo server. The server will relay messages to all connected clients, including websocket connections on other server instances. My makeshift message broker will be dBus. I’ll use package github.com/gorilla/websocket to implement the websocket layer, and package github.com/godbus/dbus/v5 to access dBus. To start, I’ll define a type that will house a slice of websocket connections and the connection to dBus. Here is the struct definition:
Next, I’ll define the global variables. upgrader will initiate websocket connections, and AppID will be used for dBus communications. Here are my global variables:
Next, I’ll define a method on type App that broadcasts a message to all websocket connections on the server. I’m starting with this method because it does not depend on other functions or methods to operate. Here is its definition:
Next, I’ll implement the method that listens for messages from dBus. This method is also responsible for setting up the connection to dBus. Once data is received from dBus, the method broadcasts the message to all websocket connections on the server instance. This method is called listen; here is its definition:
The last method I’ll implement for type App is the HTTP handler. This handler is responsible for establishing websocket connections, storing each connection, and relaying incoming messages to dBus. This method is called handler. Here is its definition:
Now that I’ve defined all my components, I’ll wire them together. Method listen will be run on a goroutine, and I’ll mount the handler at the endpoint path sockettome. Here is the code representing my implementation:
Here is the code in action :
In a production environment, dBus may not be the right fit; other message brokers will perform much better. dBus is limited in the sense that all server instances must run on the same physical host. For this post, I use dBus only to simulate a message broker.
My theory is this: a central service (the message broker) should dictate what each individual server instance writes to its websocket connections, and clients should tell the central service what to broadcast. In my opinion, this removes the need to manage state at the instance level, since the central service handles it. There is a link to the code used in this post below.