Brainstorming Padium’s server architecture


Architecture kung fu

After spending years designing systems for equities trading, it’s refreshing to tackle a new challenge and explore some different technologies. For the Linc project, our digital cooking assistant (and, really, all of Padium’s projects), we want to focus on an auto-scaling, high-availability architecture that reduces ongoing maintenance costs, keeps customer outages to a minimum, and lets us rapidly ship new features to our customers.

As you would expect, all of the above requirements have well-known solutions and best practices, so the only thing left is to choose our favorite stacks and move forward.

In the beginning

Firstly, at the very edge of our system, we use HAProxy together with our custom auto-scaling application, ‘Mr Fantastic’. Mr Fantastic’s job is to detect incoming load and right-size our backend systems to handle it.
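The right-sizing decision at the heart of an auto-scaler like Mr Fantastic can be sketched in a few lines. This is a minimal illustration, not Mr Fantastic’s actual code; the function name, per-instance capacity model, and the min/max bounds are all assumptions:

```python
import math

def desired_instances(requests_per_sec: float,
                      capacity_per_instance: float,
                      min_instances: int = 2,
                      max_instances: int = 20) -> int:
    """Return how many backend instances the observed load calls for.

    Keeps at least `min_instances` running for high availability and caps
    growth at `max_instances` to bound cost. (Hypothetical policy values.)
    """
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(needed, max_instances))

# Light load: the high-availability floor wins.
print(desired_instances(50, 100))    # -> 2
# Heavy load: scale out, but respect the cap.
print(desired_instances(5000, 100))  # -> 20
```

A real implementation would feed this from HAProxy’s stats socket and smooth the input to avoid flapping, but the core decision is just this clamp.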

For the core of our systems, we chose Docker in swarm mode to deploy every set of services. The core itself is called ‘Galactus’ (way too much Marvel): a Java-based application that contains the required Padium business logic. It’s designed to speak many of the popular network protocols we’d like incoming clients to use, depending on the requirements of the data.
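A swarm deployment of a service like Galactus might look like the stack file below. This is a sketch under assumptions: the image name and port are hypothetical, and the replica count is illustrative.

```yaml
version: "3.8"
services:
  galactus:
    image: padium/galactus:latest    # hypothetical image name
    deploy:
      replicas: 3                    # illustrative; Mr Fantastic would adjust this
      update_config:
        parallelism: 1
        order: start-first           # rolling updates with no downtime
      restart_policy:
        condition: on-failure
    ports:
      - "8080:8080"
```

Deployed with `docker stack deploy -c stack.yml padium`, swarm keeps the declared replica count alive across the cluster, which is what gives us the high-availability baseline.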

The book of Revelations

For the persistence layer, the choice depends on the nature of the data: Apache Cassandra for unstructured data, and PostgreSQL for anything with ACID requirements (payments, for example). For plain vanilla file storage we expect to be heavy users of GlusterFS over XFS.
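Why ACID matters for payments is easiest to see in code. In the sketch below, `sqlite3` stands in for PostgreSQL so it runs anywhere; the table, account names, and `transfer` helper are all hypothetical, but the transaction pattern (both rows update or neither does) is exactly what we’d rely on Postgres for:

```python
import sqlite3

# sqlite3 stands in for PostgreSQL here; the transaction semantics are the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` between accounts atomically: both updates or neither."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            cur = conn.execute(
                "UPDATE accounts SET balance = balance - ? "
                "WHERE name = ? AND balance >= ?", (amount, src, amount))
            if cur.rowcount == 0:
                raise ValueError("insufficient funds")
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst))
    except ValueError:
        pass  # rollback already happened; balances are unchanged

transfer(conn, "alice", "bob", 60)   # succeeds
transfer(conn, "alice", "bob", 60)   # fails: only 40 left, so it rolls back
print(dict(conn.execute("SELECT name, balance FROM accounts")))
```

The second transfer fails partway through, yet no money is created or destroyed; that guarantee is much harder to get from an eventually consistent store like Cassandra.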

For any required caching, assuming the latency profiles can’t be met using Cassandra or PostgreSQL directly, we would simply set up a Redis cluster when the bottlenecks call for it.
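The usual way to bolt Redis onto an existing persistence layer is the cache-aside pattern. In this sketch a plain dict stands in for the Redis cluster so it runs without a server; with `redis-py` the `cache.get`/store calls would become `r.get` and `r.setex`. The key name and TTL are illustrative:

```python
import time

# A dict stands in for a Redis cluster so this sketch runs anywhere.
cache = {}
TTL_SECONDS = 60  # illustrative expiry

def expensive_query(key):
    """Placeholder for a slow Cassandra/PostgreSQL lookup."""
    return f"value-for-{key}"

def get_with_cache(key):
    entry = cache.get(key)
    if entry is not None:
        value, expires_at = entry
        if time.monotonic() < expires_at:
            return value          # cache hit: skip the database entirely
    value = expensive_query(key)  # cache miss: fall through to the database
    cache[key] = (value, time.monotonic() + TTL_SECONDS)
    return value

print(get_with_cache("recipe:42"))  # miss -> "value-for-recipe:42"
print(get_with_cache("recipe:42"))  # hit, served from cache
```

The appeal of cache-aside is that it can be added only where a bottleneck actually shows up, which matches the “when the bottlenecks call for it” approach above.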

Finally, since we expect to handle large amounts of data and analyze it heavily in order to provide more value to our customers, we will also run an Apache Spark cluster. The idea behind using Spark is to leverage its built-in machine learning libraries for many of our upcoming features.

January 30th, 2017 | Categories: Uncategorized
