Entrepreneur's Toolkit: Tools and Technology for your Startup - Part 2

In part 1 of this series, we covered software for productivity, chat and collaboration, audio/video conferencing, and file sharing. Without further ado, let's continue.

Password Management

Keeping your company's data secure across the various SaaS tools and technologies you use is critical. At the heart of that is good password management. Some basic things you should be doing to protect yourself and your company:

  1. Ensuring that employees always use strong, unique passwords for each site

  2. Storing all passwords, whether they be for individual accounts (bound to a specific person), or shared accounts (the same account is used by multiple people in the company) in a password manager

  3. Using 2 Factor Authentication (2FA) wherever possible 

A password manager helps with the first 2 of those.

LastPass

LastPass Teams.png

Like all password managers, LastPass allows you to remember one password (ie your "last" password) and store everything else inside LastPass's secure password vault. This means you can have really long, complicated and unique passwords for each of your sites, and LastPass does the remembering so you don’t have to. It has browser plugins along with a mobile app that all sync together, so you can easily and securely access all of your passwords from wherever you are. LastPass's Teams version allows individuals to keep track of their passwords, as well as set up company-wide shared folders to securely share passwords across the team. It also has a variety of security policies you can configure, for example, requiring that everyone use 2FA to get into their LastPass vault. Pricing for LastPass Teams starts at $4/user/mo.

 

1Password

1Password Business.png

1Password is another password manager similar to LastPass. Also $4/user/mo for their Teams product.


Dashlane

Dashlane Business.png

Dashlane is another option. Similar feature set and price point as LastPass and 1Password.

 

My Recommendation: LastPass

LastPass Logo 512w.png

All of these options are pretty similar. Out of familiarity, I'm going to recommend LastPass. But all of these are capable products and will give you the core features you need for password management across your team.

Whatever you do, you must provide a password manager for your team and insist that everyone in the company use it to store their company-related passwords.

 

Electronic Signature Service

As part of actually forming your company, you're going to need to sign a bunch of documents. And that's just the beginning. To manage all of this, you're going to want an electronic signature (sometimes called eSignature or e-sign) service.

DocuSign

Docusign.png

DocuSign provides online electronic signature services. Pretty straightforward, decent UI. Pricing starts at $10/mo for a single user, $25/user/mo for a multi-user plan.


Adobe Sign / EchoSign

Adobe Sign.png

Way back in 2011, Adobe acquired Echosign, and has since rebranded it Adobe Sign. Similar feature set to DocuSign, and priced at $10/mo for a single user, or $30/user/mo for multiple user plans.

 

HelloSign

Hellosign.png

Recently, Dropbox acquired HelloSign. Presumably it will eventually be rolled into the main Dropbox product, but for now it remains a standalone service. HelloSign's pricing is a bit more attractive than DocuSign's or Adobe Sign's. There is a limited free version, a Pro version for $13/mo and a Business version for $40/mo for up to 5 users - quite a bit cheaper than an equivalent plan from Adobe or DocuSign.

 

My Recommendation: DocuSign

Docusign Logo 521w.png

I've used both DocuSign and Adobe Sign. Between the two of them, I prefer DocuSign, but fundamentally, they're both going to give you the same end result: electronic signatures. HelloSign has a compelling price point, so if I were starting something from scratch today, I'd check it out as well.


Accounting

It's one of the last things that you want to deal with as an entrepreneur trying to conquer the world, but it's necessary.


QuickBooks Desktop

Quickbooks Desktop.png

Do not use QuickBooks Desktop - it is the devil!

But seriously, when I first started my own company, QuickBooks was the de facto standard for small business. Every accountant is familiar with it; however, for ordinary people, I found it extremely unintuitive and one of the ugliest pieces of software I have ever used. I haven’t touched it in years, and I assume (hope?) the UI has improved, but regardless, a desktop version means your accounting file is stored locally, which makes it difficult to share with your accountant and bookkeeper. You also need to make sure it's getting backed up (which you should be doing anyway).

Steer clear of this product and go for a hosted solution.


Xero

Xero.png

Xero is an online QuickBooks alternative. It’s simpler, fully online, and it's easy to invite other team members, bookkeepers and accountants to collaborate on your books. It is, however, rather limited in its feature set. If your business is simple, you'll probably be fine, but I've run into problems as soon as I get into more complicated areas of accounting (or more accurately, when my bookkeeper or accountant tells me that Xero doesn’t support something I would like to do, like certain kinds of reporting). Pricing starts at $9/mo, but realistically you'll be looking at the $30/mo or maybe $60/mo version. Unlike the software we've explored thus far, this is not priced per user; it's one charge for the company.

 

QuickBooks Online

Quickbooks Online.png

Long after I had adopted Xero, QuickBooks came out with QuickBooks Online, an alternative to their desktop product. Based on my prior experiences with the desktop product, I stayed well clear. However, I was recently trying to do some reporting on expenses and my bookkeeper informed me that it was not possible with Xero, but could be done with QuickBooks Online. I tried it out and was pleasantly surprised - QuickBooks Online has a good UI, it's easy to use, and it did have the reporting ability I wanted. Pricing starts at $20/mo, but you'll probably be looking at the $40 or $70/mo version.

 

My Recommendation: QuickBooks Online

Quickbook Online Logo 512w.png

At present, I use Xero for my companies, but I think if I were starting something new today I'd probably go with QuickBooks Online (I literally never thought I'd say that). This is primarily due to the number of limitations I've run into with Xero. Something that seems simple, and should be doable, is often not. Xero has evolved over the years, but it seems to move at a snail's pace compared to other online services I use. At this point, I’d give QuickBooks Online a shot.

 

Virtual Phone Number

As a startup, you probably don’t need a full-fledged PBX phone system. However, you do need a business phone number, even if it's just for the myriad of accounts you need to establish with various government entities, like your state's department of revenue and the city in which you operate. You may also want to publish a number on your website for sales and support purposes.

 

Google Voice

Google Voice.png

If you're just looking for another inbound phone number that's not your own personal number, Google Voice has a free personal offering that works great. For example, you create a Google Voice number, and when people call the number, the call gets forwarded to your cell phone.

 

Grasshopper

Grasshopper.png

If you're looking for a professional phone presence, Grasshopper might be your ticket. When someone calls your company's number, they hear something like "Welcome to Awesome Co. Press 1 for sales, 2 for support etc…". You can then set up a variety of extensions that forward to people's cell phones according to various rules, along with a whole bunch of other features. It's pretty capable, but kinda pricey, starting at $29/mo.

 

My recommendation: Google Voice Free then Grasshopper

Logo Lockup - Google Voice Grasshopper.png

If you just need to publish a business number and don’t intend to use it much, start with the free personal version of Google Voice. When you get to the point that you need to publish a phone number on your website and handle inbound calls for sales, support etc…, Grasshopper makes sense.

As a side note, if you do need a full-fledged internal phone system, there is also Google Voice for business that gives you a cloud-based phone system which is tightly integrated with G Suite. Pricing starts at $10/user/mo.

 

General Project Management

Remember that never-ending list of to-do items I mentioned earlier? You're going to need somewhere to keep track of all that stuff and collaborate with the rest of your team. There are more pieces of project management software than you can shake a stick at, but below are the 2 that I like best.

Note - this is not software project management, which I'll cover below.

Asana

Asana.png

Asana is a modern, easy-to-use, online project management system that can start simple and scale up to handle more complex scenarios. It's accessible via the website or mobile apps to help keep everyone in sync. The free version allows for up to 15 people, so it will take you a long way before you need to upgrade to the paid version, which starts at $10/user/mo.

 

Trello

Trello.png

Trello is easy-to-use, flexible project management software with a cool UI. It's conceptually similar to Asana, but each takes quite a different approach. Free initially, then starting at $10/user/mo.

 

My Recommendation: Asana

Asana Logo 512w.png

Personally I like Asana for most general project management tasks, but Trello is also a worthy choice. Use what you like best.

 

Software Project Management

Managing software development is different from general project management, so it deserves its own tool. As with general project management, there are so many choices in this area that it's easy to get stuck in analysis paralysis. Here are a few of the most popular products out there to help you narrow down your search.

 

Pivotal Tracker

Pivotal Tracker.png

I've been using Pivotal Tracker, an agile project management tool, for many years now, and I like it. It's straightforward to use, constantly evolving, and designed to support an agile development workflow. There is a free version for up to 3 people and 2 projects; the paid versions start at $12.50/mo (for 5 users) or $30/mo (for 10 users), and pricing scales up from there for bigger teams.

 

Jira

Jira.png

I don’t use Jira myself, but it's a popular issue tracking tool with a lot of features. It's $10/mo for a team of up to 10 people, then pricing jumps to $77/mo for 11 people.

 

GitHub Issues

GitHub.png

The king of source control, GitHub, has a built-in issue tracking system as well. The fact that it's built right into GitHub makes it a compelling choice if you're using GitHub for source control. But even though I use GitHub for source control, I don’t think its project management features are as capable as Pivotal Tracker or Jira.

 

My Recommendation: Pivotal Tracker

Pivotal Tracker Logo 512w.png

Personally, I like Pivotal Tracker. I've been using it for nearly 10 years across a variety of different projects. It's simple to use and quite prescriptive, leading you down an agile development path, which I think is a good thing.

 

Stay tuned for part 3 for more recommendations on tools and technology you'll need for your startup.

Entrepreneur's Toolkit: Tools and Technology for your Startup - Part 1

When first embarking on your startup journey, there is a seemingly never ending list of things to do, and everything on that list is demanding your time and attention.

In fact, before you even formally start your company by creating a C-Corp in Delaware (or whatever type of entity you're forming, and wherever you're locating it), there are a lot of things you should be doing, for example:

  • Refining your vision

  • Validating your vision and idea with potential customers

  • Building prototypes and starting work on your minimal viable product (MVP)

  • Selling your product to customers - if you've got customers who are willing to pre-pay for your product, knowing that it won't be available for X months, that’s a great sign that you're solving a real pain point

  • Building your team - or at least talking to people who you may want to be part of your team as things progress

During this process, you're going to need to start using various tools and technologies to collaborate with people, manage the development of your MVP and communicate with customers.

Once you formally start your company, you now need to deal with things like accounting, tax filing, payroll and a whole bunch more.

To help manage all of this, you're going to need tools and technology. There is a myriad of SaaS-based tools and technologies out there. It's time-consuming and overwhelming to try to sift through them all and figure out which ones you want to try, then which ones to adopt.

I've spent countless hours over the years researching, trying out, and then actively using different pieces of software across various startups. To help give you a head start, here's my list of tools that you may want to check out.

Since there is a lot to cover here, I've broken this post up into 3 parts:

  • Part 1: productivity, chat and collaboration, audio video conferencing and file sharing

  • Part 2: password management, electronic signatures, accounting, business phone number and project management

  • Part 3: customer relationship management and support, HR and payroll, website, email marketing, payment processing and a couple of other bits and pieces.

Stay tuned for parts 2 and 3.


Productivity Suite (email, word processing etc…)

Selecting a productivity suite to give you customized email (eg @yourcompany.com), word processing, spreadsheets, presentations etc… is going to be one of the first things you need to do. It will also drive the other choices you're going to make. Fortunately, there are really only 2 options you need to worry about.

Microsoft Office 365

Office 365.png

Microsoft's offering in the productivity space is called Office 365. Depending on which package you choose, it includes hosted company-branded email via Exchange along with desktop and online versions of Outlook for email, Word, Excel, PowerPoint etc… On their website, you'll find 3 different plans for personal, 3 for small business and at least 4 for enterprise. It's confusing, but here's the summary: for a startup, you'll be looking at the "Business" section:

Office 365 Pricing.png

Plans start at $5/user/mo, but you'll probably want the $12.50/user/mo Business Premium version - this gets you cloud hosted company email, all the Office apps (online and desktop versions) along with a bunch of other stuff we'll discuss below.

G Suite

G Suite.png

G Suite is Google's competitor to Microsoft Office 365. It's a very capable offering and many companies use it. However, all of Google's apps are browser-based. I prefer the richness of a desktop app along with its offline capabilities (I spend a lot of time on a plane). Google's Docs, Sheets and Slides offerings are good, but they're not as good as Microsoft Word, Excel and PowerPoint respectively. Google's pricing packages are a lot simpler:

G Suite Pricing.png

Starts at $6/user/mo for the Basic version, but you'll probably want the $12/user/mo Business version. 

My Recommendation: Office 365

Office 365 Logo.png

I prefer the desktop versions of Word, Excel and PowerPoint to their online equivalents or Google’s offerings, so that puts me squarely in the Office 365 camp. But to be honest, both of these options are good products. Choose whichever you're most comfortable with.

Chat and Collaboration

In many ways, your decision regarding a productivity suite will influence this decision quite heavily.

Slack

Slack.png

Slack is a tool for team collaboration and instant messenger-style chat between team members. Slack rose to popularity in recent years, sparking competition from Microsoft and Google (see below). Slack would claim it can replace email. Personally, I don’t really buy that - no matter what you do internally, you'll need to communicate externally with customers and partners using whatever they use, which is email. I also think email is a better tool for longer-form communication; instant messaging is useful for short messages and quick responses. There is a free version, but if you need more features, you'll be starting at $6.67/user/mo.

Microsoft Teams

Teams.png

Teams is Microsoft's Slack competitor. Deep ties into the Office 365 ecosystem and already included for free in various Office 365 plans.

Google Hangouts Chat

Hangouts Chat.png

Hangouts Chat is Google's Slack competitor. As you would expect, it's got deep ties into the G Suite ecosystem and is already included in G Suite plans.

Skype

Skype.png

If you're looking to save money, Skype can also be used for instant messenger-style chat. It's more geared towards personal communication, so not optimized for business chat. It's ok for very small teams, and it is totally free, so there's that.

My Recommendation: Free version of Slack then Teams/Hangouts

Logo Lockup - Slack Teams Hangouts Chat.png

Slack is a very popular choice, and the free version is great for small teams. However, when you start to exceed the limits of the free tier and need to move to the paid tier, it's tough to justify $6.67/user/mo if you're already paying ~$12/user/mo for a productivity suite that includes equivalent functionality - you'd be increasing your per-user cost by more than 50%. Slack is arguably ahead of its competitors at the time of this writing, but Microsoft and Google are pouring probably hundreds of millions of dollars into catching up. Is Slack worth the extra money? Probably not.

After running into the limits of the free version of Slack, I'd move to whatever your productivity suite includes, namely Teams or Hangouts.

Audio and Video Conferencing - Internally Within Your Team

You'll need tools to do audio and video conferencing within your team.

Slack

Slack has audio and video conference ability built in, although I've found the features and quality don't match up with the below offerings. I'm sure it will improve over time.

Teams

Teams has this capability built in. In the early days, it was impossible to invite people outside your organization, for example, contractors without a @yourcompany.com email address, to join a meeting. That was rather problematic. They've fixed that now, and Teams is a good choice for meetings within the team (meaning employees and contractors). It's also included in various versions of Office 365.

Google Hangouts Meet

Hangouts Meet is G Suite's audio and video conferencing offering. It's included in your G Suite subscription.

Skype

If you're penny pinching, and working with a small team, you can use Skype totally for free. You will need to have each team member add the other team members into their contact lists to easily initiate calls, and everyone needs a Skype account.

My Recommendation: Teams or Hangouts

Logo Lockup - Teams Hangouts Meet.png

Go with whatever your productivity suite offers here.

Audio and Video Conferencing - Externally with Customers and Partners

Besides internal communication, you're going to need to communicate with external customers and partners. While you can technically use any of the above "internal" tools for this, they're not (yet) a great solution to the problem.

Zoom

Zoom.png

Zoom has risen to be one of the dominant video conferencing platforms in recent years. It's super simple and easy to use, and importantly, doesn't require your participants to jump through a lot of hoops to get into a meeting. Participants will need to install an app to access the meeting, but that’s free and easy to do for their PC, Mac, iOS or Android device. There is a free version limited to 40 minute meetings, so you'll probably want a paid version starting at $15/mo. You don’t want to be cut off at 40 minutes during an important customer presentation :/

My Recommendation: Zoom

Zoom Logo 512w.png

For external meetings, you can't beat the simplicity of Zoom. It's just a better user experience compared to Hangouts Meet, Teams or Skype. Hangouts Meet is pretty good, followed by Teams then Skype for this purpose. See if you like the experience included with your productivity suite, but if not, go with Zoom. 

File Sharing

This is another area heavily influenced by your choice of productivity suite.

Dropbox

Dropbox.png

Dropbox was one of the first companies to simplify file sharing and make it really easy for individuals. They started with a personal product so you could access your files from any device, and they've since expanded their offering to include features for business, both large and small. Microsoft and Google, along with a ton of other companies, also compete with them. For a team version of Dropbox, you'll be looking at $12.50/user/mo with a minimum of 3 people, so at least $37.50/mo.

OneDrive for Business

OneDrive for Business.png

OneDrive for Business is Microsoft's offering. Its biggest selling point is that it's included for free with various Office 365 plans. However, it's actually based on SharePoint, and while SharePoint is a good product for enterprises (it has lots of enterprisey features), it's clunky. Microsoft has tried to put a nicer interface on top of SharePoint to make it look more like the consumer version of OneDrive (which is actually really good), but they are totally different products.

Google Drive

Google Drive.png

G Suite's file storage solution is called Drive. It's similar in concept to OneDrive for Business, and included with G Suite.

OneDrive Personal

OneDrive Personal.png

If you're penny pinching, you can use Microsoft's OneDrive Personal product and set up shared folders to share files with your team. OneDrive Personal is simple and easy to use, but you'll be missing out on a lot of management features you'll want as your team grows.

My Recommendation: Dropbox if you want to spend the money, OneDrive for Business / Google Drive otherwise

Logo Lockup - Dropbox OneDrive for Business Google Drive.png

If you are willing to spend the money, I think Dropbox has the best product in this space. But at $12.50/user/mo with a minimum of 3 users, it's pretty spendy - you're basically doubling the cost of your core productivity suite.

Is it really worth the extra? Similar to chat and collaboration with Slack, you'll need to determine if it's that much better that you are willing to spend more money on it. If you are happy with whatever is included with your productivity suite, ie OneDrive for Business or Google Drive, go for that instead.

 

Stay tuned for parts 2 and 3 for more recommendations on tools and technology you'll need for your startup.

How to Delete Old Data from an Azure Storage Table: Part 2

In part 1 of this series, we explored how to delete data from an Azure Storage Table and covered the simple implementation of the AzureTablePurger tool.

The simple implementation was slow, taking a long time to purge a large amount of data from a table. I figured that if I could execute many requests against Table Storage in parallel, it should perform much faster.

To recap, our code fundamentally does 2 things:

  1. Enumerates entities to be deleted

  2. Deletes entities using batch operations on Azure Table Storage

Since there could be a lot of data to enumerate and delete, there is no reason why we can't execute these operations in parallel.

Since the AzureTablePurger is implemented as a console app, we could parallelize the operation by spinning up multiple threads, using multiple Tasks, or using the Task Parallel Library, to name a few approaches. The latter is the preferred way to build such a system in .NET 4 and beyond, as it abstracts away a lot of the underlying complexity of building multi-threaded and parallel applications.

I settled on using the Producer/Consumer pattern backed by the Task Parallel Library.

Producer/Consumer Pattern

Put simply, in the Producer/Consumer pattern you have one thing producing work and putting it in a queue, and another thing taking work items from that queue and actually doing the work.

There is a pretty straightforward way to implement this using the BlockingCollection<T> class in .NET.
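To make the mechanics concrete, here's a minimal, self-contained sketch of the pattern (the class and method names are my own illustration, with toy integers standing in for real work items):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ProducerConsumerSketch
{
    // Producer adds `count` integers to the queue; consumer sums them
    public static int SumViaQueue(int count)
    {
        var queue = new BlockingCollection<int>();

        var producer = Task.Run(() =>
        {
            for (var i = 0; i < count; i++)
            {
                queue.Add(i); // enqueue a work item
            }
            // Signal that nothing more is coming; without this, the
            // consumer's foreach below would block forever
            queue.CompleteAdding();
        });

        var total = 0;
        var consumer = Task.Run(() =>
        {
            // Blocks while the queue is empty, and completes once the
            // queue is drained and CompleteAdding() has been called
            foreach (var item in queue.GetConsumingEnumerable())
            {
                total += item;
            }
        });

        Task.WaitAll(producer, consumer);
        return total;
    }

    static void Main()
    {
        Console.WriteLine(SumViaQueue(10)); // 0 + 1 + ... + 9 = 45
    }
}
```

GetConsumingEnumerable() blocking on an empty queue and completing after CompleteAdding() is exactly the behavior the real implementation below relies on.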

Producer

In our case, the Producer is the thing that is querying Azure Table Storage and queuing up work:

/// <summary>
/// Data collection step (ie producer).
///
/// Collects data to process, caches it locally on disk and adds to the _partitionKeyQueue collection for the consumer
/// </summary>
private void CollectDataToProcess(int purgeRecordsOlderThanDays)
{
    var query = PartitionKeyHandler.GetTableQuery(purgeRecordsOlderThanDays);
    var continuationToken = new TableContinuationToken();

    try
    {
        // Collect data
        do
        {
            var page = TableReference.ExecuteQuerySegmented(query, continuationToken);
            var firstResultTimestamp = PartitionKeyHandler.ConvertPartitionKeyToDateTime(page.Results.First().PartitionKey);

            WriteStartingToProcessPage(page, firstResultTimestamp);

            PartitionPageAndQueueForProcessing(page.Results);

            continuationToken = page.ContinuationToken;
            // TODO: temp for testing
            // continuationToken = null;
        } while (continuationToken != null);

    }
    finally
    {
        _partitionKeyQueue.CompleteAdding();
    }
}

We start by executing a query against Azure Table Storage. As with our simple implementation, this will execute a query and obtain a page of results, with a maximum of 1,000 entities. We then take the page of data and break it up into chunks for processing.

Remember, to delete entities in Table Storage, we need the PartitionKey and RowKey. Since we could be dealing with a large amount of data, it doesn't make sense to hold all of this PartitionKey and RowKey information in memory, otherwise we risk running out of memory. Instead, I decided to cache it on disk, in one file per PartitionKey. Inside the file, we write one line for each PartitionKey + RowKey combination that we want to delete.
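As an aside, here's a small sketch of that per-partition cache file idea; the helper names are hypothetical stand-ins for the tool's actual implementation, which uses a StreamWriter helper shown below:

```csharp
using System;
using System.IO;

static class PartitionCacheSketch
{
    // One line per entity: "PartitionKey,RowKey" (the consumer later
    // splits on the first comma to rebuild the keys)
    public static string FormatLine(string partitionKey, string rowKey)
        => partitionKey + "," + rowKey;

    // Appends a key pair to the temp file for this partition, creating
    // the file on first use. Append mode means a partition whose data
    // spans multiple result pages just keeps growing the same file
    public static void CacheEntityKey(string tempDir, string partitionKey, string rowKey)
    {
        var path = Path.Combine(tempDir, partitionKey + ".txt");
        File.AppendAllText(path, FormatLine(partitionKey, rowKey) + Environment.NewLine);
    }

    static void Main()
    {
        var dir = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        Directory.CreateDirectory(dir);

        CacheEntityKey(dir, "0636213283200000000", "row1");
        CacheEntityKey(dir, "0636213283200000000", "row2"); // same partition: appends

        Console.WriteLine(File.ReadAllLines(Path.Combine(dir, "0636213283200000000.txt")).Length); // 2
    }
}
```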

We also add a work item to an in-memory queue that our consumer will pull from. Here’s the code:

/// <summary>
/// Partitions up a page of data, caches locally to a temp file and adds into 2 in-memory data structures:
///
/// _partitionKeyQueue which is the queue that the consumer pulls from
///
/// _partitionKeysAlreadyQueued keeps track of what we have already queued. We need this to handle situations where
/// data in a single partition spans multiple pages. After we process a given partition as part of a page of results,
/// we write the entity keys to a temp file and queue that PartitionKey for processing. It's possible that the next page
/// of data we get will have more data for the previous partition, so we open the file we just created and write more data
/// into it. At this point we don't want to re-queue that same PartitionKey for processing, since it's already queued up.
/// </summary>
private void PartitionPageAndQueueForProcessing(List<DynamicTableEntity> pageResults)
{
    _cancellationTokenSource.Token.ThrowIfCancellationRequested();

    var partitionsFromPage = GetPartitionsFromPage(pageResults);

    foreach (var partition in partitionsFromPage)
    {
        var partitionKey = partition.First().PartitionKey;

        using (var streamWriter = GetStreamWriterForPartitionTempFile(partitionKey))
        {
            foreach (var entity in partition)
            {
                var lineToWrite = $"{entity.PartitionKey},{entity.RowKey}";

                streamWriter.WriteLine(lineToWrite);
                Interlocked.Increment(ref _globalEntityCounter);
            }
        }

        if (!_partitionKeysAlreadyQueued.Contains(partitionKey))
        {
            _partitionKeysAlreadyQueued.Add(partitionKey);
            _partitionKeyQueue.Add(partitionKey);

            Interlocked.Increment(ref _globalPartitionCounter);
            WriteProgressItemQueued();
        }
    }
}

A couple of other interesting things here:

  • The queue we're using is actually a BlockingCollection, which is an in-memory data structure that implements the Producer/Consumer pattern. When we put things in here, the Consumer side, which we’ll explore below, takes them out

  • It's possible (and in fact likely) that we'll run into situations where data with the same PartitionKey spans multiple pages of results. For example, at the end of page 5, we have 30 records with PartitionKey = 0636213283200000000, and at the beginning of page 6, we have a further 12 records with the same PartitionKey. To handle this, when processing page 5, we create a file 0636213283200000000.txt to cache all of the PartitionKey + RowKey combinations, and add 0636213283200000000 to our queue for processing. When we process page 6, we realize we already have a cache file 0636213283200000000.txt, so we simply append to it. Since we have already added 0636213283200000000 to our queue for processing, we don't want to queue it again; duplicate items in the queue would mean multiple consumers each trying to process the same cache file, which doesn't make sense. Since there isn't a built-in way to prevent duplicates being added to a BlockingCollection, the easiest option is to maintain a parallel data structure (in this case a HashSet) that keeps track of what we've queued, so we can quickly check whether a given PartitionKey is already in the queue and avoid adding it twice

  • We're using Interlocked.Increment() to increment some global variables. This ensures that when we're running multi-threaded, each thread can reliably increment the counter without running into any threading issues. If you're wondering why to use Interlocked.Increment() vs lock or volatile, there is a great discussion on the matter on this Stack Overflow post.
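To see why the atomic increment matters, here's a small standalone sketch: several tasks hammer a shared counter, and Interlocked.Increment() guarantees no updates are lost (with a plain counter++ the final total would usually come up short under contention):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class CounterDemo
{
    public static int CountInParallel(int taskCount, int incrementsPerTask)
    {
        var counter = 0;
        var work = new Task[taskCount];
        for (var t = 0; t < taskCount; t++)
        {
            work[t] = Task.Run(() =>
            {
                for (var i = 0; i < incrementsPerTask; i++)
                {
                    // counter++ here would be a read-modify-write race;
                    // Interlocked.Increment performs it atomically
                    Interlocked.Increment(ref counter);
                }
            });
        }
        Task.WaitAll(work);
        return counter;
    }

    static void Main()
    {
        Console.WriteLine(CountInParallel(8, 100_000)); // always 800000
    }
}
```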

Consumer

The consumer in this implementation is responsible for actually deleting the entities. It is given the PartitionKey that it should be dealing with, reads the cached PartitionKey + RowKey combinations from disk, constructs batches of no more than 100 operations at a time, and executes them against the Table Storage API:

/// <summary>
/// Process a specific partition.
///
/// Reads all entity keys from temp file on disk
/// </summary>
private void ProcessPartition(string partitionKeyForPartition)
{
    try
    {
        var allDataFromFile = GetDataFromPartitionTempFile(partitionKeyForPartition);

        var batchOperation = new TableBatchOperation();

        for (var index = 0; index < allDataFromFile.Length; index++)
        {
            var line = allDataFromFile[index];
            var indexOfComma = line.IndexOf(',');
            var indexOfRowKeyStart = indexOfComma + 1;
            var partitionKeyForEntity = line.Substring(0, indexOfComma);
            var rowKeyForEntity = line.Substring(indexOfRowKeyStart, line.Length - indexOfRowKeyStart);

            var entity = new DynamicTableEntity(partitionKeyForEntity, rowKeyForEntity) { ETag = "*" };

            batchOperation.Delete(entity);

            // Table Storage batches are limited to 100 operations
            if (batchOperation.Count == 100)
            {
                TableReference.ExecuteBatch(batchOperation);
                batchOperation = new TableBatchOperation();
            }
        }

        if (batchOperation.Count > 0)
        {
            TableReference.ExecuteBatch(batchOperation);
        }

        DeletePartitionTempFile(partitionKeyForPartition);
        _partitionKeysAlreadyQueued.Remove(partitionKeyForPartition);

        WriteProgressItemProcessed();
    }
    catch (Exception)
    {
        ConsoleHelper.WriteWithColor($"Error processing partition {partitionKeyForPartition}", ConsoleColor.Red);
        _cancellationTokenSource.Cancel();
        throw;
    }
}

After we finish processing a partition, we remove the temp file and we remove the PartitionKey from our data structure that is keeping track of the items in the queue, namely the HashSet _partitionKeysAlreadyQueued. There is no need to remove the PartitionKey from the queue, as that already happened when this method was handed the PartitionKey.

Note:

  • This implementation reads all of the cached data into memory at once, which is fine for the data patterns I was dealing with, but we could improve it by streaming the data from disk to avoid holding too much in memory
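If memory ever became a concern, the read could be streamed instead: File.ReadLines is lazy, so the batching loop can be driven off an enumerator rather than an array. A minimal sketch, assuming the temp file holds one PartitionKey,RowKey pair per line (the class and method names here are illustrative):

```csharp
using System.Collections.Generic;
using System.IO;

static class TempFileReader
{
    // Stream PartitionKey,RowKey pairs from the temp file one line at a time,
    // instead of loading the whole file into memory up front.
    public static IEnumerable<(string PartitionKey, string RowKey)> ReadKeys(string path)
    {
        foreach (var line in File.ReadLines(path)) // lazy: reads as you enumerate
        {
            var comma = line.IndexOf(',');
            yield return (line.Substring(0, comma), line.Substring(comma + 1));
        }
    }
}
```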

Executing both in Parallel

To string both of these together and execute them in Parallel, here's what we do:

public override void PurgeEntities(out int numEntitiesProcessed, out int numPartitionsProcessed)
{
    void CollectData()
    {
        // Producer: enumerate the PartitionKeys to process onto the queue, then
        // signal that nothing more will be added (the producer method name here
        // is a placeholder; use whatever your producer step is called)
        CollectDataToProcess();
        _partitionKeyQueue.CompleteAdding();
    }

    void ProcessData()
    {
        Parallel.ForEach(_partitionKeyQueue.GetConsumingPartitioner(), new ParallelOptions { MaxDegreeOfParallelism = MaxParallelOperations }, ProcessPartition);
    }

    _cancellationTokenSource = new CancellationTokenSource();

    Parallel.Invoke(new ParallelOptions { CancellationToken = _cancellationTokenSource.Token }, CollectData, ProcessData);

    numPartitionsProcessed = _globalPartitionCounter;
    numEntitiesProcessed = _globalEntityCounter;
}

Here we're defining 2 Local Functions, CollectData() and ProcessData(). CollectData will serially execute our Producer step and put work items in a queue. ProcessData will, in parallel, consume this data from the queue.

We then invoke both of these steps in parallel using Parallel.Invoke(), which will wait for them to complete. They'll be deemed complete when the Producer has declared there are no more items it plans to add via:

_partitionKeyQueue.CompleteAdding();

and the Consumer has finished draining the queue.
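The completion semantics come from BlockingCollection&lt;T&gt;: consumers enumerating GetConsumingEnumerable() (or a consuming partitioner built on top of it) block while the queue is empty, and exit once the queue is both empty and marked complete. A stripped-down sketch of the same producer/consumer shape (names are illustrative, not from the tool):

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

static class ProducerConsumerSketch
{
    public static int Run(int itemCount)
    {
        var queue = new BlockingCollection<int>();
        int processed = 0;

        void Produce()
        {
            for (var i = 0; i < itemCount; i++) queue.Add(i);
            queue.CompleteAdding(); // signal: nothing more is coming
        }

        void Consume()
        {
            // Blocks while the queue is empty; exits once it is empty AND complete
            foreach (var _ in queue.GetConsumingEnumerable())
                System.Threading.Interlocked.Increment(ref processed);
        }

        Parallel.Invoke(Produce, Consume); // waits for both to finish
        return processed;
    }
}
```

Without the CompleteAdding() call, the consumer would block forever waiting for more work.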

Performance

Now that we're doing this in parallel, it should be quicker, right?

In theory, yes, but initially, this was not the case:

From the 2 runs above you can visually see how the behavior differs from the Simple implementation: each "." represents a queued work item, and each "o" represents a consumed work item. We're producing the work items much quicker than we can consume them (which is expected), and once our production of work items is complete, then we are strictly consuming them (also expected).

What was not expected was the timing. It's taking approximately the same time on average to delete an entity using this parallel implementation as it did with the simple implementation, so what gives?

Turns out I had forgotten an important detail - adjusting the number of network connections available via the ServicePointManager.DefaultConnectionLimit property. By default, a console app is only allowed a maximum of 2 concurrent connections to a given host. In this app, the producer is tying up a connection while it hits the API to obtain the PartitionKeys + RowKeys of entities we want to delete. There should be many consumers running at once sending deletion requests to the API, but there is only 1 connection available for them to share. Once production is complete, that connection frees up and we have 2 connections available for consumers. Either way, we're seriously limiting our throughput and defeating the purpose of our much fancier and more complicated Parallel solution, which should be quicker, but instead runs at around the same pace as the Simple version.

I made this change to rectify the problem:

private const int ConnectionLimit = 32;

public ParallelTablePurger()
{
    ServicePointManager.DefaultConnectionLimit = ConnectionLimit;
}

Now, we're allowing up to 32 connections to the host. With this I saw the average execution time drop to around 5-10ms, so somewhere between 2-4x the speed of the simple implementation.

I also tried 128 and 256 maximum connections, but that actually had a negative effect. My machine was a lot less responsive while the app was running, and average deletion times per entity were in the 10-12ms range. 32 seemed to be somewhere around the sweet spot.

For the future: turning it up to 11

There are many things that can be improved here, but for now, this is good enough for what I need.

However, if we really wanted to turn this up to 11 and make it a lot quicker, a lot more efficient and something that just ran constantly as part of our infrastructure to purge old data from Azure tables, we could re-build this to run in the cloud.

For example, we could create 2 separate functions: one to enumerate the work, and a second that scales out to execute the delete commands:

1. Create an Azure Function to enumerate the work

As we query the Table Storage API to enumerate the work to be done, put the resulting PartitionKey + RowKey combinations either into a text file on blob storage (similar to the current implementation), or consider putting them into Cosmos DB or Redis. Redis would probably be the quickest; Blob Storage is probably the cheapest, and has the advantage that we avoid having to spin up additional resources.

Note: if we were to deploy this Azure Function to a consumption plan, functions have a timeout of 10 minutes. We'd need to make sure that we either limit our run to less than 10 minutes in duration, or can handle being interrupted part way through and pick up where we left off the next time the runtime calls our function.
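One way to handle that limit is to checkpoint progress (for example, the query's continuation token) and stop early once the time budget is nearly spent. A sketch under stated assumptions - the page-processing delegate and the checkpoint handling are hypothetical placeholders, not part of the tool:

```csharp
using System;
using System.Diagnostics;

static class TimeBoxedEnumerator
{
    // Enumerate pages of work until done, or until the time budget runs out,
    // returning a checkpoint token so the next invocation can resume.
    public static string Enumerate(
        string checkpoint,                                       // token saved by the previous run, or null
        Func<string, (string NextToken, bool Done)> processPage, // performs one page of API work
        TimeSpan budget)
    {
        var stopwatch = Stopwatch.StartNew();
        var token = checkpoint;

        while (stopwatch.Elapsed < budget)
        {
            var (next, done) = processPage(token);
            if (done) return null; // finished: no checkpoint needed
            token = next;
        }

        return token; // out of time: resume from here next invocation
    }
}
```

In a real function, the returned token would be persisted to blob storage (or similar) and reloaded on the next timer trigger.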

2. Create an Azure Function to do the deletions

After picking up an item from the queue, this function would be responsible for reading the cached PartitionKey + RowKey and batching up delete operations to send to Table Storage. If we deployed this as an Azure Function on a consumption plan, it will scale out to a maximum of 200 instances, which should be plenty for us to work through the queue of delete requests in a timely manner, and so long as we're not exceeding 20,000 requests per second to the storage account, we won't hit any throttling limits there.

Async I/O

Something else we'd look to do is to make the I/O operations asynchronous. This would probably provide some benefits in our console app implementation, although I'm not sure how much difference it would really make. I decided against async I/O for the initial implementation, since it doesn't play well with Parallel.ForEach (check out this Stack Overflow post or this GitHub issue for some more background), and would require a more complex implementation to work.

With a move to the cloud and Azure Functions, I'm not sure how much benefit async I/O would have either, since we rely heavily on scaling out the work to different processes and machines via Azure Functions. Async is great for increasing throughput and scalability, but not raw performance, which is what our goal is here.

Side Note: Durable Functions

Something else which is pretty cool and could be useful here is Azure Durable Functions. This could be an interesting solution to the problem as well, especially if we wanted to report back on aggregate statistics, like how long it took to complete all of the work and the average time to delete a specific record (ie the fan-in).

Conclusion

It turns out the Parallel solution is quicker than the Simple solution. Feel free to use this tool for your own purposes, or if you’re interested in improving it, go ahead and fork it and make a pull request. There is a lot more that can be done :)

How to Delete Old Data From an Azure Storage Table : Part 1

Have you ever tried to purge old data from an Azure table? For example, let's say you're using the table for logging, like the Azure Diagnostics Trace Listener which logs to the WADLogsTable, and you want to purge old log entries.

There is a suggestion on Azure feedback for this feature, questions on Stack Overflow (here's one and another), questions on MSDN Forums and bugs on GitHub.

There are numerous good articles that cover deletion of entities from Azure Tables.

However, no tools are available to help.

 

TL;DR: while the building blocks exist in the Azure Table storage APIs to delete entities, there are no native ways to easily bulk delete data, and no easy way to purge data older than a certain number of days. Nuking the entire table is certainly the easiest way to go, so designing your system to roll to a new table every X days or every month (for example, Logs201907 and Logs201908 for logs generated in July and August 2019 respectively) would be my recommendation.

If that ship has sailed and you're stuck in a situation where you want to purge old data from a table, like I was, I created a tool to make my life, and hopefully yours as well, a bit easier.

The code is available here: https://github.com/brentonw/AzureTablePurger

Disclaimer: this is prototype code, I've run this successfully myself and put it through a bunch of functional tests, but at present, it doesn’t have a set of unit tests. Use at your own risk :)

How it Works

Fundamentally, this tool has 2 steps:

  1. Enumerate entities to be deleted

  2. Delete entities using batch operations on Azure Table Storage

I started off building a simple version which synchronously grabs a page of query results from the API, breaks that page into batches of no more than 100 items grouped by PartitionKey (a requirement of the Azure Table Storage batch operation API), then executes batch delete operations against Azure Table Storage.

Enumerating Which Data to Delete

Depending on your PartitionKey and RowKey structure, this might be relatively easy and efficient, or it might be painful. With Azure Table Storage, you can only query efficiently when using PartitionKey or a PartitionKey + RowKey combination. Anything else results in a full table scan which is inefficient and time consuming. There is tons of background on this in the Azure docs: Azure Storage Table Design Guide: Designing Scalable and Performant Tables.

Querying on the Timestamp column is not efficient, and will require a full table scan.

If we take a look at the WADLogsTable, we'll see data similar to this:

 

PartitionKey = 0636213283200000000
RowKey = e046cc84a5d04f3b96532ebfef4ef918___Shindigg.Azure.BackgroundWorker___Shindigg.Azure.BackgroundWorker_IN_0___0000000001652031489

 

Here's how the format breaks down:

PartitionKey = "0" + the number of ticks since 12:00:00 midnight, January 1, 0001, rounded down to the minute

RowKey = this is the deployment ID + the name of the role + the name of the role instance + a unique identifier

 

Every log entry within a particular minute is bucketed into the same partition which has 2 key advantages:

  1. We can now effectively query on time

  2. Each partition shouldn't have too many records in there, since there shouldn't be that many log entries within a single minute
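A sketch of converting between a timestamp and a WADLogsTable-style PartitionKey, based on the format described above (the class name is mine; note that DateTime.Ticks for current dates is 18 digits, so padding to 19 with "D19" produces the leading "0"):

```csharp
using System;

static class WadPartitionKey
{
    // "0" + ticks rounded down to the minute, padded to 19 digits
    public static string FromDateTime(DateTime utc)
    {
        var perMinute = TimeSpan.TicksPerMinute;
        var rounded = (utc.Ticks / perMinute) * perMinute; // truncate to minute bucket
        return rounded.ToString("D19");                    // 19 digits => leading "0"
    }

    public static DateTime ToDateTime(string partitionKey)
        => new DateTime(long.Parse(partitionKey), DateTimeKind.Utc);
}
```

This is also why querying on a time range becomes a simple lexicographic comparison on PartitionKey.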

Here's the query we construct to enumerate the data:

public TableQuery GetTableQuery(int purgeEntitiesOlderThanDays)
{
    var maximumPartitionKeyToDelete = GetMaximumPartitionKeyToDelete(purgeEntitiesOlderThanDays);

    var query = new TableQuery()
        .Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.LessThanOrEqual, maximumPartitionKeyToDelete))
        .Select(new[] { "PartitionKey", "RowKey" });

    return query;
}

private string GetMaximumPartitionKeyToDelete(int purgeRecordsOlderThanDays)
{
    return DateTime.UtcNow.AddDays(-1 * purgeRecordsOlderThanDays).Ticks.ToString("D19");
}

The execution of the query is pretty straightforward:

var page = TableReference.ExecuteQuerySegmented(query, continuationToken);

The Azure Table Storage API limits us to 1000 records at a time. If there are more than 1000 results, the continuationToken will be set to a non-null value, which will indicate we need to make the query again with that particular continuationToken to get the next page of data from the query.

Deleting the Data

In order to make this as efficient as possible and minimize the number of delete calls we make to the Azure Table Storage API, we want to batch things up as much as possible.

Azure Table Storage supports batch operations, but there are two caveats we need to be aware of: (1) all entities must be in the same partition, and (2) there can be no more than 100 items in a batch.

To achieve this, we're going to break our page of data into chunks of no more than 100 entities, grouped by PartitionKey:

protected IList<IList<DynamicTableEntity>> GetPartitionsFromPage(IList<DynamicTableEntity> page)
{
    var result = new List<IList<DynamicTableEntity>>();

    var groupByResult = page.GroupBy(x => x.PartitionKey);

    foreach (var partition in groupByResult.ToList())
    {
        var partitionAsList = partition.ToList();
        if (partitionAsList.Count > MaxBatchSize)
        {
            var chunkedPartitions = Chunk(partition, MaxBatchSize);
            foreach (var smallerPartition in chunkedPartitions)
            {
                result.Add(smallerPartition.ToList());
            }
        }
        else
        {
            result.Add(partitionAsList);
        }
    }

    return result;
}
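The Chunk() helper used above isn't shown; a minimal LINQ sketch of what it needs to do (illustrative, not necessarily the tool's actual implementation):

```csharp
using System.Collections.Generic;
using System.Linq;

static class ChunkHelper
{
    // Split a sequence into consecutive chunks of at most chunkSize items
    public static IEnumerable<IEnumerable<T>> Chunk<T>(IEnumerable<T> source, int chunkSize)
    {
        return source
            .Select((item, index) => (item, index))
            .GroupBy(x => x.index / chunkSize)   // 0..99 -> group 0, 100..199 -> group 1, ...
            .Select(g => g.Select(x => x.item));
    }
}
```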

Then, we'll iterate over each partition, construct a batch operation and execute it against the table:

foreach (var entity in partition)
{
    Trace.WriteLine(
        $"Adding entity to batch: PartitionKey={entity.PartitionKey}, RowKey={entity.RowKey}");
    entityCounter++;

    batchOperation.Delete(entity);
    batchCounter++;
}

Trace.WriteLine($"Added {batchCounter} items into batch");

partitionCounter++;

Trace.WriteLine($"Executing batch delete of {batchOperation.Count} entities");
TableReference.ExecuteBatch(batchOperation);

Here's the entire processing logic:

public override void PurgeEntities(out int numEntitiesProcessed, out int numPartitionsProcessed)
{
    VerifyIsInitialized();

    var query = PartitionKeyHandler.GetTableQuery(PurgeEntitiesOlderThanDays);

    var continuationToken = new TableContinuationToken();

    int partitionCounter = 0;
    int entityCounter = 0;

    // Collect and process data
    do
    {
        var page = TableReference.ExecuteQuerySegmented(query, continuationToken);
        var firstResultTimestamp = PartitionKeyHandler.ConvertPartitionKeyToDateTime(page.Results.First().PartitionKey);

        WriteStartingToProcessPage(page, firstResultTimestamp);

        var partitions = GetPartitionsFromPage(page.ToList());

        ConsoleHelper.WriteLineWithColor($"Broke into {partitions.Count} partitions", ConsoleColor.Gray);

        foreach (var partition in partitions)
        {
            Trace.WriteLine($"Processing partition {partition[0].PartitionKey}");

            var batchOperation = new TableBatchOperation();
            int batchCounter = 0;

            foreach (var entity in partition)
            {
                Trace.WriteLine(
                    $"Adding entity to batch: PartitionKey={entity.PartitionKey}, RowKey={entity.RowKey}");
                entityCounter++;

                batchOperation.Delete(entity);
                batchCounter++;
            }

            Trace.WriteLine($"Added {batchCounter} items into batch");

            partitionCounter++;

            Trace.WriteLine($"Executing batch delete of {batchOperation.Count} entities");
            TableReference.ExecuteBatch(batchOperation);

            WriteProgressItemProcessed();
        }

        continuationToken = page.ContinuationToken;

    } while (continuationToken != null);

    numPartitionsProcessed = partitionCounter;
    numEntitiesProcessed = entityCounter;
}

Performance

I ran this a few times on a subset of data. On average, it takes around 10-20ms to delete each entity, depending on the execution run:

This seemed kind of slow to me, and when trying to purge a lot of data, it takes many hours to run. I figured I should be able to improve the speed dramatically by turning this into a parallel operation.

Stay tuned for part 2 of this series to dive deeper into the parallel implementation.

Conclusion for Part 1

Since Azure Table Storage has no native way to purge old data, your best bet is to structure your data so that you can simply delete old tables when you no longer need them, and purge data that way.

However, if you can’t do that, feel free to make use of this AzureTablePurger tool.

Reliability and Resiliency for Cloud Connected Applications

Building cloud connected applications that are reliable is hard. At the heart of building such a system is a solid architecture and focus on resiliency. We're going to explore what that means in this post.

When I first started development on FastBar, a cashless payment system for events, there were a few key criteria that drove my decisions for the overall architecture of the system.

Fundamentally, FastBar is a payment system designed specifically for event environments. Instead of using cash or drink tickets or clunky old credit card machines, events use FastBar instead.

There are 2 key characteristics of the environment in which FastBar operates that drive almost all underlying aspects of the technical architecture: internet connectivity sucks, and we're dealing with people's money.

Internet connectivity at an event sucks

Prior to starting FastBar, I had a side business throwing events in Seattle. We'd throw summer parties, Halloween parties, New Year's Eve parties etc… In 10 years of throwing events, I cannot recall a single event where internet worked flawlessly. Most of the time it ranged from "entirely useless" to "ok some of the time".

At an event, there are typically 2 choices for getting internet connectivity:

  1. Rely on the venue's in-house WiFi

  2. Use the cellular network, for example a hotspot

Sometimes the venue's WiFi would work great in an initial walkthrough… and then 2,000 people would arrive and connectivity goes to hell. Other times it would work great in certain areas of the venue, then we'd test it where we wanted to set up registration, or place a bar, only to get an all too familiar response from the venue's IT folks: "oh, we didn't think anyone would want internet there".

Relying on hotspots was just as bad: at many indoor locations, connectivity is poor. Even if you're outdoors with great connectivity, add a couple of thousand people to that space, each with a smartphone hungry for bandwidth so they can post to Facebook/Instagram/Snapchat, or a phone that decides now is a great time to download the latest 3GB iOS update in the background.

No matter what, internet connectivity at event environments is fundamentally poor and unreliable. This is something that isn't true in a standard retail environment like a coffee shop or hairdresser where you'd deploy a traditional point of sale, and it would have generally reliable internet connectivity.

We're dealing with people's money

For the event, the point of sale system is one of the most critical aspects - it affects attendees' ability to buy stuff, and the event's ability to sell stuff. If the point of sale is down, attendees are pissed off and the event is losing money. Nobody wants that.

Food, beverage and merchandise sales are a huge source of revenue for events. For some events, it could be their only source of revenue.

In general, money is a very sensitive topic for people. Attendees have an expectation that they are charged accurately for the things they purchase, and events expect the sales numbers they see on their dashboard are correct, both of which are very reasonable expectations.

Reliability and Resiliency

Like any complicated, distributed software system, there are many non-functional requirements that are important to create something that works. A system needs to be:

  • Available

  • Secure

  • Maintainable

  • Performant

  • Scalable

  • And of course reliable

Ultimately, our customers (the events), and their customers (the attendees), want a system that is reliable and "just works". We achieve that kind of reliability by focusing on resiliency - we expect things will fail, and design a system that will handle those failures.

This means when thinking about our client side mobile apps, we expect the following:

  • Requests we make over the internet to our servers will fail, or will be slow. This could mean we have no internet connectivity at the time and can't even attempt to make a request to the server; or we have internet, but the request fails to reach the server; or the request makes it to our server but the client never gets the response

  • A device may run out of battery in the middle of an operation

  • A user may exit the app in the middle of an operation

  • A user may force close the app in the middle of an operation

  • The local SQLite database could get corrupt

  • Our server environment may be inaccessible

  • 3rd party services our apps communicate with might be inaccessible

On the server side, we run on Azure and also depend on a handful of 3rd party services. While generally reliable, we can expect:

  • Problems connecting to our Web App or API

  • Unexpected CPU spikes on our Azure Web Apps that impact client connectivity and dramatically increase response time for requests

  • Web Apps having problems connecting to our underlying SQL Azure database or storage accounts

  • Requests to our storage account resources being throttled

  • 3rd party services that normally respond in a couple of hundred milliseconds taking 120+ seconds to respond (that one caused a whole bunch of issues that still traumatize me to this day)

We've encountered every single one of these scenarios. Sometimes it seems like almost everything that can fail has failed at some point, usually at the most inopportune time. That's not quite true (I can imagine some nightmare scenarios we could still encounter in the future), but these days we're in great shape to withstand multiple critical failures across different parts of our system, still retain the ability to take orders at the event, and have minimal impact on attendees and event staff.

We've done this by focusing on resiliency in all parts of the system - everything from the way we architect to the details of how we make network requests and interact with 3rd party services.

Processing an Order

To illustrate how we achieve resiliency, and therefore reliability, let's take a look at an example of processing an order. Conceptually, it looks like this:

FastBar - Order Processing - Conceptual.png

The order gets created on the POS and makes an API request to send it to the server. Seems pretty easy, right?

Not quite.

Below is a highly summarized version of what actually happens when an order is placed, and how it flows through the system:

There is a lot more to it than just a simple request-response. Instead, it's a complicated series of asynchronous operations with a whole bunch of queues in between, which help us provide a system that is reliable and resilient.

On the POS App

  1. The underlying Order object and associated set of OrderItems are created and persisted to our local SQLite database
  2. We create a work item and place it on a queue. In our case, we implement our own queue as a table inside the same SQLite database. Both steps 1 and 2 happen transactionally, so either all inserts succeed, or none succeed. All of this happens within milliseconds, as it's all local on the device and doesn't rely on any network connectivity. The user experience is never impacted in case of network connectivity issues
  3. We call the synchronization engine and ask it to push our changes
    1. If we're online at the time, the synchronization engine will pick up items from the queue that are available for processing. That could be just the 1 order we just created, or many orders that have been queued and are waiting to be sent to the server, for example if we were offline and have just come back online. Each item is processed in the order it was placed on the queue, and each item involves its own set of work. In this case, we're attempting to push this order to our server via our server-side API. If the request to the server succeeds, we'll delete the work item from the queue and update the local Order and OrderItems with some data the server returns to us in the response. This all happens transactionally
    2. If there is a failure at any point, for example a network error, or a server error, we'll put that item back on the queue for future processing
    3. If we're not online, the synchronization engine can't do anything, so it returns immediately, and will re-try in the future. This happens either via a timer that is syncing periodically, or after another order is created and a push is requested
    4. Whenever we make any request to the server that could update or create any data, we send an IdempotentOperationKey which the server uses to determine whether the request has already been processed
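Server-side, the idempotency check boils down to "atomically record the key; if it was already there, skip the work". A minimal sketch, using an in-memory dictionary purely for illustration (the real system would check against its database, inside the same transaction as the order insert):

```csharp
using System.Collections.Concurrent;

class IdempotentProcessor
{
    private readonly ConcurrentDictionary<string, bool> _processedKeys =
        new ConcurrentDictionary<string, bool>();

    // Returns true if the work ran now; false if this key was already processed
    public bool TryProcess(string idempotentOperationKey, System.Action work)
    {
        // TryAdd is atomic: exactly one caller wins for a given key
        if (!_processedKeys.TryAdd(idempotentOperationKey, true))
            return false; // duplicate request: safe to acknowledge without redoing the work

        work();
        return true;
    }
}
```

This is what makes client retries safe: if the response to a successful request is lost and the client resends it, the server recognizes the key and doesn't create a second order.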

The Server API

  1. Our Web API receives the request and processes it
    1. We make sure the user has permissions to perform this operation, and verify that we have not already processed a request with the same IdempotentOperationKey the client has supplied
    2. The incoming request is validated, and if we can, we'll create an Order and set of OrderItems and insert them into the database. At this point, our goal is to do the minimal work possible and leave the bulk of the processing to later
    3. We'll queue a work item for processing in the background

Order Processor WebJob

  1. Our Order Processor is implemented as an Azure WebJob and runs in the background, constantly looking at the queue for new work
  2. The Order Processor is responsible for the core logic when it comes to processing an order, for example, connecting the order to an attendee and their tab, applying any discounts or promotions that may be applicable for that attendee and re-calculating the attendee's tab
  3. Next, we want to notify the attendee of their purchase, typically by sending them an SMS. We queue up a work item to be handled by the Outbound SMS Processor

Outbound SMS Processor WebJob

  1. The Outbound SMS processor handles the composition and sending of SMS messages to our 3rd party service for delivery, in our case, Twilio
  2. We're done!

That's a lot of complexity for what seems like a simple thing. So why would we add all of these different components and queues? Basically, it's necessary in order to have a reliable and resilient system that can handle a whole lot of failure and still keep going:

  • If our client has any kind of network issues connecting to the server

  • If our client app is killed in any way, for example, if the device runs out of battery, or if the OS decided to kill our app since we were moved to the background, or if the user force quits our app

  • If our server environment is totally unavailable

  • If our server environment is available but slow to respond, for example, due to cloud weirdness (a whole other topic), or our own inefficient database queries or any number of other reasons

  • If our server environment has transitory errors caused by problems connecting with dependent services, for example, Azure SQL or Azure storage queues returning connectivity errors

  • If our server environment has consistent errors, for example, if we pushed a new build to the server that had a bug in it

  • If 3rd party services we depend on are unavailable for any reason

  • If 3rd party services we depend on are running slow for any reason

Asynchronicity to the Max

You'll notice the above flow is highly asynchronous. Wherever we can queue something up and process it later, we will. This means we're never worried about whether the system we're talking to is operating normally or not. If it's alive and kicking, great, we'll get that work processed quickly. If not, no worries, it will be processed in the background at some point in the future. Under normal circumstances, you can expect an order to be created on the client and a text message received by the customer within seconds. But it could take a lot longer if any part of the system is running slowly or is down, and that's ok, since it doesn't dramatically affect the user experience, and the reliability of the system is rock solid.

It's also worth noting that all of these operations are both logically asynchronous, and asynchronous at the code level wherever possible.

Logically asynchronous meaning instead of the POS order creation UI directly calling the server, or, on the server side, the request thread directly calling a 3rd party service to send an SMS, these operations get stored in a queue for later processing in the background. Being logically asynchronous is what gives us reliability and resiliency.

Asynchronous at the code level is different. This means that wherever possible, when we are doing any kind of I/O, we utilize C#'s async programming features. It's important to note that the underlying code being asynchronous doesn't actually have anything to do with resiliency. Rather, it helps our components achieve higher throughput, since they're not tying up resources like threads, network sockets, database connections, file handles etc… waiting for responses. Asynchrony at the code level is all about throughput and scalability.

Conclusion

When you're building mobile applications connected to the cloud, reliability is key. The way to achieve reliability is by focusing on resiliency. Expect that everything can and probably will fail, and design your system to handle those failures. Make sure your system is highly logically asynchronous, and queue up work to be handled by background components wherever possible.

FastBar's Technical Architecture

Previously, I've discussed the start of FastBar, how the client and server technology stacks evolved and what it looks like today (read more in part 1, part 2 and part 3 of that series).

As a recap, here's what the high-level components of FastBar look like:

FastBar Components - High Level.png

Let's dive deeper.

Architecture

So, what does FastBar’s architecture look like under the hood? Glad you asked:

Client apps:

  • Registration: used to register attendees at an event, essentially connecting their credit card to a wristband and all of the associated features that are required at a live event

  • Point of Sale: used to sell stuff at events

These are both mobile apps built in Xamarin and running on iOS devices.

The server is more complicated from an architectural standpoint, and is divided into the following primary components:

  • getfastbar.com - the primary customer facing website. Built on Squarespace, this provides marketing content for anyone wanting to learn about FastBar

  • app.getfastbar.com - our main web app which provides 4 key functions:

    • User section - as a user, there are a few functions you can perform on FastBar, such as creating an account, adding a credit card, updating your user profile information and, if you've got the permissions, creating events. This section is pretty basic

    • Attendee section - as an attendee, you can do things like pre-register for an event, view your tab, change your credit card, email yourself a receipt and adjust your tip. This is the section of the site that receives the most traffic

    • Event Control Center section - this is by far the largest section of the web app, it's where events can be fully managed: configuring details, connecting payment accounts, configuring taxes, setting up pre-registration, managing products and menus, viewing reports and downloading data and a whole lot more. This is where event organizers and FastBar staff spend the majority of their time

    • Admin section - various admin related features used by FastBar support staff. The bulk of management related to a specific event is done from the Event Control Center, acting on behalf of an event organizer

  • api.getfastbar.com - our API, primarily used by our own internal apps. We also open up various endpoints to some partners. We don’t make this broadly accessible publicly yet because it doesn't need to be. However, it’s something we may decide to open up more broadly in the future

The main web app and API share the same underlying core business logic, and are backed by a variety of other components, including:

  • WebJobs:

    • Bulk Message Processor - whenever we're sending a bulk message, like an email or SMS that is intended to go to many attendees, the Bulk Message Processor will be responsible for enumerating and queuing up the work. For example, if we were to send out a bulk SMS to 10,000 attendees of the event, whatever initiates this process (the web app or the API) will queue up a work item for the Bulk Message Processor that essentially says "I want to send a message to a whole bunch of people". The Bulk Message Processor will pick up the message and start enumerating 10,000 individual work items that it will queue up for processing by the Outbound SMS Processor, a downstream component. The Outbound SMS Processor will in turn pick up each work item and send out individual SMSs

    • Order Processor - whenever we ingest orders from the POS client via the API, we do the minimal amount of work possible so that we can respond quickly to the client. Essentially, we're doing some initial validation and persisting the order in the database, then queuing a work item so that the Order Processor can take care of the heavy lifting later, and requests from the client are not unnecessarily delayed. This component is very active during an event

    • Outbound Email Processor - responsible for sending an individual email, for example as the result of another component that queued up some work for it. We use Mailgun to send emails

    • Outbound Notification Processor - responsible for sending outbound push notifications. Under the covers this uses Azure Notification Hubs

    • Outbound SMS Processor - responsible for sending individual SMS messages, for example a text message update to an attendee after they place an order. We send SMSs via Twilio

    • Sample Data Processor - when we need to create a sample event for demo or testing purposes, we send this work to the Sample Data Processor. This is essentially a job that an admin user may initiate from the web app, and since it could take a while, the web app will queue up a work item, then the Sample Data Processor picks it up and goes to work creating a whole bunch of test data in the background

    • Tab Authorization Processor - whenever we need to authorize someone's credit card that is connected to their tab, the Tab Authorization Processor takes care of it. For example, if attendees are pre-registering themselves for an event weeks beforehand, we vault their credit card details securely, and only authorize their card via the Tab Authorization Processor 24 hours before the event starts

    • Tab Payment Processor - when it comes time to execute payments against a tab, the Tab Payment Processor is responsible for doing the work

    • Tab Payment Sweeper - before we can process a tab's payment, that work needs to be queued. For example, after an event, all tabs get marked for processing. The Tab Payment Sweeper runs periodically, looking for any tabs that are marked for processing, and queues up work for the Tab Payment Processor. It's similar in concept to the Bulk Message Processor in that it's responsible for queuing up work items for another component

    • Tab Authorization Sweeper - just like the Tab Payment Sweeper, the Tab Authorization Sweeper looks for tabs that need to be authorized and queues up work for the Tab Authorization Processor

  • Functions

    • Client Logs Dispatcher - our client devices are responsible for pushing up their own zipped-up, JSON formatted log files to Azure Blob Storage. The Client Logs Dispatcher then takes the logs and dispatches them to our logging system, which is Log Analytics, part of Azure Monitor

    • Server Logs Dispatcher - similar in concept to the Client Logs Dispatcher, the Server Logs Dispatcher is responsible for taking server-side logs, which initially get placed into Azure Table Storage, and pushing them to Log Analytics so we have both client and server logs in the same place. This allows us to do end to end queries and analysis

    • Data Exporter - whenever a user requests an export of data, we handle this via the Data Exporter. For large events, an export could take some time. We don’t want to tie up request threads or hammer the database, so we built the Data Exporter to take care of this in the background

    • Tab Recalculator - we maintain a tab for each attendee at an event; it's essentially a summary of all the purchases they've made at the event. From time to time, changes happen that require us to recalculate some or all tabs for an event. For example, let's say the event organizer realized that beer was supposed to be $6, but was accidentally set to $7, and wanted to fix this going forward and for all previous orders. This means we need to recalculate all tabs that have orders involving the affected products. For a large event there could be many thousands of tabs affected by this change, and since each tab has unique characteristics, including the rules around how totals should be calculated, this has to be done individually for each tab. The Tab Recalculator takes care of this work in the background

    • Tags Deduplicator - FastBar is a complicated distributed system that supports offline operation of client devices like the Registration and POS apps. On the server side, we also process things in parallel in the background. Long story short, these two characteristics mean that sometimes data can get out of sync. The Tags Deduplicator helps put things back in sync so we eventually arrive at a consistent state
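The queue-based fan-out used by the Bulk Message Processor (and, in spirit, by the sweepers) can be sketched like this. The types, names and the in-memory queue are purely illustrative - the real system persists its work items to Azure Storage Queues:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical work item shapes - not FastBar's actual types.
record BulkSmsWork(long EventId, string Message);
record SendSmsWork(string PhoneNumber, string Message);

class BulkMessageProcessor
{
    // Stands in for the queue consumed by the Outbound SMS Processor.
    public Queue<SendSmsWork> OutboundSmsQueue { get; } = new();

    // One "bulk" work item fans out into one work item per recipient.
    public void Process(BulkSmsWork work, IEnumerable<string> attendeePhoneNumbers)
    {
        foreach (var phone in attendeePhoneNumbers)
            OutboundSmsQueue.Enqueue(new SendSmsWork(phone, work.Message));
    }
}

class Demo
{
    static void Main()
    {
        var processor = new BulkMessageProcessor();
        var attendees = Enumerable.Range(0, 10_000).Select(i => $"+1206555{i:D4}");
        processor.Process(new BulkSmsWork(42, "Doors open at 7pm!"), attendees);
        Console.WriteLine(processor.OutboundSmsQueue.Count); // one item per attendee
    }
}
```

The key design point is that the initiator only queues a single small message; the enumeration of thousands of downstream work items happens in the background, off the request path.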

Azure Functions vs WebJobs

So, how come some things are implemented as Functions and some as WebJobs? Quite simply, the WebJobs were built before Azure Functions existed and/or before Azure Functions really became a thing.

Nowadays, Azure Functions seem to be the preferred technology, so we made the decision a while ago to create any new background components using Functions, and if any significant refactoring is required to a WebJob, we'll take the opportunity to move it over to a Function as well.

Over time, we plan on phasing out WebJobs in favor of Functions.

Communication Between the Web App / API and Background Components

This is almost exclusively done via Azure Storage Queues. The only exception is the Client Logs Dispatcher, which can also be triggered by a file showing up in Blob Storage.

Azure has a number of queuing solutions that could be used here. Storage queues is a simple solution that does what we need, so we use it.
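As a hedged sketch of that handoff (the queue name and payload are made up for illustration; the calls are from the Azure.Storage.Queues v12 SDK), the web app queues a small message and a background worker picks it up later:

```csharp
using System;
using System.Text.Json;
using Azure.Storage.Queues;

class OrderHandoffSketch
{
    static void Main()
    {
        var connectionString =
            Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");
        var queue = new QueueClient(connectionString, "order-processor-work");
        queue.CreateIfNotExists();

        // The web app/API does the minimum: persist the order, then queue a
        // tiny message telling the Order Processor which order to pick up.
        var payload = JsonSerializer.Serialize(new { OrderId = 12345 });
        queue.SendMessage(payload);

        // Later, the background component receives the message, does the
        // heavy lifting, and deletes it once the work is done.
        var msg = queue.ReceiveMessage().Value;
        if (msg != null)
        {
            // ... process the order referenced by the message ...
            queue.DeleteMessage(msg.MessageId, msg.PopReceipt);
        }
    }
}
```

If the worker crashes mid-processing, the message's visibility timeout expires and it reappears on the queue, which is why processing needs to be idempotent.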

Communication with 3rd Party Services

Wherever we can, we’ll push interaction with 3rd party services to background components. This way, if 3rd party services are running slowly or down completely for a period of time, we minimize impact on our system.

Blob and Table Storage

We utilize Blob storage in a couple of different ways:

  • Client apps upload their logs directly to blob storage for processing by the Client Logs Dispatcher

  • Client apps have a feature that allows the user to create a bug report and attach local state. The bug is logged directly into our work item tracking system, Pivotal Tracker. We also package up the client-side state and upload it to blob storage. This allows developers to re-create the state from a client device on their own device, or in the simulator, for debugging purposes

Table storage is used for the initial step in our server-side logging. We log to Table storage, and then push that data up to Log Analytics via the Server Logs Dispatcher.

Azure SQL

Even though there are a lot of different technologies to store data these days, we use Azure SQL for a few key reasons: it’s familiar, it works, and it’s a good choice for a financial system like FastBar, where data is relational and we require ACID semantics.
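To illustrate why ACID semantics matter here, consider a minimal sketch of a tab payment (table and column names are invented for illustration, using Microsoft.Data.SqlClient): the debit and the payment record must commit or roll back together.

```csharp
using Microsoft.Data.SqlClient;

class TabPaymentSketch
{
    static void Pay(string connectionString, long tabId, decimal amount)
    {
        using var conn = new SqlConnection(connectionString);
        conn.Open();
        using var tx = conn.BeginTransaction();
        try
        {
            // Debit the tab's balance...
            var debit = new SqlCommand(
                "UPDATE Tabs SET Balance = Balance - @amount WHERE Id = @tabId",
                conn, tx);
            debit.Parameters.AddWithValue("@amount", amount);
            debit.Parameters.AddWithValue("@tabId", tabId);
            debit.ExecuteNonQuery();

            // ...and record the payment in the same transaction.
            var record = new SqlCommand(
                "INSERT INTO Payments (TabId, Amount) VALUES (@tabId, @amount)",
                conn, tx);
            record.Parameters.AddWithValue("@tabId", tabId);
            record.Parameters.AddWithValue("@amount", amount);
            record.ExecuteNonQuery();

            tx.Commit();   // both statements become visible atomically
        }
        catch
        {
            tx.Rollback(); // neither statement takes effect
            throw;
        }
    }
}
```

This all-or-nothing behavior is exactly what a relational database with transactions gives you for free, and is much harder to get right on a store without ACID guarantees.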

Conclusion

That’s a brief overview of FastBar’s technical architecture. In future posts, I’ll go over more of the why behind the architectural choices and the key benefits that it has.

Choosing a Tech Stack for Your Startup - Part 3: Cloud Stacks, Evolving the System and Lessons Learnt

This is the final part in a 3 part series around choosing a tech stack for your startup:

  • In part 1, we explored the choices we made and the evolution of FastBar’s client apps

  • In part 2, we started the exploration of the server side, including our technology choices and philosophy when it came to building out our MVP

  • In part 3, this post, we’ll wrap up our discussion of the server side choices and summarize our key lessons learnt.

As a recap, here are the areas of the system we’re focused on:

FastBar Components - Server Hilighted.png

And in part 2, we left off discussing self-hosting vs utilizing the cloud. TL;DR - forget about self hosting and leverage the cloud.

Next step - let’s pick a cloud provider…

AWS vs Azure

In 2014, AWS was the undisputed leader in the cloud, but Azure was quickly catching up in capabilities and feature set.

Through the accelerator we got into, 9 Mile Labs, we had access to some free credits with both AWS and Azure.

I decided to go with Azure, in part because they offered more free credits via their BizSpark Plus program than what AWS was offering us, in part because I was more familiar with their technology than that of AWS, in part because I'm generally a fan of Microsoft technology, and in part because I wanted to take advantage of their Platform as a Service (PaaS) offerings. Specifically, Azure App Service Web Apps and Azure SQL - AWS didn't have any direct equivalents for those at the time. I could certainly have spun up VMs and installed my own versions of IIS and SQL Server on AWS, but that was more work for me, and I had enough things to do.

PaaS vs IaaS

After doing some investigation into Azure's PaaS offerings, namely App Service Web Apps and Azure SQL, I decided to give them a go.

With PaaS offerings, you're trading some flexibility for convenience.

For example, with Web Apps, you don’t deploy your app to a machine or a specific VM - you deploy it to Azure's Web app service, and it deploys it to one or more VMs on your behalf. You don’t remote desktop into the VM to poke around - you use the web-based tools or APIs that Microsoft provides for you. Azure SQL doesn't support all of the features that regular SQL does, but it supports most of them. You don’t have the ability to configure where your database and log files will be placed, Azure manages that for you. In most cases, this is a good thing, as you've got better things to do.

With Web Apps, you can easily set up auto-scaling, like I described in part 2, and Azure will magically create or destroy VMs according to the rules you set up, and route traffic between them. With Azure SQL, you can do cool things like create read-only replicas and geo-redundant failover databases within minutes.

If there is a PaaS offering of a particular piece of infrastructure that you require on whatever cloud you're using, try it out. You'll be giving up some flexibility, but you'll get a whole lot in return. For most scenarios, it will be totally worth it.

3rd Party Technologies

Stripe

The first 3rd party technology service we integrated was Stripe - FastBar is a payment system after all, so we needed a way to vault credit cards and do payment processing. At the time Stripe was the gold standard in terms of developer friendly payment APIs, so we went with it and still use it to this day. We've had our fair share of issues with Stripe, but overall it's worked well for us.

Loggly: A Cautionary Tale

Another piece of 3rd party tech we used early on was Loggly. This is essentially “logging as a service” and instead of you having to figure out how to ingest, process and search large volumes of log data, Loggly provides a cloud-based service for you.

We used this for a couple of years and eventually moved off it because we found the performance was not reliable.

We ran into an incident one time where Loggly, which typically would respond to our requests in 200-300ms, was taking 90-120 seconds to respond (ouch!). Some of our server-side web and API code called Loggly directly as part of the request execution path (a big no-no, that was our bad) and needless to say, when your request thread is tied up waiting for a network call that is going to take 90-120 seconds, everything went to hell.

During the incident, it was tough for us to figure out what was going on, since our logging was impacted. After the incident, we analyzed and eventually tracked down 90-120 second response times from Loggly as the cause. We made changes to our system so that we would never again call Loggly directly as part of a request's execution path, rather we'd log everything "locally" within the Azure environment and have a background process that would push it up to Loggly. This is really what we should have been doing from the beginning. At the same time, Loggly should have been more robust.

This made us immune to any future slowdowns on the Loggly side, but over time we still found that our background process was often having trouble sending data to Loggly. We had an auto-retry mechanism set up, so we’d keep retrying until we succeeded. Eventually this would work, but we found the retry mechanism was being triggered way too often for our liking. We also found similar issues on our client apps, where we'd have our client apps send logs directly to Loggly in the background to avoid having to send to our server, then to Loggly. This was more of an issue, since clients operate in constrained-bandwidth environments.
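The "log locally, ship in the background with retries" approach can be sketched as a generic retry-with-backoff loop. This is illustrative, not FastBar's actual code:

```csharp
using System;
using System.Threading;

static class RetryingLogShipper
{
    // Attempts to send a batch of locally-buffered logs to the logging
    // service, backing off exponentially between failed attempts.
    public static bool TrySend(Func<bool> sendBatch, int maxAttempts = 5,
                               TimeSpan? initialDelay = null)
    {
        var delay = initialDelay ?? TimeSpan.FromSeconds(1);
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                if (sendBatch()) return true; // batch accepted
            }
            catch
            {
                // Swallow and retry - logging must never break the caller.
            }
            if (attempt < maxAttempts)
            {
                Thread.Sleep(delay);
                delay += delay; // exponential backoff: 1s, 2s, 4s, ...
            }
        }
        return false; // leave the batch buffered for the next run
    }
}
```

Because this runs in a background process, a slow or flaky logging service delays only log shipping, never a customer-facing request.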

Overall, we experienced lots of flakiness with Loggly regardless of if we were communicating with it from the client or server.

In addition, the cheaper tiers of Loggly are quite limited in the amount of data you can send to them. For a large event, we'd quickly hit the data cap, and the rest of our logs would be dropped. This made the Loggly search features (which were awesome by the way, and one of the key things that attracted us to Loggly) pretty much useless for us, since we'd only have a fraction of our data available unless we moved up to a significantly more expensive tier.

We removed Loggly from the equation in favor of Azure's Log Analytics (now renamed to Azure Monitor). It's inside Azure with the rest of our stuff, has awesome query capabilities (on par with Loggly) and it’s much cheaper for us thanks to a usage-based pricing model that scales with how much you use it, as opposed to Loggly's handful of main pricing buckets.

Twilio

We use Twilio for sending SMS messages. Twilio has worked great for us from the early days, and we don’t have any plans to change it anytime soon.

Cloudinary

On a previous project, I got deep into the complexities of image processing: uploading, cropping, resizing and hosting, distributing to a CDN etc…

TL;DR it's something that seems really simple on the surface, but quickly spirals out of control - it’s a hard problem to solve properly. 

I learnt my lesson on a previous project, and on FastBar, I did not pass Go and did not collect $200, rather I went straight to Cloudinary. It's a great product, easy to use, and it removes all of our image processing and hosting hassles.

Mailgun

Turns out sending email is hard. That’s why companies like Mailgun and Sendgrid exist.

We decided to go with Mailgun since it had a better pricing model for our purposes compared to Sendgrid. But fundamentally, they’re both pretty similar. They help you take care of the complexities of sending reliable email so you don’t have to deal with it.

Building out the Event Control Center

As our client apps and their underlying APIs started to mature, we started turning our development focus to building out the Event Control Center on the server - the place where event organizers and FastBar staff could fully configure all aspects of the event, manage settings, configure products and menus, view reports etc…

This was essentially a traditional web app. We looked at using tech like React or Angular. As we spec'd out our screens, we realized that our requirements were pretty straightforward. We didn't have lots of pages that needed a rich UI, we didn't need a Single Page App (SPA), and overall, our pages were pretty simple. We decided to go with a more "traditional" request/response ASP.NET web app, using HTML5, JS, CSS, Bootstrap, jQuery etc…

The first features we deployed were around basic reporting, followed by the ability to create events, edit settings for the event, view and manage attendee tabs, manage refunds, create and configure products and menu items.

Nowadays, we've added multi user support, tax support, comprehensive reporting and export, direct and bulk SMS capabilities, configuration of promotions (ie discounts), device management, attendee surveys and much more.

The days of managing via SQL Management Studio are well in the past (thankfully!).

Re-building the public facing website

For a long time, the public facing section of the website was a simple 1-pager explanation of FastBar. It was long overdue for a refresh, so in 2018 I set out to rebuild it, improve the visuals, and most importantly, update the content to better reflect what we had to offer customers.

For this, we considered a variety of options, including: custom building an ASP.NET site, Wordpress, Wix, Squarespace etc...

Building a custom ASP.NET website was kind of a hassle. Our pages were simple, but it would be highly beneficial if non-developers could easily edit content, so we really needed a basic CMS. This meant choosing between a self-hosted CMS, like WordPress, or a hosted CMS, like WordPress.com, Wix or Squarespace.

I had built and deployed enough basic WordPress sites to know that I didn't want to spend our time, effort and money on self-hosting it. Self-hosting means having to deal with constant updates to the platform and the plugins (WordPress is a ripe target for hackers, so keeping everything up to date is critical), managing backups and the like.

We were busy enough building features for the FastBar system, and I didn’t want to allocate precious dev resources to the public facing website when a hosted solution at $12/mo (or thereabouts) would be sufficient.

For WordPress in general, I found it tough to find a good quality template that matched the visuals I was looking for. To be clear, there are a ton of templates available, I'd say way too many. I found it really hard to hunt through the sea of mediocrity to find something I really liked.

When evaluating hosted offerings like Squarespace and Wix, my first concern was that, as a technology company, potential engineering hires might judge us for using something like that. I don’t know about you, but I'll often F12 or Ctrl-U a website to see what's going on under the hood :) Also, while quick to spin up, hosted offerings like Squarespace lacked what I consider basic features, like version control, so that was a big red flag.

Eventually I determined that the pros and simplicity of a hosted offering outweighed the cons and we went with Squarespace. Within about a week, we had the site re-designed and live - the vast majority of that time was spent on the marketing and messaging, the implementation part was really easy.

Where we're at today

Today, our backend is composed of 3 main components: the Core Web App and API, our public facing website, and the 3rd party services we depend on.

Our core Web App and API is built in ASP.NET and WebAPI and runs on Azure. We leverage Azure App Services, Azure SQL, Azure Storage service (Blob, Table and Queue), Azure Monitor (Application Insights and Log Analytics), Azure Functions, WebJobs, Redis and a few other bits and pieces.

The public facing website runs on Squarespace. 

The 3rd party services we utilize are Stripe, Cloudinary, Twilio and Mailgun.

Lessons Learnt

Looking back at our previous lessons learnt from client side development:

  1. Optimize for Productivity

  2. Choose Something Popular

  3. Choose the Simplest Thing That Works

  4. Favor Cross Platform Tech

The first 3 are highly applicable to server side development. The 4th is more client specific. You could make the argument that it’s valuable server-side as well; it depends on how many server environments you’re planning on deploying to. In most cases, you’re going to pick a stack and stick with it, so it’s less relevant.

Here are some additional lessons we learnt on the server side.

Ruthlessly Prioritize

This one applies to both client and server side development. As a startup, it's important to ruthlessly prioritize your development work and tackle the most important items first. How far can you go without building out an admin UI and instead relying on SQL scripts? What's the most important thing that your customers need right now? What is the #1 feature that will help you move the product and business forward?

Prioritization is hard, especially because it's not just about the needs of the customer. You also have to balance the underlying health of the code base, the design and the architecture of the system. You need to be aware of any technical debt you're creating, and you need to be careful not to paint yourself into a corner that you might not be able to get out of later. You need to think about the future, but not get hung up on it too much that you adopt unnecessary work now that never ends up being needed later. Prioritization requires tough tradeoffs. 

Prioritization is more art than science and I think it's something that continues to evolve with experience. Prioritize the most important things you need right now, and do your best to balance that with future needs.

Just go Cloud

Whether you're choosing AWS, Azure, Google or something else, just go for the cloud. It's pretty much a given these days, so hopefully you don't have the urge to go to the dark side and buy and host your own servers. 

Save yourself the hassle, the time and the money and utilize the cloud. Take advantage of the thousands upon thousands of developers working at Amazon, Microsoft and Google who are working to make your life easier and use the cloud.

Speaking of using the cloud…

Favor PaaS over IaaS

If there is a PaaS solution available that meets your needs, favor it over IaaS. Sure, you'll lose some control, but you'll gain so much in terms of ease of use and advanced capabilities that would be complicated and time consuming for you to build yourself.

It means less work for you, and more time available to dedicate to more important things, so favor PaaS over IaaS.

Favor Pre-Built Solutions

Better still, if there is an entire solution available to you that someone else hosts and manages, favor it.

Again, less work for you, and allows you to focus your time, energy and resources on more important problems that will provide value to your customers, so favor pre-built solutions.


Conclusion

In part 1 we discussed client side technology choices we went through when building FastBar, including our thinking around Android vs iOS, which client technology stack to use, how our various apps evolved, where we’re at today, and key lessons learnt.

In part 2 and part 3, this post, we discussed the server side, including choosing a server side stack, building out an MVP, deciding to self-host or utilize the cloud, AWS vs Azure, various other 3rd party technologies we adopted, where we’re at today and more lessons learnt, primarily related to server side development.

Hopefully you can leverage some of these lessons in building your own startup. Good luck, and go change the world!

Choosing a Tech Stack for Your Startup - Part 2: Server Side Choices and Building Your MVP

In part 1 of this series, we covered a detailed overview of how FastBar chose its client side technology stack, how it evolved over the years, where it is today, and key lessons we learnt along the way.

In this post, part 2, we'll start exploring the server side technology choices and conclude in part 3.

FastBar Components - Server Hilighted.png

Selecting a Stack

The very first prototype version of FastBar that was built at Startup Weekend didn't have much of a server side at all. I think we had a couple of basic API endpoints built in Ruby on Rails and deployed to Heroku.

After Startup Weekend when we became serious about moving FastBar forward, building out the server side became a priority and we needed to select a tech stack.

We discussed various options: Ruby on Rails, Go, Java, PHP, Node.js and ASP.NET. I decided to go with ASP.NET MVC and C# for a few reasons:

  1. Familiarity

  2. Suitability for the job

  3. How the platform was evolving

Familiarity

.NET and C# were the platform and language that I was most familiar with. I spent 7.5 years working at Microsoft, the first 2 of which were in the C# compiler team, the next 5.5 helping customers architect and build large-scale systems on Microsoft technology. Since leaving Microsoft, I spent a lot of time using .NET for a startup, along with some consulting work. For me, .NET technology was going to be the most productive option.

In tech, there is a ton of religion, and often times (perhaps most of the time) people make decisions on technologies based on their particular flavor of religion. They believe that X is faster than Y, or A is better than B for [insert unfounded reason here].  The reality is there are many technology choices, all of which have pros and cons. So long as you're choosing a technology that is mainstream and well suited for the task at hand, you can probably be successful building your system using (almost) whatever technology stack you prefer.

Suitability for the job

It's important to select a tool that's suitable for the job you're trying to achieve. For us, our backend required a website and an API - pretty standard stuff. ASP.NET is a great platform for doing just that. Likewise, there are many other fine choices, including Ruby on Rails or Node.js.

How the platform was evolving

Back in 2014, Microsoft technologies were often shunned by developers who were not familiar with .NET, because they felt that Microsoft was the evil empire, all they produced was proprietary tech and closed source code, and nothing good could possibly come from the walled garden that was Redmond.

The reality was quite different. As early as 2006, Microsoft started a project called CodePlex as a way to share source code, and used it to publish technologies like the AJAX Control Toolkit. In October 2007, Microsoft published the source code for the entire .NET Framework. It wasn't an "open source" project per se, but rather "reference source" - it allowed developers to step into the code and see what was going on under the hood, addressing a primary complaint about proprietary software and closed source systems. Also in October 2007, Microsoft announced that the upcoming ASP.NET MVC project would be fully open source. In 2008, the rest of the ASP.NET technologies were also open sourced.

That trend continued, with Microsoft open sourcing more and more stuff. Fast forward to April 2014 and Microsoft made a big announcement regarding open source: the creation of the .NET foundation, and the open-sourcing of large chunks of .NET. Later that year, they open sourced even more stuff.

Fast forward again to today, now Microsoft owns Github and has made most, if not all, of .NET open source. It's pretty clear that Microsoft is "all in" on open source. Here's an interesting article on the state of Microsoft and open source as of December 2018. And if you're interested in some more background, Scott Hunter and Beth Massi have some great Medium posts that chronicle some of Microsoft's journey into open source.

Back in 2014, I was a fan of .NET technology, I liked the direction it was moving in, and felt that the trend towards open sourcing more stuff would only strengthen the technology and the ecosystem. Looking back, this has proved correct.

Building the Basics

With our tech stack chosen, the first 2 things we needed to build were (a) a basic customer facing website and (b) an API with underlying business logic and data schema to support the POS. In startup land, this is often called an MVP, or Minimum Viable Product.

For the customer facing website, I built a 1-pager ASP.NET website using Bootstrap. It was simple, but looked decent enough and was mobile friendly. It really just needed to serve as a landing page with a brief explanation of FastBar and a "join our email list" form. That site actually lasted way longer than it should have :)

The more important thing we needed was an API that the client apps could talk to: first to push up order details and next to display tabs to attendees so they could keep track of their spending. Although it would have been nice to have an administrative UI so we could view and manage attendees and orders, configure products and menus, view reports etc… there was a lot of effort required to build it, and it wasn't the highest priority thing we needed to implement.

Our First UI: SQL Management Studio and Excel

For a long time, the primary interface we used to setup and manage events was SQL Management Studio. I created an Excel spreadsheet that served as a helper tool to generate SQL statements which would in turn be run in SQL Management Studio. This was definitely a rough and ready approach, and not my preferred path, but hey, as a startup with limited resources, you need to pick your battles.

Reporting was done via a somewhat complicated SQL query that would spit out tabular sales results, which I'd then copy/paste into a fancy Excel spreadsheet I'd created. The results of the copy/paste would drive a "dashboard" tab in the spreadsheet that summarized key metrics, as well as a series of other pages that would show fancy graphs of sales over time and product breakdowns.

This was all rather crude, but like I said, as a startup with limited resources, you need to pick your battles and focus on highest priority tasks first.

You see, our attendees didn’t care about how the system was configured or how reports were generated. They simply wanted to get their wristband, tap to pay for their drinks and get back to enjoying the event.

Our event organizers didn’t much care how the event was configured either, so long as the point of sale displayed the right products at the right prices, customers were charged correctly, money appeared in their bank account and they could get some kind of reporting. We took care of all of the configuration of the system on their behalf, and Excel-based reports were fine for them, in the early days.

Self-hosted vs Cloud

In 2011 some friends of mine left Microsoft to start a company. At the time, Amazon Web Services (AWS) was coming along nicely, and most forward-thinking companies and startups were looking to the cloud.

The CTO of my friend's company, let's call him "Bob" (not his real name), decided that it would be cheaper to buy the hardware himself instead of going with something cloud-based. Bob created spreadsheets to justify his desire to buy servers and show how, over time, it would be cheaper. In reality, Bob was a "build his own metal" kinda guy. Bob wanted to spend money on cool hardware and build his own servers, and that's what he was comfortable with, so he found a way to justify it.

Bob spent a couple of hundred thousand dollars on servers. A few years later, all of those servers were sitting in a spare room in their office collecting dust.

Don't be like Bob.

In 2011, it didn’t make sense to buy your own servers. AWS was a great choice. Azure was early, and quite frankly pretty crap at the time. Google's App Engine existed, but I don't think anyone actually used it.

In 2014 when FastBar started, it didn’t make sense to buy your own servers. AWS was cranking along and adding new services at a furious pace, and Azure was busy catching up. Azure had moved from crap a couple of years earlier to a really solid offering by 2014.

Today, it definitely doesn't make sense to buy your own servers. If you're Google, or Microsoft, or Amazon, then sure, buy as many servers as you need. But for the rest of us, cloud computing is so much simpler and easier. For example, at FastBar we have a script that will deploy a fresh version of our entire FastBar environment, including:

  • Web applications, APIs and background workers across multiple servers

  • Geo-redundant SQL databases

  • Geo-redundant storage services

  • Monitoring and logging resources

  • Redis caches

There are 20-odd components altogether, and this all happens within minutes. This is something that would take days to deploy to our own servers by hand, or we would have spent weeks or months automating the deployment process ourselves.
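To give a flavor of what a script like this looks like, here's a minimal, hypothetical sketch using the Azure CLI. Every resource name, SKU and region below is made up (FastBar's actual script isn't shown here), and `DRY_RUN` defaults to printing the commands rather than running them:

```shell
#!/bin/sh
# Hypothetical sketch of a scripted environment deployment using the Azure CLI.
# All names, SKUs and the region are placeholders; adapt them to your setup.
# DRY_RUN defaults to 1, so the script prints the commands it would run.
# Set DRY_RUN=0 to actually execute them (requires 'az login' first).
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

deploy() {
  RG=fastbar-prod   # hypothetical resource group
  LOC=westus2       # hypothetical region
  run az group create --name "$RG" --location "$LOC"
  # Web applications, APIs and background workers
  run az appservice plan create --name "$RG-plan" --resource-group "$RG" --sku P1V2
  run az webapp create --name fastbar-web --plan "$RG-plan" --resource-group "$RG"
  # SQL database (geo-redundancy configured separately)
  run az sql server create --name fastbar-sql --resource-group "$RG" --location "$LOC" \
      --admin-user fbadmin --admin-password "${SQL_PASSWORD:-example-placeholder}"
  run az sql db create --name fastbar-db --server fastbar-sql --resource-group "$RG"
  # Geo-redundant storage
  run az storage account create --name fastbarstore --resource-group "$RG" --sku Standard_GRS
  # Redis cache
  run az redis create --name fastbar-cache --resource-group "$RG" \
      --location "$LOC" --sku Basic --vm-size c0
}

deploy
```

In practice you'd add the monitoring, logging and worker resources the same way, or use a declarative template instead of imperative CLI calls, but the principle is the same: the entire environment is reproducible from a single script.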

Not only that, but if we decide we need to scale up our front-end web servers, all it takes is a couple of clicks, and within minutes we're running on more powerful hardware.

Better still, we have automatic scaling set up, so if our web servers start getting overloaded, Azure makes more of them magically appear, and when things go back to normal, the extra servers simply go away.
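Azure's autoscale rules can themselves be scripted. The sketch below is hypothetical (names are made up, and this isn't FastBar's actual configuration), but it shows the shape of CPU-based scale-out/scale-in rules for an App Service plan; `DRY_RUN` defaults to printing the commands:

```shell
#!/bin/sh
# Hypothetical sketch: CPU-based autoscaling for an App Service plan via the
# Azure CLI. Resource names are placeholders. DRY_RUN=1 (default) prints the
# commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

setup_autoscale() {
  RG=fastbar-prod
  run az monitor autoscale create --resource-group "$RG" \
      --resource fastbar-prod-plan --resource-type Microsoft.Web/serverfarms \
      --name fastbar-autoscale --min-count 2 --max-count 10 --count 2
  # Add a web server when average CPU exceeds 70% over 5 minutes...
  run az monitor autoscale rule create --resource-group "$RG" \
      --autoscale-name fastbar-autoscale \
      --condition "CpuPercentage > 70 avg 5m" --scale out 1
  # ...and remove one when it drops back below 30%.
  run az monitor autoscale rule create --resource-group "$RG" \
      --autoscale-name fastbar-autoscale \
      --condition "CpuPercentage < 30 avg 5m" --scale in 1
}

setup_autoscale
```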

It's a beautiful thing, and it makes me very happy. I'm pretty sure I still have a bit of PTSD when I think about how much effort it would take to set this stuff up manually before the cloud came along.

Another argument I used to hear against the cloud from Bob was that the cloud has lots of outages (here are some recent outages from 2018), but Bob claimed his "servers have never gone down". Maybe they haven't. But they will eventually, and usually at the worst possible time.

It's true that the cloud has outages. And these days when something fails at one of the big cloud providers, it's got the potential to take out a huge portion of the internet. But the cloud providers are getting better - their systems get stronger, and they learn from their mistakes.

Personally, I'd much rather be relying on something like Azure or AWS or Google Cloud, so that when an outage occurs (note I said when, not if - all systems go down at some point), there are thousands upon thousands of people tweeting/writing/blogging about it, and hundreds or maybe thousands of engineers working on fixing the problem.

Forget about buying servers, and deploy your system to the cloud. There are so many benefits - from zero up front capital expenditure, to spinning up and down infrastructure and building out globally scalable and redundant systems within minutes.

Stay tuned for part 3 where we'll explore the different cloud stacks, Platform as a Service (PaaS) vs Infrastructure as a Service (IaaS), 3rd party technologies, where FastBar is at today and key lessons learnt.

Why the cloud can't be trusted

Late on the night of June 19th, I was chatting with one of my developers and he mentioned that our hosted source control provider, Codespaces, was down. I checked the website, and sure enough, nothing. At first, it didn't seem like anything new; from time to time Codespaces would go down (more often than I'd like), but it would eventually come back up and everything would be back to normal.

I figured I'd check their Twitter feed to see if there were any details on what was happening and an ETA for it to be back online.

The Codespaces website they were pointing to was down so I couldn't get any more information there, but some quick googling revealed that the nightmare cloud security scenario had just occurred to my hosted source code provider. From reading various articles online it became clear that all or most of Codespaces data had been deleted. Ouch.

An attacker orchestrated a DDoS attack, gained control of Codespaces' AWS account and tried to extort money. When Codespaces didn't pay, the attacker started deleting assets in the AWS account: machine images, EBS volumes, backups, customer source code repositories and snapshots. Almost everything was deleted, essentially wiping out the business along with all of their customers' precious data.

Fortunately, I was one of the lucky ones. By chance, my repository was hosted on one of their older nodes that survived the attack, and I was able to get them to send me a link to download a dump of the repository. It was an intense 48 hours while I contemplated next steps in case my source code repository containing 3.5 years' worth of development history was lost.

As a side note, I recall an email from Codespaces some time ago with an offer to upgrade to their new SVN hosting. It didn’t seem super important and I never got around to it. I'm normally all about the upgrades, but that's one upgrade I'm very glad I didn’t do :)

During the 48 hours when I was contemplating the worst case scenario for Shindigg, I realized that it wasn't actually that bad. The latest version of the code is available locally on our development and build machines so at worst we'd lose our history, but still have the latest version. Although being able to look back in time at source code history isn't something that you need everyday, it's important to have when you need it. Losing history is a hassle, but not crippling.

So, what can we learn from this?

Lesson 1 - Do Not Trust Your Data To Any Single Cloud Provider

The first and biggest lesson is that you should never trust your data to any one provider. No matter who the provider is or what kind of redundancy they have in place, keep your own copy somewhere else.

Whether you're an individual, a business using hosted services, or a business providing hosted services to others (and probably using someone else's hosted services in the process), you've got to take responsibility for your own backups.

I made a mistake trusting our source code repository data to Codespaces and relying on them to properly operate their business and back up their data, which they failed to do. Our most important data, the latest version of the source, was replicated in several places, so it was safe, but the repository (and therefore history) was vulnerable.

I fixed that by choosing a new hosted source code control provider that has a feature to automatically create backups and send them to my S3 account, thus giving me redundancy across 2 different providers. If my source code control provider gets wiped out, I've got a backup on a completely different system.
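The general pattern is simple: dump your data on a schedule and copy it to a second provider. The sketch below is hypothetical (database name, bucket and paths are all made up) and assumes a Postgres database with backups pushed to S3, but the same shape works for any data store and any pair of providers; `DRY_RUN` defaults to printing the commands:

```shell
#!/bin/sh
# Hypothetical sketch: dump a database and copy the backup to a *different*
# cloud provider, so a single compromised account can't destroy both the
# live data and its backups. All names and paths are placeholders.
# DRY_RUN=1 (default) prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

backup_offsite() {
  STAMP=$(date -u +%Y%m%dT%H%M%SZ)
  FILE="backup-$STAMP.sql.gz"
  # Dump and compress the database...
  run sh -c "pg_dump fastbar_db | gzip > $FILE"
  # ...then ship it to a second provider (here, AWS S3).
  run aws s3 cp "$FILE" "s3://fastbar-offsite-backups/$FILE"
}

backup_offsite
```

Run it from cron on whatever schedule suits your business (e.g. `*/10 * * * * /opt/backup/offsite.sh` for every 10 minutes), and make sure the credentials it uses can only write new backups, not delete old ones.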

At Shindigg, we use a combination of self-hosting and a variety of other services including Azure, Cloudinary, Stripe, Mandrill etc… The most important data for us is our core database, and it gets backed up every 10 minutes and those backups are immediately uploaded offsite to a completely different provider. Which brings me to my next point…

Lesson 2 - Offsite Means At a Different Cloud Provider

Codespaces claimed to have offsite backups. Technically, the data was probably replicated to different AWS data centers, but it was still all in AWS, and it was all accessible through a single AWS account. That's not really "offsite" when it comes to the cloud.

In the cloud, offsite doesn't just mean relying on the cloud provider to have the data in a different physical location - you need critical data backed up at least to a separate account, and ideally to a completely different provider.

At a minimum, Codespaces should have pushed their backups to a separate AWS account; ideally, they would have sent backups to Azure, Rackspace or any other cloud storage provider. That way, when their main AWS account was compromised, they wouldn't have lost everything. It may have taken them a couple of days to recover from backups, which would have been bad, but having almost all of your customer data, and therefore your business, irrevocably wiped out is significantly worse. Which brings me to lesson 3...

Lesson 3 - Think About The Worst Case Scenario

Spend some time thinking about the worst case scenario and take steps to mitigate the risk - what happens if part of your infrastructure gets compromised? Security is all about layers.

For most small to mid-size websites, the reality is that a determined attacker can probably get in if they want to. Nothing is 100% secure, but you need to take reasonable steps to protect yourself and harden your site against attack. Even large websites and companies with huge teams dedicated to security get compromised from time to time (Google, Adobe, Target etc...).

Ask yourself: what happens if someone does get into part of your system? What's the worst that can happen? If for any one component of your infrastructure the answer is "we're completely screwed", then take steps to fix it and make it harder for your business to get wiped out. The answer needs to be more like "if A happened and B happened and C happened and D happened and E happened and F happened, then we'd be totally screwed".

Different applications require different degrees of redundancy and security, and building all of this costs time and money. You need to figure out what makes sense for your business.

If Codespaces had asked themselves this simple question, they could have taken steps to mitigate the risk by having a backup at a different provider.

Incidentally, Codespaces could have suffered the same loss of data through operator error (ie accidentally deleting something from the AWS account), a massive AWS failure or a disgruntled employee. That's far too many ways to completely wipe out their business.

Years ago at university, I remember a lecturer talking about how the digitization of information makes it not only more accessible, but also easier to destroy. He used the analogy of a pallet full of papers vs a CD-ROM (back before the days of flash storage :)). Destroying a stack of papers takes effort. Sure, you could rip them, burn them, shred them, blow them up. But any of those options requires a decent amount of time, effort or equipment to execute. Compare that with destroying the same amount of data on a CD, which you could just snap with your hands in less than a second. The cloud exacerbates this problem. Now it's possible to destroy a pallet load of CDs' worth of data in seconds.

Lesson 4 - Follow Best Practices

There are plenty of resources on best practices for security. It's hard to know where to begin when analyzing Codespaces' security failures, but one thing that likely wasn't in place, and should have been, was multi-factor authentication. AWS has it, Azure has it, Google has it, Facebook has it, Twitter has it. Pretty much all major sites have it these days and encourage people to use it even for personal accounts.

If thousands of customers are trusting you with their data, do yourself a favor and enable it. It probably would have saved Codespaces' ass.
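On AWS specifically, attaching a virtual MFA device to an IAM user is a couple of CLI calls. This is a hypothetical sketch - the user name, device name, account number and authentication codes are all placeholders, and `DRY_RUN` defaults to printing the commands:

```shell
#!/bin/sh
# Hypothetical sketch: create a virtual MFA device and attach it to an IAM
# user with the AWS CLI. User name, device name, account ID and codes are
# placeholders. DRY_RUN=1 (default) prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

enable_mfa() {
  # Creates the device and writes a QR code to scan with an authenticator app.
  run aws iam create-virtual-mfa-device --virtual-mfa-device-name admin-mfa \
      --outfile qr.png --bootstrap-method QRCodePNG
  # The two codes are consecutive values from the authenticator app after
  # scanning the QR code.
  run aws iam enable-mfa-device --user-name admin \
      --serial-number arn:aws:iam::123456789012:mfa/admin-mfa \
      --authentication-code1 123456 --authentication-code2 654321
}

enable_mfa
```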

Conclusion

Just because your data is in the cloud doesn't mean it's safe. As an individual or a business, never trust your data to a single provider, make sure you've got backups in multiple places. As a business, think about the worst case scenario and make sure it's not too easy to have your data (and consequently your business) wiped out. Security can be complicated and expensive, but there are a lot of simple, cheap things you can do to make your app more secure. If Codespaces had followed these, they'd still be in business.