How to Deploy a Ceph Storage to Bare Virtual Machines

Ceph is a freely available storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block- and file-level storage. Ceph aims primarily for completely distributed operation without a single point of failure. Ceph storage manages data replication and is generally quite fault-tolerant. As a result of its design, the system is both self-healing and self-managing.

Ceph has loads of benefits and great features, but the main drawback is that you have to host and manage it yourself. In this post, we’ll check two different approaches of virtual machine deployment with Ceph.

Anatomy of a Ceph cluster

Before we dive into the actual deployment process, let’s see what we’ll need to fire up for our own Ceph cluster.

There are three services that form the backbone of the cluster

  • ceph monitors (ceph-mon) maintain maps of the cluster state and are also responsible for managing authentication between daemons and clients
  • managers (ceph-mgr) are responsible for keeping track of runtime metrics and the current state of the Ceph cluster
  • object storage daemons (ceph-osd) store data, handle data replication, recovery, rebalancing, and provide some ceph monitoring information.

Additionally, we can add further parts to the cluster to support different storage solutions

  • metadata servers (ceph-mds) store metadata on behalf of the Ceph Filesystem
  • rados gateway (ceph-rgw) is an HTTP server for interacting with a Ceph Storage Cluster that provides interfaces compatible with OpenStack Swift and Amazon S3.

There are multiple ways of deploying these services. We’ll check two of them:

  • first, using the ceph-deploy tool,
  • then a Docker Swarm based VM deployment.

Let’s kick it off!

Ceph Setup

Okay, a disclaimer first. As this is not a production infrastructure, we’ll cut a couple of corners.

You should not run multiple different Ceph daemons on the same host, but for the sake of simplicity, we’ll only use 3 virtual machines for the whole cluster.

In the case of OSDs, you can run multiple of them on the same host, but using the same storage drive for multiple instances is a bad idea as the disk’s I/O speed might limit the OSD daemons’ performance.

For this tutorial, I’ve created 4 EC2 machines in AWS: 3 for Ceph itself and 1 admin node. For ceph-deploy to work, the admin node requires passwordless SSH access to the nodes and that SSH user has to have passwordless sudo privileges.

In my case, as all machines are in the same subnet on AWS, connectivity between them is not an issue. However, in other cases editing the hosts file might be necessary to ensure proper connection.

Depending on where you deploy Ceph, security groups, firewall settings or other resources have to be adjusted to open these ports (see the firewall sketch after the list):

  • 22 for SSH
  • 6789 for monitors
  • 6800:7300 for OSDs, managers and metadata servers
  • 8080 for dashboard
  • 7480 for rados gateway
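
If you manage the firewall directly on the hosts, a minimal sketch with ufw could look like the following (assuming Ubuntu machines with ufw available; on AWS you would add the equivalent rules to the security group instead):

$ sudo ufw allow 22/tcp        # SSH
$ sudo ufw allow 6789/tcp      # monitors
$ sudo ufw allow 6800:7300/tcp # OSDs, managers, metadata servers
$ sudo ufw allow 8080/tcp      # dashboard
$ sudo ufw allow 7480/tcp      # rados gateway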

Without further ado, let’s start deployment.

Ceph Storage Deployment

Install prerequisites on all machines

$ sudo apt update
$ sudo apt -y install ntp python

For Ceph to work seamlessly, we have to make sure the system clocks are not skewed. The suggested solution is to install ntp on all machines and it will take care of the problem. While we’re at it, let’s install python on all hosts as ceph-deploy depends on it being available on the target machines.

Prepare the admin node

$ ssh-add ~/.ssh/id_rsa
$ ssh -i ~/.ssh/id_rsa -A ubuntu@13.53.36.123

As all the machines have my public key added to their authorized_keys thanks to AWS, I can use ssh agent forwarding to access the Ceph machines from the admin node. The first line ensures that my local ssh agent has the proper key loaded, and the -A flag takes care of forwarding it.

$ wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
$ echo deb https://download.ceph.com/debian-nautilus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
$ sudo apt update
$ sudo apt -y install ceph-deploy

We’ll use the latest nautilus release in this example. If you want to deploy a different version, just change the debian-nautilus part to your desired release (luminous, mimic, etc.).

$ echo "StrictHostKeyChecking no" | sudo tee -a /etc/ssh/ssh_config > /dev/null

OR

$ ssh-keyscan -H 10.0.0.124,10.0.0.216,10.0.0.104 >> ~/.ssh/known_hosts

Ceph-deploy uses SSH connections to manage the nodes we provide. Each time you SSH to a machine that is not in the list of known_hosts (~/.ssh/known_hosts), you’ll get prompted whether you want to continue connecting or not. This interruption does not mesh well with the deployment process, so we either have to use ssh-keyscan to grab the fingerprint of all the target machines or disable the strict host key checking outright.

10.0.0.124 ip-10-0-0-124.eu-north-1.compute.internal ip-10-0-0-124
10.0.0.216 ip-10-0-0-216.eu-north-1.compute.internal ip-10-0-0-216
10.0.0.104 ip-10-0-0-104.eu-north-1.compute.internal ip-10-0-0-104

Even though the target machines are in the same subnet as our admin and they can access each other, we have to add them to the hosts file (/etc/hosts) for ceph-deploy to work properly. Ceph-deploy creates monitors by the provided hostname, so make sure it matches the actual hostname of the machines otherwise the monitors won’t be able to join the quorum and the deployment fails. Don’t forget to reboot the admin node for the changes to take effect.

$ mkdir ceph-deploy
$ cd ceph-deploy

As a final step of the preparation, let’s create a dedicated folder as ceph-deploy will create multiple config and key files during the process.

Deploy resources

$ ceph-deploy new ip-10-0-0-124 ip-10-0-0-216 ip-10-0-0-104

The command ceph-deploy new creates the necessary files for the deployment. Pass it the hostnames of the monitor nodes, and it will create ceph.conf and ceph.mon.keyring along with a log file.

The ceph.conf should look something like this:

[global]
fsid = 0572e283-306a-49df-a134-4409ac3f11da
mon_initial_members = ip-10-0-0-124, ip-10-0-0-216, ip-10-0-0-104
mon_host = 10.0.0.124,10.0.0.216,10.0.0.104
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

It has a unique ID called fsid, the monitor hostnames and addresses and the authentication modes. Ceph provides two authentication modes: none (anyone can access data without authentication) or cephx (key based authentication).

The other file, the monitor keyring is another important piece of the puzzle, as all monitors must have identical keyrings in a cluster with multiple monitors. Luckily ceph-deploy takes care of the propagation of the key file during virtual deployments.

$ ceph-deploy install --release nautilus ip-10-0-0-124 ip-10-0-0-216 ip-10-0-0-104

As you might have noticed so far, we haven’t installed ceph on the target nodes yet. We could do that one-by-one, but a more convenient way is to let ceph-deploy take care of the task. Don’t forget to specify the release of your choice, otherwise you might run into a mismatch between your admin and targets.

$ ceph-deploy mon create-initial

Finally, the first piece of the cluster is up and running! create-initial will deploy the monitors specified in ceph.conf we generated previously and also gather various key files. The command will only complete successfully if all the monitors are up and in the quorum.

$ ceph-deploy admin ip-10-0-0-124 ip-10-0-0-216 ip-10-0-0-104

Executing ceph-deploy admin will push a Ceph configuration file and the ceph.client.admin.keyring to the /etc/ceph directory of the nodes, so we can use the ceph CLI without having to provide the ceph.client.admin.keyring each time to execute a command.

At this point, we can take a peek at our cluster. Let’s SSH into a target machine (we can do it directly from the admin node thanks to agent forwarding) and run sudo ceph status.

$ sudo ceph status
  cluster:
    id:     0572e283-306a-49df-a134-4409ac3f11da
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ip-10-0-0-104,ip-10-0-0-124,ip-10-0-0-216 (age 110m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

Here we get a quick overview of what we have so far. Our cluster seems to be healthy and all three monitors are listed under services. Let’s go back to the admin and continue adding pieces.

$ ceph-deploy mgr create ip-10-0-0-124

For luminous+ builds, a manager daemon is required. It’s responsible for monitoring the state of the cluster and also manages modules/plugins.

Okay, now we have all the management in place, let’s add some storage to the cluster to make it actually useful, shall we?

First, we have to find out (on each target machine) the label of the drive we want to use. To fetch the list of available disks on a specific node, run

$ ceph-deploy disk list ip-10-0-0-104

Here’s a sample output:

ceph storage deploy sample output

$ ceph-deploy osd create --data /dev/nvme1n1 ip-10-0-0-124
$ ceph-deploy osd create --data /dev/nvme1n1 ip-10-0-0-216
$ ceph-deploy osd create --data /dev/nvme1n1 ip-10-0-0-104

In my case the label was nvme1n1 on all 3 machines (courtesy of AWS), so to add OSDs to the cluster I just ran these 3 commands.

At this point, our cluster is basically ready. We can run ceph status to see that our monitors, managers and OSDs are up and running. But nobody wants to SSH into a machine every time to check the status of the cluster. Luckily there’s a pretty neat dashboard that comes with Ceph, we just have to enable it.

…Or at least that’s what I thought. The dashboard was introduced in luminous release and was further improved in mimic. However, currently we’re deploying nautilus, the latest version of Ceph. After trying the usual way of enabling the dashboard via a manager

$ sudo ceph mgr module enable dashboard

we get an error message saying Error ENOENT: all mgr daemons do not support module 'dashboard', pass --force to force enablement.

Turns out, in nautilus the dashboard package is no longer installed by default. We can check the available modules by running

$ sudo ceph mgr module ls

and as expected, dashboard is not there; it comes in the form of a separate package. So we have to install it first, and luckily it’s pretty easy.

$ sudo apt install -y ceph-mgr-dashboard

Now we can enable it, right? Not so fast. There’s a dependency that has to be installed on all manager hosts, otherwise we get a slightly cryptic error message saying Error EIO: Module 'dashboard' has experienced an error and cannot handle commands: No module named routes.

$ sudo apt install -y python-routes

We’re all set to enable the dashboard module now. As it’s a public-facing page that requires login, we should set up a cert for SSL. For the sake of simplicity, I’ve just disabled the SSL feature. You should never do this in production, check out the official docs to see how to set up a cert properly. Also, we’ll need to create an admin user so we can log in to our dashboard.

$ sudo ceph mgr module enable dashboard
$ sudo ceph config set mgr mgr/dashboard/ssl false
$ sudo ceph dashboard ac-user-create admin secret administrator

By default, the dashboard is available on the host running the manager on port 8080. After logging in, we get an overview of the cluster status, and under the cluster menu, we get really detailed overviews of each running daemon.

ceph storage deployment dashboard
ceph cluster dashboard

If we try to navigate to the Filesystems or Object Gateway tabs, we get a notification that we haven’t configured the required resources to access these features. Our cluster can only be used as block storage right now. We have to deploy a couple of extra things to extend its usability.
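
If you’d like to try the block storage side right away, a minimal sketch could look like this (the pool and image names below are just examples, not something the article prescribes):

$ sudo ceph osd pool create rbd_pool 8
$ sudo rbd pool init rbd_pool
$ sudo rbd create rbd_pool/demo-image --size 1024   # 1 GiB image
$ sudo rbd ls rbd_pool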

Quick detour: In case you’re looking for a company that can help you with Ceph, or DevOps in general, feel free to reach out to us at RisingStack!

Using the Ceph filesystem

Going back to our admin node, running

$ ceph-deploy mds create ip-10-0-0-124 ip-10-0-0-216 ip-10-0-0-104

will create metadata servers, which will be inactive for now, as we haven’t enabled the feature yet. First, we need to create two RADOS pools, one for the actual data and one for the metadata.

$ sudo ceph osd pool create cephfs_data 8
$ sudo ceph osd pool create cephfs_metadata 8

There are a couple of things to consider when creating pools that we won’t cover here. Please consult the documentation for further details.

After creating the required pools, we’re ready to enable the filesystem feature

$ sudo ceph fs new cephfs cephfs_metadata cephfs_data

The MDS daemons will now be able to enter an active state, and we are ready to mount the filesystem. We have two options to do that, via the kernel driver or as FUSE with ceph-fuse.

Before we continue with the mounting, let’s create a user keyring that we can use in both solutions for authorization and authentication as we have cephx enabled. There are multiple restrictions that can be set up when creating a new key specified in the docs. For example:

$ sudo ceph auth get-or-create client.user mon 'allow r' mds 'allow r, allow rw path=/home/cephfs' osd 'allow rw pool=cephfs_data' -o /etc/ceph/ceph.client.user.keyring

will create a new client key with the name user and output it into ceph.client.user.keyring. It will provide write access for the MDS only to the /home/cephfs directory, and the client will only have write access within the cephfs_data pool.

Mounting with the kernel

Now let’s create a dedicated directory and then use the key from the previously generated keyring to mount the filesystem with the kernel.

$ sudo mkdir /mnt/mycephfs
$ sudo mount -t ceph 13.53.114.94:6789:/ /mnt/mycephfs -o name=user,secret=AQBxnDFdS5atIxAAV0rL9klnSxwy6EFpR/EFbg==

Attaching with FUSE

Mounting the filesystem with FUSE is not much different either. It requires installing the ceph-fuse package.

$ sudo apt install -y ceph-fuse

Before we run the command, we have to retrieve the ceph.conf and ceph.client.user.keyring files from the Ceph host and put them in /etc/ceph. The easiest solution is to use scp.

$ sudo scp ubuntu@13.53.114.94:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
$ sudo scp ubuntu@13.53.114.94:/etc/ceph/ceph.client.user.keyring /etc/ceph/ceph.keyring

Now we are ready to mount the filesystem.

$ sudo mkdir cephfs
$ sudo ceph-fuse -m 13.53.114.94:6789 cephfs

Using the RADOS gateway

To enable the S3 management feature of the cluster, we have to add one final piece, the rados gateway.

$ ceph-deploy rgw create ip-10-0-0-124

For the dashboard, we need to create a user with the system flag via radosgw-admin to enable the Object Storage management interface. We also have to provide the user’s access_key and secret_key to the dashboard before we can start using it.

$ sudo radosgw-admin user create --uid=rg_wadmin --display-name=rgw_admin --system
$ sudo ceph dashboard set-rgw-api-access-key <access_key>
$ sudo ceph dashboard set-rgw-api-secret-key <secret_key>

Using the Ceph Object Storage is really easy, as RGW provides an interface identical to S3. You can use your existing S3 requests and code without any modifications; you just have to change the connection string, access key, and secret key.
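
For instance, with the AWS CLI a quick smoke test could look something like this (assuming 13.53.114.94 is the public address of the node running the gateway and demo-bucket is just an example name; configure the CLI with the access_key and secret_key printed by radosgw-admin above):

$ aws configure   # paste the RGW user's access_key and secret_key
$ aws --endpoint-url http://13.53.114.94:7480 s3 mb s3://demo-bucket
$ aws --endpoint-url http://13.53.114.94:7480 s3 cp ./hello.txt s3://demo-bucket/hello.txt
$ aws --endpoint-url http://13.53.114.94:7480 s3 ls s3://demo-bucket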

Ceph Storage Monitoring

The dashboard we’ve deployed shows a lot of useful information about our cluster, but monitoring is not its strongest suit. Luckily Ceph comes with a Prometheus module. After enabling it by running:

$ sudo ceph mgr module enable prometheus

a wide variety of metrics will become available on the given host on port 9283 by default. To make use of this exposed data, we’ll have to set up a Prometheus instance.

I strongly suggest running the following containers on a separate machine from your Ceph cluster. In case you are just experimenting (like me) and don’t want to use a lot of VMs, make sure you have enough memory and CPU left on your virtual machine before firing up docker, as it can lead to strange behaviour and crashes if it runs out of resources.

There are multiple ways of firing up Prometheus, probably the most convenient is with docker. After installing docker on your machine, create a prometheus.yml file to provide the endpoint where it can access our Ceph metrics.

# /etc/prometheus.yml

scrape_configs:
  - job_name: 'ceph'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['13.53.114.94:9283']

Then launch the container itself by running:

$ sudo docker run -p 9090:9090 -v /etc/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus

Prometheus will start scraping our data, and it will show up on its dashboard. We can access it on port 9090 on its host machine. The Prometheus UI is great for ad-hoc queries, but it does not provide very eye-pleasing dashboards. That’s the main reason why it’s usually paired with Grafana, which provides awesome visualizations for the data collected by Prometheus. It can be launched with docker as well.

$ sudo docker run -d -p 3000:3000 grafana/grafana

Grafana is fantastic when it comes to visualizations, but setting up dashboards can be a daunting task. To make our lives easier, we can load one of the pre-prepared dashboards, for example this one.

ceph storage grafana monitoring

Ceph Deployment: Lessons Learned & Next Up

CEPH can be a great alternative to AWS S3 or other object storages when running your service in the public cloud is simply not an option and you operate in the private cloud. The fact that it provides an S3-compatible interface makes it a lot easier to port other tools that were written with a “cloud first” mentality. It also plays nicely with Prometheus, thus you don’t need to worry about setting up proper monitoring for it, or you can swap it for a simpler, more battle-hardened solution such as Nagios.

In this article, we deployed CEPH to bare virtual machines, but you might need to integrate it into your Kubernetes or Docker Swarm cluster. While it is perfectly fine to install it on VMs next to your container orchestration tool, you might want to leverage the services they provide when you deploy your CEPH cluster. If that is your use case, stay tuned for our next post covering CEPH where we’ll take a look at the black magic required to use CEPH on Docker Swarm and Kubernetes.

In the next CEPH tutorial which we’ll release next week, we’re going to take a look at valid ceph storage alternatives with Docker or with Kubernetes.

PS: Feel free to reach out to us at RisingStack in case you need help with Ceph or Ops in general!

Getting Started with Ansible Tutorial – Automate your Infrastructure

This Ansible tutorial teaches the basics of our favorite open-source software provisioning, configuration management, and application-deployment tool.

First, we’ll discuss the Infrastructure as Code concept, and we’ll also take a thorough look at the currently available IaC tool landscape. Then, we’ll dive deep into what is Ansible, how it works, and what are the best practices for its installation and configuration.

You’ll also learn how to automate your infrastructure with Ansible in an easy way through a Raspberry Pi fleet management example.

Okay, let’s start with understanding the IaC Concept!

What is Infrastructure as Code?

Since the dawn of complex Linux server architectures, the way of configuring servers was either by using the command line, or by using bash scripts. However, the problem with bash scripts is that they are quite difficult to read, but more importantly, using bash scripts is a completely imperative way of working.

When relying on bash scripts, implementation details or small differences between machine states can break the configuration process. There’s also the question of what happens if someone SSH-s into the server, configures something through the command line, and then later someone tries to run a script that expects the old state.

The script might run successfully, simply break, or things could completely go haywire. No one can tell.

To alleviate the pain caused by the drawbacks of defining our server configurations by bash scripts, we needed a declarative way to apply idempotent changes to the servers’ state, meaning that it does not matter how many times we run our script, it should always result in reaching the exact same expected state.

This is the idea behind the Infrastructure as Code (IaC) concept: handling the state of infrastructure through idempotent changes, defined with an easily readable, domain-specific language.

What are these declarative approaches?

First, Puppet was born, then came Chef. Both of them were responses to the widespread adoption of using clusters of virtual machines that need to be configured together.

Both Puppet and Chef follow the so-called “pull-based” method of configuration management. This means that you define the configuration, using their respective domain-specific language, and store it on a server. When new machines are spun up, they need to have a configured client that pulls the configuration definitions from the server and applies them to itself.

Using their domain-specific language was definitely clearer and more self-documenting than writing bash scripts. It is also convenient that they apply the desired configuration automatically after spinning up the machines.

However, one could argue that the need for a preconfigured client makes them a bit clumsy. Also, the configuration of these clients is still quite complex, and if the master node which stores the configurations is down, all we can do is to fall back to the old command line / bash script method if we need to quickly update our servers.

ansible tutorial for beginners by risingstack

To avoid a single point of failure, Ansible was created.

Ansible, like Puppet and Chef, sports a declarative, domain-specific language, but in contrast to them, Ansible follows a “push-based” method. That means that as long as you have Python installed, and you have an SSH server running on the hosts you wish to configure, you can run Ansible with no problem. We can safely say that expecting SSH connectivity from a server is definitely not inconceivable.

Long story short, Ansible gives you a way to push your declarative configuration to your machines.

Later came SaltStack. It also follows the push-based approach, but it comes with a lot of added features and, with them, a lot of added complexity, both usage- and maintenance-wise.

Thus, while Ansible is definitely not the most powerful of the four most common solutions, it is hands down the easiest to get started with, and it should be sufficient to cover 99% of conceivable use-cases.

If you’re just getting started in the world of IaC, Ansible should be your starting point, so let’s stick with it for now.

Other IaC tools you should know about

While the four mentioned above (Puppet, Chef, Salt, Ansible) handle the configuration of individual machines in bulk, there are other IaC tools that can be used in conjunction with them. Let’s quickly list them for the sake of completeness, and so that you don’t get lost in the landscape.

Vagrant: It has been around for quite a while. Contrary to Puppet, Chef, Ansible, and Salt, Vagrant gives you a way to create blueprints of virtual machines. This also means that you can only create VMs using Vagrant, but you cannot modify them. So it can be a useful companion to your favorite configuration manager, to set up either its client or the SSH server to get the machines started.

Terraform: Vagrant comes in handy before you can use Ansible, if you maintain your own fleet of VMs. If you’re in the cloud, Terraform can be used to declaratively provision VMs, set up networks, or basically anything you can handle with the UI, API, or CLI of your favorite cloud provider. Feature support may vary, depending on the actual provider, and most providers come with their own IaC solutions as well, but if you prefer not to be locked in to a platform, Terraform might be the best solution to go with.

Kubernetes: Container orchestration systems are considered Infrastructure as Code as well, especially because with Kubernetes you have control over the internal network, the containers, and a lot of aspects of the actual machines; basically, it’s more like an OS in its own right than anything else. However, it requires you to have a running cluster of VMs with Kubernetes installed and configured.

All in all, you can use either Vagrant or Terraform to lay the groundwork for your fleet of VMs, then use Ansible, Puppet, Chef or Salt to handle their configuration continuously. Finally, Kubernetes can give you a way to orchestrate your services on them.

Are you looking for expert help with infrastructure related issues or project? Check out our DevOps and Infrastructure related services, or reach out to us at info@risingstack.com.

We’ve previously written a lot about Kubernetes, so this time we’ll take one step back and take a look at our favorite remote configuration management tool:

What is Ansible?

Let’s take apart what we already know:

Ansible is a push-based IaC, providing a user-friendly domain-specific language so you can define your desired architecture in a declarative way.

Being push-based means that Ansible uses SSH for communicating between the machine that runs Ansible and the machines the configuration is being applied to.

The machines we wish to configure using Ansible are called managed nodes or Ansible hosts. In Ansible’s terminology, the list of hosts is called an inventory.

The machine that reads the definition files and runs Ansible to push the configuration to the hosts is called a control node.

How to Install Ansible

It is enough to install Ansible only on one machine, the control node.

Control node requirements are the following:

  • Python 2 (version 2.7) or Python 3 (versions 3.5 and higher) installed
  • Windows is not supported as a control node, but you can set it up on Windows 10 using WSL

Managed nodes also need Python to be installed.

RHEL and CentOS

sudo yum install ansible

Debian based distros and WSL

sudo apt update
sudo apt install software-properties-common
sudo apt-add-repository --yes --update ppa:ansible/ansible
sudo apt install ansible

MacOS

The preferred way to install Ansible on a Mac is via pip.

pip install --user ansible

Run the following Ansible command to verify the installation:

ansible --version

Ansible Setup, Configuration, and Automation Tutorial

For the purposes of this tutorial, we’ll set up a Raspberry Pi with Ansible, so even if the SD card gets corrupted, we can quickly set it up again and continue working with it.

  1. Flash image (Raspbian)
  2. Login with default credentials (pi/raspberry)
  3. Change default password
  4. Set up passwordless SSH
  5. Install packages you want to use

With Ansible, we can automate the process.

Let’s say we have a couple of Raspberry Pis, and after installing the operating system on them, we need the following packages to be installed on all devices:

  • vim
  • wget
  • curl
  • htop

We could install these packages one by one on every device, but that would be tedious. Let Ansible do the job instead.

First, we’ll need to create a project folder.

mkdir bootstrap-raspberry && cd bootstrap-raspberry

We need a config file and a hosts file. Let’s create them.

touch ansible.cfg
touch hosts    # no file extension needed

Ansible can be configured using a config file named ansible.cfg. You can find an example with all the options here.

Security risk: if you load ansible.cfg from a world-writable folder, another user could place their own config file there and run malicious code. More about that here.

The configuration file will be searched for in the following order:

  1. ANSIBLE_CONFIG (environment variable if set)
  2. ansible.cfg (in the current directory)
  3. ~/.ansible.cfg (in the home directory)
  4. /etc/ansible/ansible.cfg

So if we have an ANSIBLE_CONFIG environment variable set, Ansible will ignore all the other files (2, 3, 4). On the other hand, if we don’t specify a config file, /etc/ansible/ansible.cfg will be used.

Now we’ll use a very simple config file with contents below:

[defaults]
inventory = hosts
host_key_checking = False

Here we tell Ansible that we use our hosts file as the inventory, and not to check host keys. Ansible has host key checking enabled by default. If a host is reinstalled and has a different key in the known_hosts file, this will result in an error message until it is corrected. If a host is not initially in known_hosts, this will result in an interactive prompt for confirmation, which is not favorable if you want to automate your processes.

Now let’s open up the hosts file:

[raspberries]
192.168.0.74
192.168.0.75
192.168.0.76


[raspberries:vars]
ansible_connection=ssh  
ansible_user=pi 
ansible_ssh_pass=raspberry

We list the IP addresses of the Raspberry Pis under the [raspberries] block and then assign variables to them.

  • ansible_connection: Connection type to the host. Defaults to ssh. See other connection types here
  • ansible_user: The user name to use when connecting to the host
  • ansible_ssh_pass: The password to use to authenticate to the host

Creating an Ansible Playbook

Now we’re done with the configuration of Ansible. We can start setting up the tasks we would like to automate. Ansible calls the list of these tasks “playbooks”.

In our case, we want to:

  1. Change the default password,
  2. Add our SSH public key to authorized_keys,
  3. Install a few packages.

Meaning, we’ll have 3 tasks in our playbook that we’ll call pi-setup.yml.

By default, Ansible will attempt to run a playbook on all hosts in parallel, but the tasks in the playbook are run serially, one after another.

Let’s take a look at our pi-setup.yml as an example:

- hosts: all
  become: 'yes'
  vars:
    user:
      - name: "pi"
        password: "secret"
        ssh_key: "ssh-rsa …"
    packages:
      - vim
      - wget
      - curl
      - htop
  tasks:
    - name: Change password for default user
      user:
        name: "{{ item.name }}"
        password: "{{ item.password | password_hash('sha512') }}"
        state: present
      loop: "{{ user }}"
    - name: Add SSH public key
      authorized_key:
        user: "{{ item.name }}"
        key: "{{ item.ssh_key }}"
      loop: "{{ user }}"
    - name: Ensure a list of packages installed
      apt:
        name: "{{ packages }}"
        state: present
    - name: All done!
      debug:
        msg: Packages have been successfully installed

Tearing down our Ansible Playbook Example

Let’s tear down this playbook.

- hosts: all
  become: 'yes'
  vars:
    user:
      - name: "pi"
        password: "secret"
        ssh_key: "ssh-rsa …"
    packages:
      - vim
      - wget
      - curl
      - htop
  tasks:  [ … ]

This part defines fields that are related to the whole playbook:

  1. hosts: all: Here we tell Ansible to execute this playbook on all hosts defined in our hosts file.
  2. become: yes: Execute commands as sudo user. Ansible uses privilege escalation systems to execute tasks with root privileges or with another user’s permissions. This lets you become another user, hence the name.
  3. vars: User defined variables. Once you’ve defined variables, you can use them in your playbooks using the Jinja2 templating system. There are other sources vars can come from, such as variables discovered from the system. These variables are called facts.
  4. tasks: List of commands we want to execute

Let’s take another look at the first task we defined earlier without addressing the user modules’ details. Don’t fret if it’s the first time you hear the word “module” in relation to Ansible, we’ll discuss them in detail later.

tasks:
  - name: Change password for default user
    user:
      name: "{{ item.name }}"
      password: "{{ item.password | password_hash('sha512') }}"
      state: present
    loop: "{{ user }}"
  1. name: Short description of the task making our playbook self-documenting.
  2. user: The module the task at hand configures and runs. Each module is an object encapsulating a desired state. These modules can control system resources, services, files or basically anything. For example, the documentation for the user module can be found here. It is used for managing user accounts and user attributes.
  3. loop: Loop over variables. If you want to repeat a task multiple times with different inputs, loops come in handy. Let’s say we have 100 users defined as variables and we’d like to register them. With loops, we don’t have to run the playbook 100 times, just once.

Understanding the Ansible User Module

Zooming in on the user module:

user:
  name: "{{ item.name }}"
  password: "{{ item.password | password_hash('sha512') }}"
  state: present
loop: "{{ user }}"

Ansible comes with a number of modules, and each module encapsulates logic for a specific task/service. The user module above defines a user and its password. It doesn’t matter if it has to be created or if it’s already present and only its password needs to be changed, Ansible will handle it for us.

Note that Ansible will only accept hashed passwords, so either you provide a pre-hashed string or, as above, use a hashing filter.

Are you looking for expert help with infrastructure related issues or project? Check out our DevOps and Infrastructure related services, or reach out to us at info@risingstack.com.

For the sake of simplicity, we stored our user’s password in our example playbook, but you should never store passwords in playbooks directly. Instead, you can pass variables as flags when running the playbook from the CLI, or use a password store such as Ansible Vault or the 1Password module.
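
As a rough sketch of the Ansible Vault route (the plain-text value below is only a placeholder), you can encrypt a single value and paste the output into your vars instead:

$ ansible-vault encrypt_string 'superSecretPassword' --name 'password'

The command prompts for a vault password and prints an !vault | block that can replace the plain-text value in the playbook; running the playbook afterwards with ansible-playbook --ask-vault-pass pi-setup.yml decrypts it on the fly.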

Most modules expose a state parameter, and it is best practice to explicitly define it when possible. State defines whether the module should make something present (add, start, execute) or absent (remove, stop, purge). E.g. create or remove a user, or start/stop/delete a Docker container.
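
For example, the same user module with state set to absent would remove an account instead (the account name here is purely hypothetical):

- name: Remove an old user account
  user:
    name: olduser        # hypothetical account name
    state: absent
    remove: 'yes'        # also delete the home directory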

Notice that the user module will be called at each iteration of the loop, passing in the current value of the user variable. The loop is not part of the module; it’s on the outer indentation level, meaning it’s task-related.

The Authorized Keys Module

The authorized_key module adds or removes SSH authorized keys for a particular user’s account, thus enabling passwordless SSH connections.

- name: Add SSH public key
  authorized_key:
    user: "{{ item.name }}"
    key: "{{ item.ssh_key }}"

The task above takes the specified key and adds it to the specified user’s ~/.ssh/authorized_keys file, just as you would either by hand or by using ssh-copy-id.

The Apt module

We need a new vars block for the packages to be installed.

vars:
  packages:
    - vim
    - wget
    - curl
    - htop

tasks:
  - name: Ensure a list of packages installed
    apt:
      name: "{{ packages }}"
      state: present

The apt module manages apt packages (such as on Debian/Ubuntu). The name field can take a list of packages to be installed. Here, we define a variable to store the list of desired packages to keep the task cleaner, and this also gives us the ability to overwrite the package list with command-line arguments if we feel it necessary when we apply the playbook, without editing the actual playbook.

The state field is set to present, meaning that Ansible should install the package if it’s missing, or skip it if it’s already present. In other words, it ensures that the package is present. It could also be set to absent (ensure that it’s not there), latest (ensure that it’s there and it’s the latest version), build-dep (ensure that its build dependencies are present), or fixed (attempt to correct a system with broken dependencies in place).

Let’s run our Ansible Playbook

Just to reiterate, here is the whole playbook together:

- hosts: all
  become: 'yes'
  vars:
    user:
      - name: "pi"
        password: "secret"
        ssh_key: "ssh-rsa …"
    packages:
      - vim
      - wget
      - curl
      - htop
  tasks:
    - name: Change password for default user
      user:
        name: "{{ item.name }}"
        password: "{{ item.password | password_hash('sha512') }}"
        state: present
      loop: "{{ user }}"
    - name: Add SSH public key
      authorized_key:
        user: "{{ item.name }}"
        key: "{{ item.ssh_key }}"
      loop: "{{ user }}"
    - name: Ensure a list of packages installed
      apt:
        name: "{{ packages }}"
        state: present
    - name: All done!
      debug:
        msg: Packages have been successfully installed

Now we’re ready to run the playbook:

ansible-playbook pi-setup.yml

Or we can run it while overriding the config file and some of the variables:

$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i "192.168.0.74, 192.168.0.75" \
    -e "ansible_user=john ansible_ssh_pass=johnspassword" \
    -e '{"user": [{ "name": "pi", "password": "raspberry", "state": "present" }] }' \
    -e '{"packages": ["curl", "wget"]}' \
    pi-setup.yml

The command-line flags used in the snippet above are:

  • -i (inventory): specifies the inventory. It can either be a comma-separated list as above, or an inventory file.
  • -e (or --extra-vars): variables can be added or overridden through this flag. In our case, we are overriding the configuration laid out in our hosts file (ansible_user, ansible_ssh_pass) and the variables user and packages that we previously set up in our playbook.

What to use Ansible for

Of course, Ansible is not used solely for setting up home-made servers.

Ansible is used to manage VM fleets in bulk, making sure that each newly created VM has the same configuration as the others. It also makes it easy to change the configuration of the whole fleet together by applying a change to just one playbook.

But Ansible can be used for a plethora of other tasks as well. If you have just a single server running in a cloud provider, you can define its configuration in a way that others can read and use easily. You can also define maintenance playbooks as well, such as creating new users and adding the SSH key of new employees to the server, so they can log into the machine as well.

Or you can use AWX or Ansible Tower to create a GUI based Linux server management system that provides a similar experience to what Windows Servers provide.

Stay tuned, and make sure to reach out to us in case you’re looking for DevOps, SRE, or Cloud Consulting services!

D3.js Bar Chart Tutorial: Build Interactive JavaScript Charts and Graphs

Recently, we had the pleasure to participate in a machine learning project that involved libraries like React and D3.js. Among many tasks, I developed a few d3 bar charts and line charts that helped to process the result of ML models like Naive Bayes.

In this article, I would like to present my progress with D3.js so far and show the basic usage of this javascript chart library through the simple example of a bar chart.

After reading this article, you’ll learn how to create D3.js charts like this easily:

d3-js-tutorial-bar-chart-made-with-javascript-small

The full source code is available here.

We at RisingStack are fond of the JavaScript ecosystem, backend, and front-end development as well. Personally, I am interested in both of them. On the backend, I can see through the underlying business logic of an application while I also have the opportunity to create awesome looking stuff on the front-end. That’s where D3.js comes into the picture!

Update: a 2nd part of my d3.js tutorial series is available as well: Building a D3.js Calendar Heatmap (to visualize StackOverflow Usage Data)

What is D3.js?

D3.js is a data driven JavaScript library for manipulating DOM elements.

“D3 helps you bring data to life using HTML, SVG, and CSS. D3’s emphasis on web standards gives you the full capabilities of modern browsers without tying yourself to a proprietary framework, combining powerful visualization components and a data-driven approach to DOM manipulation.” – d3js.org

Why would You create charts with D3.js in the first place? Why not just display an image?

Well, charts are based on information coming from third-party resources, which requires dynamic visualization during render time. Also, SVG is a very powerful tool that fits this use case well.

Let’s take a detour to see what benefits we can get from using SVG.

The benefits of SVG

SVG stands for Scalable Vector Graphics which is technically an XML based markup language.

It is commonly used to draw vector graphics, specify lines and shapes or modify existing images. You can find the list of available elements here.

Pros:

  • Supported in all major browsers;
  • It has DOM interface, requires no third-party lib;
  • Scalable, it can maintain high resolution;
  • Reduced size compared to other image formats.

Cons:

  • It can only display two-dimensional images;
  • Long learning curve;
  • Render may take long with compute-intensive operations.

Despite its downsides, SVG is a great tool to display icons, logos, illustrations or in this case, charts.

Getting started with D3.js

I picked bar charts to get started because they represent a low complexity visual element while teaching the basic application of D3.js itself. This should not deceive You, D3 provides a great set of tools to visualize data. Check out its github page for some really nice use cases!

A bar chart can be horizontal or vertical based on its orientation. I will go with the vertical one in the form of a JavaScript Column chart.

On this diagram, I am going to display the top 10 most loved programming languages based on Stack Overflow’s 2018 Developer Survey result.

How to draw bar graphs with SVG?

SVG has a coordinate system that starts from the top left corner (0;0). Positive x-axis goes to the right, while the positive y-axis heads to the bottom. Thus, the height of the SVG has to be taken into consideration when it comes to calculating the y coordinate of an element.

d3-js-bar-chart-tutorial-javascript-coordinate

That’s enough background check, let’s write some code!

I want to create a chart with 1000 pixels width and 600 pixels height.

<body>
	<svg />
</body>
<script>
    const margin = 60;
    const width = 1000 - 2 * margin;
    const height = 600 - 2 * margin;

    const svg = d3.select('svg');
</script>

In the code snippet above, I select the created <svg> element in the HTML file with d3 select. This selection method accepts all kinds of selector strings and returns the first matching element. Use selectAll if You would like to get all of them.

I also define a margin value which gives a little extra padding to the chart. Padding can be applied with a <g> element translated by the desired value. From now on, I draw on this group to keep a healthy distance from any other contents of the page.

const chart = svg.append('g')
    .attr('transform', `translate(${margin}, ${margin})`);

Adding attributes to an element is as easy as calling the attr method. The method’s first parameter takes an attribute I want to apply to the selected DOM element. The second parameter is the value or a callback function that returns the value of it. The code above simply moves the start of the chart to the (60;60) position of the SVG.

Supported D3.js input formats

To start drawing, I need to define the data source I’m working from. For this tutorial, I use a plain JavaScript array which holds objects with the name of the languages and their percentage rates but it’s important to mention that D3.js supports multiple data formats.
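
The array itself lives in the source code linked above; a plausible shape, matching the language and value fields used by the scales and bars later on, would be something like this (the percentages here are only approximate):

const sample = [
  { language: 'Rust', value: 78.9 },
  { language: 'Kotlin', value: 75.1 },
  { language: 'Python', value: 68.0 },
  { language: 'TypeScript', value: 67.0 },
  // ...the rest of the top 10 most loved languages
];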

The library has built-in functionality to load from XMLHttpRequest, .csv files, text files, etc. Each of these sources may contain data that D3.js can use; the only important thing is to construct an array out of them. Note that from version 5.0 the library uses promises instead of callbacks for loading data, which is a non-backward compatible change.
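
As an illustration of the promise-based loading, fetching the same data from a hypothetical languages.csv file in v5+ would look roughly like this (drawChart stands in for the rest of the rendering code):

d3.csv('languages.csv').then((data) => {
  // every field arrives as a string, so convert the rates back to numbers
  data.forEach((d) => { d.value = +d.value });
  drawChart(data);
});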

Scaling, Axes

Let’s go on with the axes of the chart. In order to draw the y-axis, I need to set the lowest and the highest value limit which in this case are 0 and 100.

I’m working with percentages in this tutorial, but there are utility functions for data types other than numbers which I will explain later.

I have to split the height of the chart between these two values into equal parts. For this, I create something that is called a scaling function.

const yScale = d3.scaleLinear()
    .range([height, 0])
    .domain([0, 100]);

Linear scale is the most commonly known scaling type. It converts a continuous input domain into a continuous output range. Notice the range and domain method. The first one takes the length that should be divided between the limits of the domain values.

Remember, the SVG coordinate system starts from the top left corner that’s why the range takes the height as the first parameter and not zero.

Creating an axis on the left is as simple as adding another group and calling d3’s axisLeft method with the scaling function as a parameter.

chart.append('g')
    .call(d3.axisLeft(yScale));

Now, continue with the x-axis.

const xScale = d3.scaleBand()
    .range([0, width])
    .domain(sample.map((s) => s.language))
    .padding(0.2)

chart.append('g')
    .attr('transform', `translate(0, ${height})`)
    .call(d3.axisBottom(xScale));

d3-js-tutorial-bar-chart-labels

Be aware that I use scaleBand for the x-axis which helps to split the range into bands and compute the coordinates and widths of the bars with additional padding.

D3.js is also capable of handling date type among many others. scaleTime is really similar to scaleLinear except the domain is here an array of dates.

Tutorial: Bar drawing in D3.js

Think about what kind of input we need to draw the bars. They each represent a value which is illustrated with simple shapes, specifically rectangles. In the next code snippet, I append them to the created group element.

chart.selectAll()
    .data(sample)
    .enter()
    .append('rect')
    .attr('x', (s) => xScale(s.language))
    .attr('y', (s) => yScale(s.value))
    .attr('height', (s) => height - yScale(s.value))
    .attr('width', xScale.bandwidth())

First, I call selectAll on the chart, which returns an empty result set. Then, the data function tells how many elements the DOM should be updated with, based on the array length. enter identifies elements that are missing if the data input is longer than the selection. This returns a new selection representing the elements that need to be added. Usually, this is followed by an append, which adds elements to the DOM.

Basically, I tell D3.js to append a rectangle for every member of the array.

Now, this only adds rectangles on top of each other which have no height or width. These two attributes have to be calculated and that’s where the scaling functions come handy again.

See, I add the coordinates of the rectangles with the attr call. The second parameter can be a callback which takes 3 parameters: the actual member of the input data, its index, and the whole input.

.attr('x', (actual, index, array) =>
    xScale(actual.language))

The scaling function returns the coordinate for a given domain value. Calculating the coordinates is a piece of cake; the trick is with the height of the bar. The computed y coordinate has to be subtracted from the height of the chart to get the correct representation of the value as a column.

I define the width of the rectangles with the scaling function as well. scaleBand has a bandwidth function which returns the computed width for one element based on the set padding.

d3-js-tutorial-bar-chart-drawn-out-with-javascript

Nice job, but not so fancy, right?

To prevent our audience from eye bleeding, let’s add some info and improve the visuals! 😉

Tips on making javascript bar charts

There are some ground rules with bar charts that are worth mentioning.

  • Avoid using 3D effects;
  • Order data points intuitively – alphabetically or sorted;
  • Keep distance between the bands;
  • Start y-axis at 0 and not with the lowest value;
  • Use consistent colors;
  • Add axis labels, title, source line.

D3.js Grid System

I want to highlight the values by adding grid lines in the background.

Go ahead and experiment with both vertical and horizontal lines, but my advice is to display only one of them. Excessive lines can be distracting. This code snippet shows how to add both solutions.

chart.append('g')
    .attr('class', 'grid')
    .attr('transform', `translate(0, ${height})`)
    .call(d3.axisBottom()
        .scale(xScale)
        .tickSize(-height, 0, 0)
        .tickFormat(''))

chart.append('g')
    .attr('class', 'grid')
    .call(d3.axisLeft()
        .scale(yScale)
        .tickSize(-width, 0, 0)
        .tickFormat(''))

d3-js-bar-chart-grid-system-added-with-javascript

I prefer the vertical grid lines in this case because they lead the eyes and keep the overall picture plain and simple.

Labels in D3.js

I also want to make the diagram more comprehensive by adding some textual guidance. Let’s give a name to the chart and add labels for the axes.

d3-js-tutorial-adding-labels-to-bar-chart-javascript

Texts are SVG elements that can be appended to the SVG or to groups. They can be positioned with x and y coordinates, while text alignment is done with the text-anchor attribute. To add the label itself, just call the text method on the text element.

svg.append('text')
    .attr('x', -(height / 2) - margin)
    .attr('y', margin / 2.4)
    .attr('transform', 'rotate(-90)')
    .attr('text-anchor', 'middle')
    .text('Love meter (%)')

svg.append('text')
    .attr('x', width / 2 + margin)
    .attr('y', 40)
    .attr('text-anchor', 'middle')
    .text('Most loved programming languages in 2018')

Interactivity with Javascript and D3

We got quite an informative chart but still, there are possibilities to transform it into an interactive bar chart!

In the next code block I show You how to add event listeners to SVG elements.

svgElement
    .on('mouseenter', function (actual, i) {
        d3.select(this).attr('opacity', 0.5)
    })
    .on('mouseleave', function (actual, i) {
        d3.select(this).attr('opacity', 1)
    })

Note that I use function expression instead of an arrow function because I access the element via this keyword.

I set the opacity of the selected SVG element to half of the original value on mouse hover and reset it when the cursor leaves the area.

You could also get the mouse coordinates with d3.mouse. It returns an array with the x and y coordinate. This way, displaying a tooltip at the tip of the cursor would be no problem at all.
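
A rough sketch of that idea, assuming an absolutely positioned div.tooltip living inside the chart’s (relatively positioned) container:

svgElement.on('mousemove', function () {
    // coordinates relative to the element the listener is attached to
    const [x, y] = d3.mouse(this)
    d3.select('.tooltip')
        .style('left', `${x + 10}px`)
        .style('top', `${y + 10}px`)
})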

Creating eye-popping diagrams is not an easy art form.

One might require the wisdom of graphic designers, UX researchers and other mighty creatures. In the following example I’m going to show a few possibilities to boost Your chart!

I have very similar values displayed on the chart, so to highlight the divergences among the bar values, I set up an event listener for the mouseenter event. Every time the user hovers over a specific column, a horizontal line is drawn on top of that bar. Furthermore, I also calculate the differences compared to the other bands and display them on the bars.

d3-js-tutorial-adding-interactivity-to-bar-chart

Pretty neat, huh? I also added the opacity example to this one and increased the width of the bar.

.on('mouseenter', function (s, i) {
    d3.select(this)
        .transition()
        .duration(300)
        .attr('opacity', 0.6)
        .attr('x', (a) => xScale(a.language) - 5)
        .attr('width', xScale.bandwidth() + 10)

    chart.append('line')
        .attr('x1', 0)
        .attr('y1', y)
        .attr('x2', width)
        .attr('y2', y)
        .attr('stroke', 'red')

    // this is only part of the implementation, check the source code
})

The transition method indicates that I want to animate changes to the DOM. Its interval is set with the duration function, which takes milliseconds as an argument. The transition above fades the band color and broadens the width of the bar.

To draw an SVG line, I need a start and a destination point. These can be set via the x1, y1 and x2, y2 coordinates. The line will not be visible until I set its color with the stroke attribute.

I only revealed part of the mouseenter handler here, so keep in mind that You have to revert or remove the changes on the mouseout event. The full source code is available at the end of the article.

Let’s Add Some Style to the Chart!

Let’s see what we achieved so far and how can we shake up this chart with some style. You can add class attributes to SVG elements with the same attr function we used before.

The diagram has a nice set of functionality. Instead of a dull, static picture, it also reveals the divergences among the represented values on mouse hover. The title puts the chart into context and the labels help to identify the axes with the unit of measurement. I also add a new label to the bottom right corner to mark the input source.

The only thing left is to upgrade the colors and fonts!

A dark background makes the brightly colored bars look cool. I also applied the Open Sans font family to all the texts, and set size and weight for the different labels.

Do You notice the line got dashed? That’s done by setting the stroke-width and stroke-dasharray attributes. With stroke-dasharray, You can define a pattern of dashes and gaps that alters the outline of the shape.

line#limit {
    stroke: #FED966;
    stroke-width: 3;
    stroke-dasharray: 3 6;
}

.grid path {
    stroke-width: 3;
}

.grid .tick line {
    stroke: #9FAAAE;
    stroke-opacity: 0.2;
}

Grid lines are where it gets tricky. I have to apply stroke-width: 0 to the path elements in the group to hide the frame of the diagram, and I also reduce the visibility of the lines by setting their opacity.

All the other CSS rules cover the font sizes and colors, which You can find in the source code.

Wrapping up our D3.js Bar Chart Tutorial

D3.js is an amazing library for DOM manipulation and for building javascript graphs and line charts. Its depth hides countless treasures (actually not hidden, it is really well documented) that wait for discovery. This writing covers only fragments of its toolset, enough to create a not so mediocre bar chart.

Go on, explore it, use it and create spectacular JavaScript graphs & visualizations!

By the way, here’s the link to the source code.

Have You created something cool with D3.js? Share with us! Drop a comment if You have any questions or would like another JavaScript chart tutorial!

Thanks for reading and see You next time when I’m building a calendar heatmap with d3.js!

Puppeteer HTML to PDF Generation with Node.js

In this article I’m going to show how you can generate a Puppeteer PDF document from a heavily styled React web page using Node.js, headless Chrome & Docker.

Background: A few months ago one of the clients of RisingStack asked us to develop a feature where the user would be able to request a React page in PDF format. That page is basically a report/result for patients with data visualization, containing a lot of SVGs. Furthermore, there were some special requests to manipulate the layout and make some rearrangements of the HTML elements. So the PDF should have different styling and additions compared to the original React page.

puppeteer pdf

As the assignment was a bit more complex than what could have been solved with simple CSS rules, we first explored possible implementations. Essentially, we found 3 main solutions. This blogpost will walk you through these possibilities and the final implementation.

A personal comment before we get started: it’s quite a hassle, so buckle up!

Table of Contents:

Client side or Server side PDF generation?

It is possible to generate a PDF file both on the client-side and on the server-side. However, it probably makes more sense to let the backend handle it, as you don’t want to use up all the resources the user’s browser can offer.

Even so, I’ll still show solutions for both methods.

Option 1: Make a Screenshot from the DOM

At first sight, this solution seemed to be the simplest, and it turned out to be true, but it has its own limitations. If you don’t have special needs, like selectable or searchable text in the PDF, it is a good and simple way to generate one.

This method is plain and simple: create a screenshot from the page, and put it in a PDF file. Pretty straightforward. We used two packages for this approach:

  • html2canvas, to make a screenshot from the DOM
  • jsPDF, a library to generate PDFs

Let’s start coding.

npm install html2canvas jspdf

import html2canvas from 'html2canvas'
import jsPdf from 'jspdf'
 
function printPDF () {
    const domElement = document.getElementById('your-id')
    html2canvas(domElement, { onclone: (document) => {
      // hide elements (e.g. the print button) in the cloned DOM before the snapshot is taken
      document.getElementById('print-button').style.visibility = 'hidden'
    }})
    .then((canvas) => {
        const imgData = canvas.toDataURL('image/png')
        const pdf = new jsPdf()
        // size the image to the PDF page (A4 by default)
        const width = pdf.internal.pageSize.getWidth()
        const height = pdf.internal.pageSize.getHeight()
        pdf.addImage(imgData, 'PNG', 0, 0, width, height)
        pdf.save('your-filename.pdf')
    })
}

And that’s it!

Make sure you take a look at the html2canvas onclone method. It can prove to be handy when you quickly need to take a snapshot and manipulate the DOM (e.g. hide the print button) before taking the picture. I can see quite a lot of use cases for this package. Unfortunately, ours wasn’t one, as we needed to handle the PDF creation on the backend side.

Option 2: Use only a PDF Library

There are several libraries out there on NPM for this purpose, like jsPDF (mentioned above) or PDFKit. The problem with them is that I would have had to recreate the page structure if I wanted to use these libraries. That definitely hurts maintainability, as I would have needed to apply all subsequent changes to both the PDF template and the React page.

Take a look at the code below. You need to create the PDF document yourself by hand. Now you could traverse the DOM and figure out how to translate each element to PDF ones, but that is a tedious job. There must be an easier way.

const fs = require('fs')
const PDFDocument = require('pdfkit')
 
const doc = new PDFDocument()
doc.pipe(fs.createWriteStream('output.pdf'))
doc.font('fonts/PalatinoBold.ttf')
   .fontSize(25)
   .text('Some text with an embedded font!', 100, 100)
 
doc.image('path/to/image.png', {
   fit: [250, 300],
   align: 'center',
   valign: 'center'
});
 
doc.addPage()
   .fontSize(25)
   .text('Here is some vector graphics...', 100, 100)
 
doc.end()

This snippet is from the PDFKit docs. However, it can be useful if your target is a PDF file straight away and not the conversion of an already existing (and ever-changing) HTML page.

Final Option 3: Puppeteer, Headless Chrome with Node.js

What is Puppeteer? The documentation says:

Puppeteer is a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium.

It's basically a browser which you can run from Node.js. If you read the docs, the first thing they say about Puppeteer is that you can use it to 'generate screenshots and PDFs of pages'. Excellent! That's what we were looking for.

Let's install Puppeteer with npm i puppeteer, and implement our use case.

const puppeteer = require('puppeteer')
 
async function printPDF() {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('https://blog.risingstack.com', {waitUntil: 'networkidle0'});
  const pdf = await page.pdf({ format: 'A4' });
 
  await browser.close();
  return pdf
}

This is a simple function that navigates to a URL and generates a PDF file of the site.

First, we launch the browser (PDF generation is only supported in headless browser mode), then we open a new page and navigate to the provided URL.

Setting the waitUntil: ‘networkidle0’ option means that Puppeteer considers navigation to be finished when there are no network connections for at least 500 ms. (Check API docs for further information.)

After that, we save the PDF to a variable, we close the browser and return the PDF.

Note: the page.pdf method receives an options object, where you can save the file to disk with the path option as well. If path is not provided, the PDF won't be saved to disk; you'll get a buffer instead. (Later on, I discuss how you can handle it.)
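
For example, if you'd rather have Puppeteer write the file straight to disk, you could pass a path along with the other options (the file name and the printBackground flag below are just illustrative):

const pdf = await page.pdf({
  path: 'report.pdf',        // written to disk; omit this to get a buffer instead
  format: 'A4',
  printBackground: true      // include CSS backgrounds in the output
});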

In case you need to log in to generate a PDF from a protected page, first you need to navigate to the login page, inspect the form elements for their ID or name, fill them in, and then submit the form:

await page.type('#email', process.env.PDF_USER)
await page.type('#password', process.env.PDF_PASSWORD)
await page.click('#submit')

Always store login credentials in environment variables, do not hardcode them!
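
Depending on the page, you may also want to wait for the navigation triggered by the form submission to finish before generating the PDF. A minimal sketch, using the same hypothetical selector as above:

await Promise.all([
  page.waitForNavigation({ waitUntil: 'networkidle0' }),
  page.click('#submit')
])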

Style Manipulation

Puppeteer has a solution for this style manipulation too. You can insert style tags before generating the PDF, and Puppeteer will generate a file with the modified styles.

await page.addStyleTag({ content: '.nav { display: none} .navbar { border: 0px} #print-button {display: none}' })

Send file to the client and save it

Okay, now you have generated a PDF file on the backend. What to do now?

As I mentioned above, if you don’t save the file to disk, you’ll get a buffer. You just need to send that buffer with the proper content type to the front-end.

printPDF().then(pdf => {
	res.set({ 'Content-Type': 'application/pdf', 'Content-Length': pdf.length })
	res.send(pdf)
})

Now you can simply send a request to the server, to get the generated PDF.

function getPDF() {
 return axios.get(`${API_URL}/your-pdf-endpoint`, {
   responseType: 'arraybuffer',
   headers: {
     'Accept': 'application/pdf'
   }
 })
}

Once you’ve sent the request, the buffer should start downloading. Now the last step is to convert the buffer into a PDF file.

savePDF = () => {
    this.openModal('Loading...') // open modal
    return getPDF() // API call
      .then((response) => {
        const blob = new Blob([response.data], {type: 'application/pdf'})
        const link = document.createElement('a')
        link.href = window.URL.createObjectURL(blob)
        link.download = `your-file-name.pdf`
        link.click()
        this.closeModal() // close modal
      })
      .catch((err) => { /** error handling **/ })
  }
<button onClick={this.savePDF}>Save as PDF</button>

That was it! If you click on the save button, the PDF will be saved by the browser.

Using Puppeteer with Docker

I think this is the trickiest part of the implementation – so let me save you a couple of hours of Googling.

The official documentation states that “getting headless Chrome up and running in Docker can be tricky”. The official docs have a Troubleshooting section, where at the time of writing you can find all the necessary information on installing puppeteer with Docker.

If you install Puppeteer on the Alpine image, make sure you scroll down a bit to this part of the page. Otherwise, you might gloss over the fact that you cannot run the latest Puppeteer version and you also need to disable shm usage, using a flag:

const browser = await puppeteer.launch({
  headless: true,
  args: ['--disable-dev-shm-usage']
});

Otherwise, the Puppeteer subprocess might run out of memory before it even gets started properly. More info about that can be found on the troubleshooting page mentioned above.

Option 3 + 1: CSS Print Rules

One might think that simply using CSS print rules is easy from a developer's standpoint. No NPM or node modules, just pure CSS. But how do they fare when it comes to cross-browser compatibility?

When choosing CSS print rules, you have to test the outcome in every browser to make sure it provides the same layout, and it's not 100% certain that it does.

For example, inserting a break after a given element cannot be considered an esoteric use case, yet you might be surprised that you need to use workarounds to get that working in Firefox.

Unless you are a battle-hardened CSS magician with a lot of experience in creating printable pages, this can be time-consuming.

Print rules are great if you can keep the print stylesheets simple.

Let’s see an example.

@media print {
    .print-button {
        display: none;
    }
    
    .content div {
        break-after: always;
    }
}

The CSS above hides the print button and inserts a page break after every div with the class content. There is a great article that summarizes what you can do with print rules and what the difficulties are with them, including browser compatibility.

Taking everything into account, CSS print rules are great and effective if you want to make a PDF from a not so complex page.

Summary: Puppeteer PDF from HTML with Node.js

So let’s quickly go through the options we covered here for generating PDF files from HTML pages:

  • Screenshot from the DOM: This can be useful when you need to create snapshots from a page (for example to create a thumbnail), but falls short when you have a lot of data to handle.
  • Use only a PDF library: If you need to create PDF files programmatically from scratch, this is a perfect solution. Otherwise, you need to maintain the HTML and PDF templates which is definitely a no-go.
  • Puppeteer: Despite being relatively difficult to get working on Docker, it provided the best result for our use case, and it was also the easiest to write code with.
  • CSS print rules: If your users are educated enough to know how to print to a file and your pages are relatively simple, it can be the most painless solution. As you saw in our case, it wasn’t.

Make sure to reach out to RisingStack when you need help with Node, React, or just JS in general.

Have fun turning your HTML pages into PDFs!

Async Await in Node.js – How to Master it?

In this article, you will learn how you can simplify your callback or Promise based Node.js application with async functions (async await).

Whether you’ve looked at async/await and promises in JavaScript before, but haven’t quite mastered them yet, or just need a refresher, this article aims to help you.

What are async functions in Node.js?

Async functions are available natively in Node and are denoted by the async keyword in their declaration. They always return a promise, even if you don’t explicitly write them to do so. Also, the await keyword is only available inside async functions at the moment – it cannot be used in the global scope.

In an async function, you can await any Promise or catch its rejection cause.
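
A quick illustration of the implicit Promise wrapping (the function name and value are made up for this example):

async function getAnswer () {
  return 42; // automatically wrapped, as if we had returned Promise.resolve(42)
}

getAnswer().then((value) => console.log(value)); // logs 42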

So if you had some logic implemented with promises:

function handler (req, res) {
  return request('https://user-handler-service')
    .catch((err) => {
      logger.error('Http error', err);
      err.logged = true;
      throw err;
    })
    .then((response) => Mongo.findOne({ user: response.body.user }))
    .catch((err) => {
      !err.logged && logger.error('Mongo error', err);
      err.logged = true;
      throw err;
    })
    .then((document) => executeLogic(req, res, document))
    .catch((err) => {
      !err.logged && console.error(err);
      res.status(500).send();
    });
}

You can make it look like synchronous code using async/await:

async function handler (req, res) {
  let response;
  try {
    response = await request('https://user-handler-service');
  } catch (err) {
    logger.error('Http error', err);
    return res.status(500).send();
  }

  let document;
  try {
    document = await Mongo.findOne({ user: response.body.user });
  } catch (err) {
    logger.error('Mongo error', err);
    return res.status(500).send();
  }

  executeLogic(req, res, document);
}

Currently in Node you get a warning about unhandled promise rejections, so you don’t necessarily need to bother with creating a listener. However, it is recommended to crash your app in this case as when you don’t handle an error, your app is in an unknown state. This can be done either by using the --unhandled-rejections=strict CLI flag, or by implementing something like this:

process.on('unhandledRejection', (err) => { 
  console.error(err);
  process.exit(1);
})

Automatic process exit will be added in a future Node release – preparing your code ahead of time for this is not a lot of effort, but will mean that you don’t have to worry about it when you next wish to update versions.

Patterns with async functions in JavaScript

There are quite a few use cases where the ability to handle asynchronous operations as if they were synchronous comes in very handy, as solving them with Promises or callbacks requires the use of complex patterns.

Since node@10.0.0, there is support for async iterators and the related for-await-of loop. These come in handy when the actual values we iterate over, and the end state of the iteration, are not known by the time the iterator method returns – mostly when working with streams. Aside from streams, there are not a lot of constructs that have the async iterator implemented natively, so we’ll cover them in another post.
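
Just to give you a taste, here is a minimal sketch of consuming a readable stream with for-await-of (the file path is only an example; async iteration over streams was still experimental in the early node@10 releases):

const fs = require('fs');

async function countBytes (path) {
  let total = 0;
  // each chunk is awaited as the stream produces it
  for await (const chunk of fs.createReadStream(path)) {
    total += chunk.length;
  }
  return total;
}

countBytes('./package.json').then((total) => console.log(`${total} bytes`));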

Retry with exponential backoff

Implementing retry logic was pretty clumsy with Promises:

function request(url) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      // this mock always fails, so we can exercise the retry logic
      reject(new Error(`Network error when trying to reach ${url}`));
    }, 500);
  });
}

function requestWithRetry(url, retryCount, currentTries = 1) {
  return new Promise((resolve, reject) => {
    if (currentTries <= retryCount) {
      const timeout = (Math.pow(2, currentTries) - 1) * 100;
      request(url)
        .then(resolve)
        .catch((error) => {
          setTimeout(() => {
            console.log('Error: ', error);
            console.log(`Waiting ${timeout} ms`);
            // resolve with the retried request so its result propagates to the caller
            resolve(requestWithRetry(url, retryCount, currentTries + 1));
          }, timeout);
        });
    } else {
      console.log('No retries left, giving up.');
      reject(new Error('No retries left, giving up.'));
    }
  });
}

requestWithRetry('http://localhost:3000')
  .then((res) => {
    console.log(res)
  })
  .catch(err => {
    console.error(err)
  });

This gets the job done, but we can rewrite it with async/await and make it a lot simpler.

function wait (timeout) {
  return new Promise((resolve) => {
    setTimeout(() => {
      resolve()
    }, timeout);
  });
}

async function requestWithRetry (url) {
  const MAX_RETRIES = 10;
  for (let i = 0; i <= MAX_RETRIES; i++) {
    try {
      return await request(url);
    } catch (err) {
      const timeout = Math.pow(2, i);
      console.log('Waiting', timeout, 'ms');
      await wait(timeout);
      console.log('Retrying', err.message, i);
    }
  }
  throw new Error('No retries left, giving up.');
}

A lot more pleasing to the eye, isn't it?

Intermediate values

Not as hideous as the previous example, but if you have a case where 3 asynchronous functions depend on each other the following way, then you have to choose from several ugly solutions.

functionA returns a Promise, then functionB needs that value and functionC needs the resolved value of both functionA‘s and functionB‘s Promise.

Solution 1: The .then Christmas tree

function executeAsyncTask () {
  return functionA()
    .then((valueA) => {
      return functionB(valueA)
        .then((valueB) => {          
          return functionC(valueA, valueB)
        })
    })
}

With this solution, we get valueA from the surrounding closure of the 3rd then and valueB as the value the previous Promise resolves to. We cannot flatten out the Christmas tree as we would lose the closure and valueA would be unavailable for functionC.

Solution 2: Moving to a higher scope

function executeAsyncTask () {
  let valueA
  return functionA()
    .then((v) => {
      valueA = v
      return functionB(valueA)
    })
    .then((valueB) => {
      return functionC(valueA, valueB)
    })
}

In the Christmas tree, we used a higher scope to make valueA available as well. This case works similarly, but now we created the variable valueA outside the scope of the .then-s, so we can assign the value of the first resolved Promise to it.

This one definitely works, flattens the .then chain and is semantically correct. However, it also opens up ways for new bugs in case the variable name valueA is used elsewhere in the function. We also need to use two names — valueA and v — for the same value.

Are you looking for help with enterprise-grade Node.js Development?
Hire the Node developers of RisingStack!

Solution 3: The unnecessary array

function executeAsyncTask () {
  return functionA()
    .then(valueA => {
      return Promise.all([valueA, functionB(valueA)])
    })
    .then(([valueA, valueB]) => {
      return functionC(valueA, valueB)
    })
}

There is no reason for valueA to be passed on in an array together with the Promise functionB returns other than to be able to flatten the tree. The two values might be of completely different types, so there is a high probability that they don't belong in an array at all.

Solution 4: Write a helper function

const converge = (...promises) => (...args) => {
  let [head, ...tail] = promises
  if (tail.length) {
    return head(...args)
      .then((value) => converge(...tail)(...args.concat([value])))
  } else {
    return head(...args)
  }
}

functionA(2)
  .then((valueA) => converge(functionB, functionC)(valueA))

You can, of course, write a helper function to hide away the context juggling, but it is quite difficult to read, and may not be straightforward to understand for those who are not well versed in functional magic.

By using async/await our problems are magically gone:

async function executeAsyncTask () {
  const valueA = await functionA();
  const valueB = await functionB(valueA);
  return functionC(valueA, valueB);
}

Multiple parallel requests with async/await

This is similar to the previous one. In case you want to execute several asynchronous tasks at once and then use their values at different places, you can do it easily with async/await:

async function executeParallelAsyncTasks () {
  const [ valueA, valueB, valueC ] = await Promise.all([ functionA(), functionB(), functionC() ]);
  doSomethingWith(valueA);
  doSomethingElseWith(valueB);
  doAnotherThingWith(valueC);
}

As we’ve seen in the previous example, we would either need to move these values into a higher scope or create a non-semantic array to pass these values on.

Array iteration methods

You can use map, filter and reduce with async functions, although they behave pretty unintuitively. Try guessing what the following scripts will print to the console:

  1. map

function asyncThing (value) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(value), 100);
  });
}

async function main () {
  return [1,2,3,4].map(async (value) => {
    const v = await asyncThing(value);
    return v * 2;
  });
}

main()
  .then(v => console.log(v))
  .catch(err => console.error(err));

  2. filter

function asyncThing (value) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(value), 100);
  });
}

async function main () {
  return [1,2,3,4].filter(async (value) => {
    const v = await asyncThing(value);
    return v % 2 === 0;
  });
}

main()
  .then(v => console.log(v))
  .catch(err => console.error(err));

  3. reduce

function asyncThing (value) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(value), 100);
  });
}

async function main () {
  return [1,2,3,4].reduce(async (acc, value) => {
    return await acc + await asyncThing(value);
  }, Promise.resolve(0));
}

main()
  .then(v => console.log(v))
  .catch(err => console.error(err));

Solutions:

  1. [ Promise { <pending> }, Promise { <pending> }, Promise { <pending> }, Promise { <pending> } ]
  2. [ 1, 2, 3, 4 ]
  3. 10

If you log the returned values of the iteratee with map you will see the array we expect: [ 2, 4, 6, 8 ]. The only problem is that each value is wrapped in a Promise by the AsyncFunction.

So if you want to get your values, you’ll need to unwrap them by passing the returned array to a Promise.all:

main()
  .then(v => Promise.all(v))
  .then(v => console.log(v))
  .catch(err => console.error(err));

Originally, you would first wait for all your promises to resolve and then map over the values:

function main () {
  return Promise.all([1,2,3,4].map((value) => asyncThing(value)));
}

main()
  .then(values => values.map((value) => value * 2))
  .then(v => console.log(v))
  .catch(err => console.error(err));

This seems a bit simpler, doesn't it?

The async/await version can still be useful if you have some long running synchronous logic in your iteratee and another long-running async task.

This way you can start calculating as soon as you have the first value – you don't have to wait for all the Promises to be resolved to run your computations. Even though the results will still be wrapped in Promises, those are resolved a lot faster than if you did it the sequential way.

What about filter? Something is clearly wrong…

Well, you guessed it: even though the returned values are [ false, true, false, true ], they will be wrapped in promises, which are truthy, so you’ll get back all the values from the original array. Unfortunately, all you can do to fix this is to resolve all the values and then filter them.
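
A small helper along those lines could look like the following sketch (asyncThing is the same function as above):

async function asyncFilter (array, predicate) {
  // resolve every predicate Promise first...
  const results = await Promise.all(array.map(predicate));
  // ...then keep the items whose predicate resolved to true
  return array.filter((_, index) => results[index]);
}

asyncFilter([1, 2, 3, 4], async (value) => (await asyncThing(value)) % 2 === 0)
  .then((evens) => console.log(evens)); // [ 2, 4 ]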

Reducing is pretty straightforward. Bear in mind though that you need to wrap the initial value into Promise.resolve, as the returned accumulator will be wrapped as well and has to be await-ed.

If you are a fan of the functional programming style, you might want to pass on async/await, as it is pretty clearly intended to be used for imperative code styles.

To make your .then chains more “pure” looking, you can use Ramda’s pipeP and composeP functions.

Rewriting callback-based Node.js applications

Async functions return a Promise by default, so you can rewrite any callback-based function to use Promises, then await their resolution. You can use the util.promisify function in Node.js to turn callback-based functions into ones that return a Promise.
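
For example, a minimal sketch of promisifying fs.readFile (the file name is only an example):

const { promisify } = require('util');
const fs = require('fs');

const readFile = promisify(fs.readFile);

async function printPackageName () {
  const contents = await readFile('./package.json', 'utf8');
  console.log(JSON.parse(contents).name);
}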

Rewriting Promise-based applications

Simple .then chains can be upgraded in a pretty straightforward way, so you can move to using async/await right away.

function asyncTask () {
  return functionA()
    .then((valueA) => functionB(valueA))
    .then((valueB) => functionC(valueB))
    .then((valueC) => functionD(valueC))
    .catch((err) => logger.error(err))
}
 

will turn into

async function asyncTask () {
  try {
    const valueA = await functionA();
    const valueB = await functionB(valueA);
    const valueC = await functionC(valueB);
    return await functionD(valueC);
  } catch (err) {
    logger.error(err);
  }
}

Rewriting Node.js apps with async await

  • If you liked the good old concepts of if-else conditionals and for/while loops,
  • if you believe that a try-catch block is the way errors are meant to be handled,

you will have a great time rewriting your services using async/await.

As we have seen, it can make several patterns a lot easier to code and read, so it is definitely more suitable in several cases than Promise.then() chains. However, if you are caught up in the functional programming craze of the past years, you might wanna pass on this language feature.

Are you already using async/await in production, or you plan on never touching it? Let’s discuss it in the comments below.

Are you looking for help with enterprise-grade Node.js Development?
Hire the Node developers of RisingStack!

Handling runtime environment variables in create-react-apps

Have you ever run into a problem in production/staging, when you just wanted to change the API URL in your React app in a quick and easy way?

Usually, to change the API URL you need to rebuild your application and redeploy it. If it’s in a Docker container, you need to rebuild the whole image again to fix the issue, which can cause downtime. If it’s behind a CDN, you need to clear the cache too. Also, in most cases, you need to make/maintain two different builds for staging and production just because you are using different API URLs.

Of course, there have been solutions to these kinds of problems, but I found that none of them was self-explanatory, and they all required some time to understand.

The resources out there are confusing: there are quite a lot of them, and none was a package I could install and use easily. Many of them are Node.js servers that the client queries at startup on a specific URL (/config for example); others require hard-coding the API URLs and changing them based on NODE_ENV, or rely on bash script injection (which is not great for someone developing on Windows without WSL), etc.

I wanted something that works well on any OS and also works the same in production.

We have come up with our own solutions over the years here at RisingStack, but everyone had a different opinion about the best way to handle runtime environment variables in client apps. So I decided to give it a try and write a package to solve this problem once and for all (at least for me :)).

I believe that my new package, runtime-env-cra, solves this problem in a quick and easy way. You won't need to build different images anymore just because you want to change an environment variable.

Cool, how should I use or migrate to runtime-env-cra?

Let’s say you have a .env file in your root already with the following environment variables.

NODE_ENV=production
REACT_APP_API_URL=https://api.my-awesome-website.com
REACT_APP_MAIN_STYLE=dark
REACT_APP_WEBSITE_NAME=My awesome website
REACT_APP_DOMAIN=https://my-awesome-website.com

You are using these environment variables in your code as process.env.REACT_APP_API_URL now.

Let’s configure the runtime-env-cra package, and see how our env usage will change in the code!

$ npm install runtime-env-cra

Modify your start script to the following in your package.json:

...
"scripts": {
"start": "NODE_ENV=development runtime-env-cra --config-name=./public/runtime-env.js && react-scripts start",
...
}
...

You can see the --config-name parameter passed to the script; we use it to tell the tool where to place the generated config file on start.

NOTE: You can change the name and location with the --config-name flag. If you want a different file name, feel free to change it, but in this article and examples, I’m going to use runtime-env.js. The config file in the provided folder will be injected during webpack build time.

If you are using another name than .env for your environment variables file, you can also provide that with the --env-file flag. By default --env-file flag uses ./.env.

Add the following to public/index.html inside the <head> tag:

<!-- Runtime environment variables -->
<script src="%PUBLIC_URL%/runtime-env.js"></script>

This runtime-env.js will look like this:

window.__RUNTIME_CONFIG__ = {"NODE_ENV":"development","REACT_APP_API_URL":"https://api.my-awesome-website.com","REACT_APP_MAIN_STYLE":"dark","REACT_APP_WEBSITE_NAME":"My awesome website","REACT_APP_DOMAIN":"https://my-awesome-website.com"};

During local development, we want to always use the .env file (or the one you provided with the --env-file flag), so that’s why you need to provide NODE_ENV=development to the script.

If it gets development, it means you want to use the contents of your .env. If you provide anything other than development (or nothing at all) for NODE_ENV, it will parse the variables from your session's environment instead.

And as the last step, replace process.env with window.__RUNTIME_CONFIG__ in our application, and we are good to go!
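
In practice the change is mechanical; for example (the variable name comes from the .env file above):

// before
const apiUrl = process.env.REACT_APP_API_URL;

// after
const apiUrl = window.__RUNTIME_CONFIG__.REACT_APP_API_URL;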

What if I’m using TypeScript?

If you are using TypeScript, you might be wondering how to get autocompletion for this. All you need to do is create a src/types/globals.ts file with the following content (modify the __RUNTIME_CONFIG__ properties to match your environment):

export {};

declare global {
 interface Window {
   __RUNTIME_CONFIG__: {
     NODE_ENV: string;
     REACT_APP_API_URL: string;
     REACT_APP_MAIN_STYLE: string;
     REACT_APP_WEBSITE_NAME: string;
     REACT_APP_DOMAIN: string;
   };
 }
}

Add "include": ["src/types"] to your tsconfig.json:

{
"compilerOptions": { ... },
"include": ["src/types"]
}

Now you have TypeScript support also. 🙂

What about Docker, and running in production?

Here's an example of an Alpine-based Dockerfile with a multi-stage build, using Nginx to serve our client.

# -- BUILD --
FROM node:12.13.0-alpine as build

WORKDIR /usr/src/app

COPY package* ./
RUN npm install

COPY . .
RUN npm run build

# -- RELEASE --
FROM nginx:stable-alpine as release

COPY --from=build /usr/src/app/build /usr/share/nginx/html
# copy .env.example as .env to the release build
COPY --from=build /usr/src/app/.env.example /usr/share/nginx/html/.env
COPY --from=build /usr/src/app/nginx/default.conf /etc/nginx/conf.d/default.conf

RUN apk add --update nodejs
RUN apk add --update npm
RUN npm install -g runtime-env-cra@0.2.2

WORKDIR /usr/share/nginx/html

EXPOSE 80

CMD ["/bin/sh", "-c", "runtime-env-cra && nginx -g \"daemon off;\""]

The key point here is to have a .env.example in your project, which represents your environment variable layout. From it, the script knows which variables it needs to parse from the system. Inside the container, we can lean on that copied .env as a reference point.

Make sure you start the app with runtime-env-cra && nginx in the CMD section; this way, the script can always parse the newly added or modified environment variables passed to your container.

Hope you will find it useful!

Building a Real-Time Webapp with Node.js and Socket.io

In this blogpost we showcase a project we recently finished for National Democratic Institute, an NGO that supports democratic institutions and practices worldwide. NDI’s mission is to strengthen political and civic organizations, safeguard elections and promote citizen participation, openness and accountability in government.

Our assignment was to build an MVP of an application that supports the facilitators of a cybersecurity themed interactive simulation game. As this webapp needs to be used by several people on different machines at the same time, it needed real-time synchronization which we implemented using Socket.io.

In the following article you can learn more about how we approached the project, how we structured the data access layer and how we solved challenges around creating our websocket server, just to mention a few. The final code of the project is open-source, and you’re free to check it out on Github.

A Brief Overview of the CyberSim Project

Political parties are at extreme risk from hackers and other adversaries; however, they rarely understand the range of threats they face. When they do get cybersecurity training, it's often in the form of dull, technically complicated lectures. To help parties and campaigns better understand the challenges they face, NDI developed a cybersecurity simulation (CyberSim) about a political campaign rocked by a range of security incidents. The goal of the CyberSim is to facilitate buy-in for and implementation of better security practices by helping political campaigns assess their own readiness and experience the potential consequences of unmitigated risks.

The CyberSim is broken down into three core segments: preparation, simulation, and an after action review. During the preparation phase, participants are introduced to a fictional (but realistic) game-play environment, their roles, and the rules of the game. They are also given an opportunity to select security-related mitigations from a limited budget, providing an opportunity to “secure their systems” to the best of their knowledge and ability before the simulation begins.

The simulation itself runs for 75 minutes, during which time the participants have the ability to take actions to raise funds, boost support for their candidate and, most importantly, respond to events that occur that may negatively impact their campaign’s success. These events are meant to test the readiness, awareness and skills of the participants related to information security best practices. The simulation is designed to mirror the busyness and intensity of a typical campaign environment.

The after action review is in many ways the most critical element of the CyberSim exercise. During this segment, CyberSim facilitators and participants review what happened during the simulation, what events lead to which problems during the simulation, and what actions the participants took (or should have taken) to prevent security incidents from occurring. These lessons are closely aligned with the best practices presented in the Cybersecurity Campaigns Playbook, making the CyberSim an ideal opportunity to reinforce existing knowledge or introduce new best practices presented there.

Since data representation serves as the skeleton of each application, Norbert – who built part of the app – will first walk you through the data layer created using knex and Node.js. Then he will move on to the program's heart, the socket server that manages real-time communication.

This is going to be a series of articles, so in the next part, we will look at the frontend, which is built with React. Finally, in the third post, Norbert will present the muscle that is the project’s infrastructure. We used Amazon’s tools to create the CI/CD, host the webserver, the static frontend app, and the database.

Now that we’re through with the intro, you can enjoy reading this Socket.io tutorial / Case Study from Norbert:

The Project’s Structure

Before diving deep into the data access layer, let’s take a look at the project’s structure:

.

├── migrations
│   └── ...
├── seeds
│   └── ...
├── src
│   ├── config.js
│   ├── logger.js
│   ├── constants
│   │   └── ...
│   ├── models
│   │   └── ...
│   ├── util
│   │   └── ...
│   ├── app.js
│   └── socketio.js
└── index.js

As you can see, the structure is relatively straightforward, as we’re not really deviating from a standard Node.js project structure. To better understand the application, let’s start with the data model.

The Data Access Layer

Each game starts with a preprogrammed poll percentage and an available budget. Throughout the game, threats (called injections) occur at predefined times (e.g., in the second minute), and players have to respond to them. To spice things up, the staff has several systems that are required to make responses and take actions. These systems often go down as a result of injections. The game's final goal is simple: the players have to maximize their party's poll by answering each threat.

We used a PostgreSQL database to store the state of each game. Tables that make up the data model can be classified into two different groups: setup and state tables. Setup tables store data that are identical and constant for each game, such as:

  • injections – contains each threat players face during the game, e.g., Databreach
  • injection responses – a one-to-many table that shows the possible reactions for each injection
  • actions – operations that have an immediate one-time effect, e.g., Campaign advertisement
  • systems – tangible and intangible IT assets, which are prerequisites of specific responses and actions, e.g., HQ Computers
  • mitigations – tangible and intangible assets that mitigate upcoming injections, e.g., Create a secure backup for the online party voter database
  • roles – different divisions of a campaign party, e.g., HQ IT Team
  • curveball events – one-time events controlled by the facilitators, e.g., Banking system crash

On the other hand, state tables define the state of a game and change during the simulation. These tables are the following:

  • game – properties of a game like budget, poll, etc.
  • game systems – stores the condition of each system (is it online or offline) throughout the game
  • game mitigations – shows if players have bought each mitigation
  • game injection – stores information about injections that have happened, e.g., was it prevented, the responses made to it
  • game log

To help you visualize the database schema, have a look at the following diagram. Please note that the game_log table was intentionally left out of the diagram, since it adds unnecessary complexity to the picture and doesn't really help in understanding the core functionality of the game:

[Database schema diagram]

To sum up, state tables always store any ongoing game’s current state. Each modification done by a facilitator must be saved and then transported back to every coordinator. To do so, we defined a method in the data access layer to return the current state of the game by calling the following function after the state is updated:

// ./src/models/game.js
const db = require('./db');

const getGame = (id) =>
db('game')
  .select(
    'game.id',
    'game.state',
    'game.poll',
    'game.budget',
    'game.started_at',
    'game.paused',
    'game.millis_taken_before_started',
    'i.injections',
    'm.mitigations',
    's.systems',
    'l.logs',
  )
  .where({ 'game.id': id })
  .joinRaw(
    `LEFT JOIN (SELECT gm.game_id, array_agg(to_json(gm)) AS mitigations FROM game_mitigation gm GROUP BY gm.game_id) m ON m.game_id = game.id`,
  )
  .joinRaw(
    `LEFT JOIN (SELECT gs.game_id, array_agg(to_json(gs)) AS systems FROM game_system gs GROUP BY gs.game_id) s ON s.game_id = game.id`,
  )
  .joinRaw(
    `LEFT JOIN (SELECT gi.game_id, array_agg(to_json(gi)) AS injections FROM game_injection gi GROUP BY gi.game_id) i ON i.game_id = game.id`,
  )
  .joinRaw(
    `LEFT JOIN (SELECT gl.game_id, array_agg(to_json(gl)) AS logs FROM game_log gl GROUP BY gl.game_id) l ON l.game_id = game.id`,
  )
  .first();
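
For reference, the db module required at the top of this file is just a configured knex instance. A minimal sketch of what it might look like (the connection settings here are illustrative, not the project's actual config):

// a minimal sketch of the db module required above
const knex = require('knex');

module.exports = knex({
  client: 'pg',                         // PostgreSQL, as described above
  connection: process.env.DATABASE_URL, // e.g. postgres://user:pass@host:5432/cybersim
  pool: { min: 2, max: 10 },
});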

The const db = require('./db'); line returns this database connection, established via knex and used for querying and updating the database. By calling the function above, the current state of a game can be retrieved, including each mitigation already purchased and still available for sale, online and offline systems, injections that have happened, and the game's log. Here is an example of how this logic is applied after a facilitator triggers a curveball event:

// ./src/models/game.js
const performCurveball = async ({ gameId, curveballId }) => {
 try {
   const game = await db('game')
     .select(
       'budget',
       'poll',
       'started_at as startedAt',
       'paused',
       'millis_taken_before_started as millisTakenBeforeStarted',
     )
     .where({ id: gameId })
     .first();

   const { budgetChange, pollChange, loseAllBudget } = await db('curveball')
     .select(
       'lose_all_budget as loseAllBudget',
       'budget_change as budgetChange',
       'poll_change as pollChange',
     )
     .where({ id: curveballId })
     .first();

   await db('game')
     .where({ id: gameId })
     .update({
       budget: loseAllBudget ? 0 : Math.max(0, game.budget + budgetChange),
       poll: Math.min(Math.max(game.poll + pollChange, 0), 100),
     });

   await db('game_log').insert({
     game_id: gameId,
     game_timer: getTimeTaken(game),
     type: 'Curveball Event',
     curveball_id: curveballId,
   });
 } catch (error) {
   logger.error('performCurveball ERROR: %s', error);
   throw new Error('Server error on performing action');
 }
 return getGame(gameId);
};

As you can see, after the update to the game's state happens (which this time is a change in budget and poll), the program calls the getGame function and returns its result. By applying this logic, we can manage the state easily. We only have to arrange the coordinators of the same game into groups, map each possible event to a corresponding function in the models folder, and broadcast the game to everyone after someone makes a change. Let's see how we achieved that by leveraging WebSockets.

Creating Our Real-Time Socket.io Server with Node.js

As the software we've created is a companion app to an actual tabletop game played at different locations, it is as real time as it gets. To handle such use cases, where the state of the UIs needs to be synchronized across multiple clients, WebSockets are the go-to solution. To implement the WebSocket server and client, we chose to use Socket.io. While Socket.io clearly comes with a huge performance overhead, it freed us from a lot of hassle that arises from the stateful nature of WebSocket connections. As the expected load was minuscule, the overhead Socket.io introduced was way overshadowed by the savings in development time it provided. One of the killer features of Socket.io that fit our use case very well was that operators who join the same game can be separated easily using socket.io rooms. This way, after a participant updates the game, we can broadcast the new state to the entire room (everyone who has joined a particular game).

To create a socket server, all we need is a Server instance created by the createServer method of the default Node.js http module. For maintainability, we organized the socket.io logic into its own separate module (see: ./src/socketio.js). This module exports a factory function with one argument: an http Server object. Let's have a look at it:

// ./src/socketio.js
const socketio = require('socket.io');

const SocketEvents = require('./constants/SocketEvents');

module.exports = (http) => {
  const io = socketio(http);

  io.on(SocketEvents.CONNECT, (socket) => {
    socket.on('EVENT', (input) => {
      // DO something with the given input
    });
  });
};
// index.js
const { createServer } = require('http');
const app = require('./src/app'); // Express app
const createSocket = require('./src/socketio');
const logger = require('./src/logger');

const port = process.env.PORT || 3001;
const http = createServer(app);
createSocket(http);

const server = http.listen(port, () => {
  logger.info(`Server is running at port: ${port}`);
});

As you can see, the socket server logic is implemented inside the factory function. In the index.js file then this function is called with the http Server. We didn’t have to implement authorization during this project, so there isn’t any socket.io middleware that authenticates each client before establishing the connection. Inside the socket.io module, we created an event handler for each possible action a facilitator can perform, including the documentation of responses made to injections, buying mitigations, restoring systems, etc. Then we mapped our methods defined in the data access layer to these handlers.

Bringing together facilitators

I previously mentioned that rooms make it easy to group facilitators by the game they have currently joined. A facilitator can enter a room either by creating a fresh new game or by joining an existing one. Translating this to "WebSocket language", the client emits a createGame or joinGame event. Let's have a look at the corresponding implementation:

// ./src/socketio.js
const socketio = require('socket.io');

const SocketEvents = require('./constants/SocketEvents');
const logger = require('./logger');
const {
 createGame,
 getGame,
} = require('./models/game');

module.exports = (http) => {
 const io = socketio(http);

 io.on(SocketEvents.CONNECT, (socket) => {
   logger.info('Facilitator CONNECT');
   let gameId = null;

   socket.on(SocketEvents.DISCONNECT, () => {
     logger.info('Facilitator DISCONNECT');
   });

   socket.on(SocketEvents.CREATEGAME, async (id, callback) => {
     logger.info('CREATEGAME: %s', id);
     try {
       const game = await createGame(id);
       if (gameId) {
         await socket.leave(gameId);
       }
       await socket.join(id);
       gameId = id;
       callback({ game });
     } catch (_) {
       callback({ error: 'Game id already exists!' });
     }
   });

   socket.on(SocketEvents.JOINGAME, async (id, callback) => {
     logger.info('JOINGAME: %s', id);
     try {
       const game = await getGame(id);
       if (!game) {
         return callback({ error: 'Game not found!' });
       }
       if (gameId) {
         await socket.leave(gameId);
       }
       await socket.join(id);
       gameId = id;
       callback({ game });
     } catch (error) {
       logger.error('JOINGAME ERROR: %s', error);
       callback({ error: 'Server error on join game!' });
     }
   });
  });
};

If you examine the code snippet above, the gameId variable contains the id of the game the facilitator has currently joined. By utilizing JavaScript closures, we declared this variable inside the connect callback function, hence it will be in scope for all the following handlers. If an organizer tries to create a game while already playing one (which means that gameId is not null), the socket server first kicks the facilitator out of the previous game's room, then joins the facilitator to the new game's room. This is managed by the leave and join methods. The process flow of the joinGame handler is almost identical. The only key difference is that this time the server doesn't create a new game. Instead, it queries the already existing one using the getGame method of the data access layer we saw earlier.

What Makes Up Our Event Handlers?

After we successfully brought together our facilitators, we had to create a different handler for each possible event. For the sake of completeness, let’s look at all the events that occur during a game:

  • createGame, joinGame: these events' single purpose is to join the organizer to the correct game room.
  • startSimulation, pauseSimulation, finishSimulation: these events are used to start the game's timer, pause the timer, and stop the game entirely. Once someone emits a finishSimulation event, the game can't be restarted.
  • deliverInjection: using this event, facilitators trigger security threats, which should occur at a given time of the game.
  • respondToInjection, nonCorrectRespondToInjection: these events record the responses made to injections.
  • restoreSystem: this event is to restore any system which is offline due to an injection.
  • changeMitigation: this event is triggered when players buy mitigations to prevent injections.
  • performAction: when the playing staff performs an action, the client emits this event to the server.
  • performCurveball: this event occurs when a facilitator triggers unique injections.

These event handlers implement the following rules:

  • They take up to two arguments, an optional input, which is different for each event, and a predefined callback. The callback is an exciting feature of socket.io called acknowledgment. It lets us create a callback function on the client-side, which the server can call with either an error or a game object. This call will then affect the client-side. Without diving deep into how the front end works (since this is a topic for another day), this function pops up an alert with either an error or a success message. This message will only appear for the facilitator who initiated the event.
  • They update the state of the game by the given inputs according to the event’s nature.
  • They broadcast the new state of the game to the entire room. Hence we can update the view of all organizers accordingly.

First, let’s build on our previous example and see how the handler implemented the curveball events.

// ./src/socketio.js
const socketio = require('socket.io');

const SocketEvents = require('./constants/SocketEvents');
const logger = require('./logger');
const {
 performCurveball,
} = require('./models/game');

module.exports = (http) => {
 const io = socketio(http);

 io.on(SocketEvents.CONNECT, (socket) => {
   logger.info('Facilitator CONNECT');
   let gameId = null;

   socket.on(
     SocketEvents.PERFORMCURVEBALL,
     async ({ curveballId }, callback) => {
       logger.info(
         'PERFORMCURVEBALL: %s',
         JSON.stringify({ gameId, curveballId }),
       );
       try {
         const game = await performCurveball({
           gameId,
           curveballId,
         });
         io.in(gameId).emit(SocketEvents.GAMEUPDATED, game);
         callback({ game });
       } catch (error) {
         callback({ error: error.message });
       }
     },
   );
  });
};

The curveball event handler takes one input, a curveballId and the callback as mentioned earlier.

The performCurveball method then updates the game’s poll and budget and returns the new game object. If the update is successful, the socket server emits a gameUpdated event to the game room with the latest state. Then it calls the callback function with the game object. If any error occurs, it is called with an error object.
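
On the client side (which the next article covers in detail), consuming this looks roughly like the sketch below. The string event names and the updateView/showAlert helpers are assumptions standing in for the shared SocketEvents constants and the actual React code:

import io from 'socket.io-client';

const socket = io('http://localhost:3001');

// every facilitator in the room receives the broadcasted state
socket.on('gameUpdated', (game) => updateView(game));

// trigger a curveball and use the acknowledgment callback for direct feedback
socket.emit('performCurveball', { curveballId: 3 }, ({ game, error }) => {
  if (error) {
    showAlert(error);
    return;
  }
  updateView(game);
});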

After a facilitator creates a game, first, a preparation view is loaded for the players. In this stage, staff members can spend a portion of their budget to buy mitigations before the game starts. Once the game begins, it can be paused, restarted, or even stopped permanently. Let’s have a look at the corresponding implementation:

// ./src/socketio.js
const socketio = require('socket.io');

const SocketEvents = require('./constants/SocketEvents');
const logger = require('./logger');
const {
 startSimulation,
 pauseSimulation
} = require('./models/game');

module.exports = (http) => {
 const io = socketio(http);

 io.on(SocketEvents.CONNECT, (socket) => {
   logger.info('Facilitator CONNECT');
   let gameId = null;

   socket.on(SocketEvents.STARTSIMULATION, async (callback) => {
     logger.info('STARTSIMULATION: %s', gameId);
     try {
       const game = await startSimulation(gameId);
       io.in(gameId).emit(SocketEvents.GAMEUPDATED, game);
       callback({ game });
     } catch (error) {
       callback({ error: error.message });
     }
   });

   socket.on(SocketEvents.PAUSESIMULATION, async (callback) => {
     logger.info('PAUSESIMULATION: %s', gameId);
     try {
       const game = await pauseSimulation({ gameId });
       io.in(gameId).emit(SocketEvents.GAMEUPDATED, game);
       callback({ game });
     } catch (error) {
       callback({ error: error.message });
     }
   });

   socket.on(SocketEvents.FINISHSIMULATION, async (callback) => {
     logger.info('FINISHSIMULATION: %s', gameId);
     try {
       const game = await pauseSimulation({ gameId, finishSimulation: true });
       io.in(gameId).emit(SocketEvents.GAMEUPDATED, game);
       callback({ game });
     } catch (error) {
       callback({ error: error.message });
     }
   });
  });
};

The startSimulation method kicks off the game's timer, and the pauseSimulation method pauses or stops the game. The elapsed time is essential to determine which injections facilitators can invoke. After organizers trigger a threat, they hand over all necessary assets to the players. Staff members can then choose how they respond to the injection by providing a custom response or choosing from the predefined options. Besides facing threats, staff members perform actions, restore systems, and buy mitigations. The events corresponding to these activities can be triggered anytime during the game. These event handlers follow the same pattern and implement the three fundamental rules listed above. Please check the public GitHub repo if you would like to examine these callbacks.

Serving The Setup Data

In the chapter explaining the data access layer, I classified tables into two different groups: setup and state tables. State tables contain the condition of ongoing games. This data is served and updated via the event-based socket server. On the other hand, setup data consists of the available systems, mitigations, actions and curveball events, the injections that occur during the game, and each possible response to them. This data is exposed via a simple HTTP server. After a facilitator joins a game, the React client requests this data, then caches and uses it throughout the game. The HTTP server is implemented using the express library. Let's have a look at our app.js.

// .src/app.js
const helmet = require('helmet');
const express = require('express');
const cors = require('cors');
const expressPino = require('express-pino-logger');

const logger = require('./logger');
const db = require('./db'); // the same knex connection the models use (path assumed for this excerpt)
const { getResponses } = require('./models/response');
const { getInjections } = require('./models/injection');
const { getActions } = require('./models/action');

const app = express();

app.use(helmet());
app.use(cors());
app.use(
 expressPino({
   logger,
 }),
);

// STATIC DB data is exposed via REST api

app.get('/mitigations', async (req, res) => {
 const records = await db('mitigation');
 res.json(records);
});

app.get('/systems', async (req, res) => {
 const records = await db('system');
 res.json(records);
});

app.get('/injections', async (req, res) => {
 const records = await getInjections();
 res.json(records);
});

app.get('/responses', async (req, res) => {
 const records = await getResponses();
 res.json(records);
});

app.get('/actions', async (req, res) => {
 const records = await getActions();
 res.json(records);
});

app.get('/curveballs', async (req, res) => {
 const records = await db('curveball');
 res.json(records);
});

module.exports = app;

As you can see, everything is pretty standard here. We didn’t need to implement any method other than GET since this data is inserted and changed using seeds.

Final Thoughts On Our Socket.io Game

Now we can put together how the backend works. State tables store the games’ state, and the data access layer returns the new game state after each update. The socket server organizes the facilitators into rooms, so each time someone changes something, the new game is broadcasted to the entire room. Hence we can make sure that everyone has an up-to-date view of the game. In addition to dynamic game data, static tables are accessible via the http server.

Next time, we will look at how the React client manages all this, and after that I’ll present the infrastructure behind the project. You can check out the code of this app in the public GitHub repo!

In case you're looking for experienced full-stack developers, feel free to reach out to us via info@risingstack.com, or by using the form below this article.

You can also check out our Node.js Development & Consulting service page for more info on our capabilities.

Sometimes you do need Kubernetes! But how should you decide?

At RisingStack, we help companies to adopt cloud-native technologies, or if they have already done so, to get the most mileage out of them.

Recently, I’ve been invited to Google DevFest to deliver a presentation on our experiences working with Kubernetes.

Below I talk about an online learning and streaming platform where the decision to use Kubernetes has been contested both internally and externally since the beginning of its development.

The application and its underlying infrastructure were designed to meet the needs of the regulations of several countries:

  • The app should be able to run on-premises, so students’ data could never leave a given country. Also, the app had to be available as a SaaS product as well.
  • It can be deployed as a single-tenant system where a business customer only hosts one instance serving a handful of users, but some schools could have hundreds of users.
  • Or it can be deployed as a multi-tenant system where the client is e.g. a government and needs to serve thousands of schools and millions of users.

The application itself was developed by multiple, geographically scattered teams, thus a microservices architecture was justified, but both the distributed system and the underlying infrastructure seemed to be overkill when we considered the fact that at the product's initial market entry, most of its customers would need small instances.

Was Kubernetes suited for the job, or was it overkill? Did our client really need Kubernetes?

Let’s figure it out.

(Feel free to check out the video presentation, or the extended article version below!)

Let’s talk a bit about Kubernetes itself!

Kubernetes is an open-source container orchestration engine that has a vast ecosystem. If you run into any kind of problem, there’s probably a library somewhere on the internet that already solves it.

But Kubernetes also has a daunting learning curve, and initially, it’s pretty complex to manage. Cloud ops / infrastructure engineering is a complex and big topic in and of itself.

Kubernetes does not really mask away the complexity from you, but plunges you into deep water as it merely gives you a unified control plane to handle all those moving parts that you need to care about in the cloud.

So, if you’re just starting out right now, then it’s better to start with small things and not with the whole package straight away! First, deploy a VM in the cloud. Use some PaaS or FaaS solutions to play around with one of your apps. It will help you gradually build up the knowledge you need on the journey.

So you want to decide if Kubernetes is for you.

First and foremost, Kubernetes is for you if you work with containers! (It kinda speaks for itself for a container orchestration system). But you should also have more than one service or instance.

Kubernetes makes sense when you have a huge microservice architecture, or you have dedicated instances per tenant having a lot of tenants as well.

Also, your services should be stateless, and your state should be stored in databases outside of the cluster. Another selling point of Kubernetes is the fine-grained control over the network.

And, maybe the most common argument for using Kubernetes is that it provides easy scalability.

Okay, and now let’s take a look at the flip side of it.

Kubernetes is not for you if you don’t need scalability!

If your services rely heavily on disks, then you should think twice if you want to move to Kubernetes or not. Basically, one disk can only be attached to a single node, so all the services need to reside on that one node. Therefore you lose node auto-scaling, which is one of the biggest selling points of Kubernetes.

For similar reasons, you probably shouldn’t use k8s if you don’t host your infrastructure in the public cloud. When you run your app on-premises, you need to buy the hardware beforehand and you cannot just conjure machines out of thin air. So basically, you also lose node auto-scaling, unless you’re willing to go hybrid cloud and bleed over some of your excess load by spinning up some machines in the public cloud.

kubernetes-is-not-for-you-if

If you have a monolithic application that serves all your customers and you need some scaling here and there, then cloud service providers can handle it for you with autoscaling groups.

There is really no need to bring in Kubernetes for that.

Let’s see our Kubernetes case-study!

Maybe it’s a little bit more tangible if we talk about an actual use case, where we had to go through the decision making process.

online-learning-platform-kubernetes

Online Learning Platform is an application that you could imagine as if you took your classroom and moved it to the internet.

You can have conference calls. You can share files as handouts, you can have a whiteboard, and you can track the progress of your students.

This project started during the first wave of the lockdowns around March, so one thing that we needed to keep in mind was that time to market was essential.

In other words: we had to do everything very, very quickly!

This product targets mostly schools around Europe, but it is now used by corporations as well.

So, we’re talking about millions of users from the moment we went to market.

The product needed to run on-premises, because one of the main targets was governments.

Initially, we were provided with a proposed infrastructure where each school would have its own VM, and all the services and all the databases would reside in those VMs.

Handling that many virtual machines, properly handling rollouts to those, and monitoring all of them sounded like a nightmare to begin with. Especially if we consider the fact that we only had a couple of weeks to go live.

After studying the requirements and the proposal, it was time to call the client to..

Discuss the proposed infrastructure.

So the conversation was something like this:

  • “Hi guys, we would prefer to go with Kubernetes because to handle stuff at that scale, we would need a unified control plane that Kubernetes gives us.”
  • "Yeah, sure, go for it."

And we were happy, but we still had a couple of questions:

  • “Could we, by any chance, host it on the public cloud?”
  • "Well, no, unfortunately. We are negotiating with European local governments and they tend to be squeamish about sending their data to the US. "

Okay, anyways, we can figure something out…

  • “But do the services need filesystem access?”
  • "Yes, they do."

Okay, crap! But we still needed to talk to the developers so all was not lost.

Let’s call the developers!

It turned out that what we were dealing with was a fairly typical microservice-based architecture, which consisted of a lot of services talking over HTTP and messaging queues.

Each service had its own database, and most of them stored some files in Minio.

kubernetes-app-architecture

In case you don’t know it, Minio is an object storage system that implements the S3 API.

Now that we knew the fine-grained architectural layout, we gathered a few more questions:

  • “Okay guys, can we move all the files to Minio?”
  • "Yeah, sure, easy peasy."

So, we were happy again, but there was still another problem, so we had to call the hosting providers:

  • “Hi guys, do you provide hosted Kubernetes?”
  • "Oh well, at this scale, we can manage to do that!"

So, we were happy again, but..

Just to make sure, we wanted to run the numbers!

Our target was to be able to run 60 000 schools on the platform in the beginning, so we had to see if our plans lined up with our limitations!

We shouldn’t have more than 150 000 total pods!

10 (pods/tenant) times 6 000 tenants is 60 000 pods. We’re good!

We shouldn’t have more than 300 000 total containers!

It’s one container per pod, so we’re still good.

We shouldn’t have more than 100 pods per node and no more than 5 000 nodes.

Well, what we have is 60 000 pods, and with roughly one tenant – about 10 pods – per node, that’s already 6 000 nodes. And that’s just the initial rollout, so we’re already over our 5 000 node limit.

kubernetes-limitations

Okay, well… Crap!

But, is there a solution to this?

Sure, it’s federation!

We could federate our Kubernetes clusters..

..and overcome these limitations.

We have worked with federated systems before, so Kubernetes surely provides something for that, riiight? Well yeah, it does… kind of.

There is the Federation v1 API, which is sadly deprecated.

kubernetes-federation-v1

Then we saw that Kubernetes Federation v2 is on the way!

It was still in alpha at the time when we were dealing with this issue, but the GitHub page said it was rapidly moving towards a beta release. By taking a look at the releases page, we realized that the beta had been overdue by half a year by then.

Since we only had a short period of time to pull this off, we really didn’t want to live that much on the edge.

So what could we do? We could federate by hand! But what does that mean?

In other words: what could have been gained by using KubeFed?

Having a lot of services would have meant that we needed a federated Prometheus and Logging (be it Graylog or ELK) anyway. So the two remaining aspects of the system were rollout / tenant generation, and manual intervention.

Manual intervention is tricky. To make it easy, you need a unified control plane where you can eyeball and modify anything. We could have built a custom one that gathers all information from the clusters and proxies all requests to each of them. However, that would have meant a lot of work, which we just did not have the time for. And even if we had the time to do it, we would have needed to conduct a cost/benefit analysis on it.

The main factor in the decision if you need a unified control plane for everything is scale, or in other words, the number of different control planes to handle.

The original approach would have meant 6000 different planes. That’s just way too much to handle for a small team. But if we could bring it down to 20 or so, that could be bearable. In that case, all we need is an easy mind map that leads from services to their underlying clusters. The actual route would be something like:

Service -> Tenant (K8s Namespace) -> Cluster.

The Service -> Namespace mapping is provided by Kubernetes, so we needed to figure out the Namespace -> Cluster mapping.

This mapping is also necessary to reduce the cognitive overhead and the time spent digging around when an outage happens, so it needs to be easy to remember, while also providing a more or less uniform distribution of tenants across clusters. The most straightforward way seemed to be to base it on geography. I’m most familiar with Poland’s and Hungary’s geography, so let’s take them as an example.

Poland comprises 16 voivodeships, while Hungary comprises 19 counties as main administrative divisions. Each country’s capital stands out in population, so they have enough schools to get a cluster on their own. Thus it only makes sense to create clusters for each division plus the capital. That gives us 17 or 20 clusters.

So if we get back to our original 60 000 pods and the 5 000 node limit, we can see that 2 clusters would be enough to host them all, but that leaves us no room for either scaling or later expansion. If we spread them across 17 clusters – in the case of Poland, for example – that means around 3 500 pods and 350 nodes per cluster, which is still manageable.

This could be done in a similar fashion for any European country, but still needs some architecting when setting up the actual infrastructure. And when KubeFed becomes available (and somewhat battle tested) we can easily join these clusters into one single federated cluster.

Great, we have solved the problem of control planes for manual intervention. The only thing left was handling rollouts..

gitops-kubernetes-federation

As I mentioned before, several developer teams had been working on the services themselves, and each of them already had their own GitLab repos and CIs. They already built their own Docker images, so we simply needed a place to gather them all and roll them out to Kubernetes. So we created a GitOps repo where we stored the Helm charts, and set up a GitLab CI to build the actual releases, then deploy them.

From here on, it takes a simple loop over the clusters to update the services when necessary.

The other thing we needed to solve was tenant generation.

It was easy as well, because we just needed to create a CLI tool that takes the school’s name and its county or state.

tenant-generation-kubernetes

That designates the school’s target cluster, pushes the new tenant to our GitOps repo, and that basically triggers the same rollout as a new version.
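
Just to make the idea more tangible, here’s a toy sketch of what such a tool could look like – written in Dart only because that’s the language used later in this post; the real CLI, its language, and the actual cluster names are not part of this write-up:

// tenant_gen.dart – toy illustration only, the cluster names are made up
String clusterForDivision(String division) {
 // one cluster per administrative division, plus one for the capital
 const clusters = {
   "mazowieckie": "pl-mazowieckie",
   "warszawa": "pl-warszawa",
   // ...one entry per voivodeship / county
 };
 return clusters[division.toLowerCase()] ?? "pl-default";
}

void main(List<String> args) {
 // e.g. dart tenant_gen.dart "School #12" mazowieckie
 final school = args[0];
 final cluster = clusterForDivision(args[1]);
 print("tenant '$school' -> namespace in cluster $cluster");
 // ...from here, the real tool would commit the generated Helm values
 // to the GitOps repo, which triggers the rollout described above
}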

We were almost good to go, but there was still one problem: on-premises.

Although our hosting providers effectively acted as some kind of public cloud for us (or at least something we could treat as one), we were also targeting companies that want to educate their employees.

Huge corporations – like a bank – are just as squeamish about sending their data out to the public internet as governments are, if not more so..

So we needed to figure out a way to host this on servers within vaults completely separated from the public internet.

kubespray

In this case, we had two main modes of operation.

  • One was when a company just wanted a boxed product and didn’t really care about scaling it.
  • The other was when they expected it to be scaled, but they were prepared to handle that themselves.

In the second case, it was kind of a bring-your-own-database scenario, so you could set up the system in a way that it connects to your own database.

And in the first case, what we could do was package everything – including the databases – into one VM, into one Kubernetes cluster. But! I just wrote above that you probably shouldn’t use disks and shouldn’t have databases within your cluster, right?

However, in that case, we already had a working infrastructure.

Kubernetes provided us with infrastructure as code already, so it only made sense to use that as a packaging tool as well, and use Kubespray to just spray it to our target servers.

It wasn’t a problem to have disks and DBs within our cluster, because the targets were companies that didn’t want to scale it anyway.

So it’s not about scaling. It is mostly about packaging!

Previously I told you that you probably don’t want to do this on-premises, and that still holds! If that’s your main target, then you probably shouldn’t go with Kubernetes.

However, as our main target was somewhat of a public cloud, it wouldn’t have made sense to just recreate the whole thing – basically create a new product in a sense – for these kinds of servers.

So, as this was kind of a spin-off, it made sense here as well – as a packaging solution.

Basically, I’ve just given you a bullet point list to help you determine whether Kubernetes is for you or not, and then I just tore it apart and threw it into a basket.

And the reason for this is – as I also mentioned:

Cloud ops is difficult!

There aren’t really one-size-fits-all solutions, so basing your decision on checklists you see on the internet is definitely not a good idea.

We’ve seen it a lot of times: companies adopt Kubernetes because it seems to fit, but when they actually start working with it, it turns out to be overkill.

If you want to save yourself a year or two of headache, it’s a lot better to first ask an expert and spend a couple of hours or days going through and discussing your use cases.

In case you’re thinking about adopting Kubernetes, or getting the most out of it, don’t hesitate to reach out to us at info@risingstack.com, or by using the contact form below!

Case Study: Building a Mobile Game with Dart and Flutter

Hello, and welcome to the last episode of this Flutter series!

In the previous episodes, we looked at some basic Dart and Flutter concepts ranging from data structures and types, OOP and asynchrony to widgets, layouts, states, and props.

Alongside this course, I promised you (several times) that we’d build a fun mini-game in the last episode of this series – and the time has come.

drumrolls, please! gif

The game we’ll build: ShapeBlinder

The name of the project is shapeblinder.

Just a little fun fact: I already built this project in PowerPoint and Unity a few years ago. If you’ve read my previous, React Native-focused series, you may have noticed that the name is quite similar to the name of the project in that one (colorblinder), and that’s no coincidence: this project is a somewhat similar minigame, and it’s the next episode of that casual game series.

We always talk about how some people just have a natural affinity for coding, or how some people start to feel the code after some time. While a series can’t get you to that level by itself, we can write some code that we can physically feel working, so that’s what we’ll be aiming for.

The concept of this game is that there is a shape hidden on the screen. Tapping the hidden shape will trigger a gentle haptic feedback on iPhones and a basic vibration on Android devices. Based on where you feel the shape, you’ll be able to guess which one of the three possible shapes is hidden on the screen.

Before getting to code, I created a basic design for the project. I kept the feature set, the distractions on the UI, and the overall feeling of the app as simple and chic as possible. This means no colorful stuff, no flashy stuff, some gentle animations, no in-app purchases, no ads, and no tracking.

the design concept of our dart and flutter game

We’ll have a home screen, a game screen and a “you lost” screen. A title-subtitle group will be animated across these screens. Tapping anywhere on the home screen will start the game, and tapping anywhere on the lost screen will restart it. We’ll also have some data persistence for storing the high scores of the user.

The full source code is available on GitHub here. You can download the built application from both Google Play and App Store.

Now go play around with the game, and after that, we’ll get started! ✨

Initializing the project

First and foremost, I used the already discussed flutter create shapeblinder CLI command. Then, I deleted most of the code and created my usual go-to project structure for Flutter:

├── README.md
├── android
├── assets
├── build
├── ios
├── lib
│   ├── core
│   │   └── ...
│   ├── main.dart
│   └── ui
│       ├── screens
│       │   └── ...
│       └── widgets
│           └── ...
├── pubspec.lock
└── pubspec.yaml

Inside the lib, I usually create a core and a ui directory to separate the business logic from the UI code. Inside the ui dir, I also add a screens and widgets directory. I like keeping these well-separated – however, these are just my own preferences!

Feel free to experiment with other project structures on your own and see which one is the one you naturally click with. (The most popular project structures you may want to consider are MVC, MVVM, or BLoC, but the possibilities are basically endless!)

After setting up the folder structure, I usually set up the routing with some very basic empty screens. To achieve this, I created a few dummy screens inside the lib/ui/screens/.... A simple centered text widget with the name of the screen will do it for now:

// lib/ui/screens/Home.dart
 
import 'package:flutter/material.dart';
 
class Home extends StatelessWidget {
 @override
 Widget build(BuildContext context) {
   return Scaffold(
     body: Center(
       child: Text("home"),
     ),
   );
 }
}

Notice that I only used classes, methods, and widgets that we previously discussed. Just a basic StatelessWidget with a Scaffold so that our app has a body, and a Text wrapped with a Center. Nothing heavy there. I copied and pasted this code into the Game.dart and Lost.dart files too, so that I can set up the routing in the main.dart:

// lib/main.dart
 
import 'package:flutter/material.dart';
 
// import the screens we created in the previous step
import './ui/screens/Home.dart';
import './ui/screens/Game.dart';
import './ui/screens/Lost.dart';
 
// the entry point to our app
void main() {
 runApp(Shapeblinder());
}
 
class Shapeblinder extends StatelessWidget {
 @override
 Widget build(BuildContext context) {
   return MaterialApp(
     title: 'ShapeBlinder',
     // define the theme data
     // i only added the fontFamily to the default theme
     theme: ThemeData(
       primarySwatch: Colors.grey,
       visualDensity: VisualDensity.adaptivePlatformDensity,
       fontFamily: "Muli",
     ),
     home: Home(),
     // add in the routes
     // we'll be able to use them later in the Navigator.pushNamed method
     routes: <String, WidgetBuilder>{
       '/home': (BuildContext context) => Home(),
       '/game': (BuildContext context) => Game(),
       '/lost': (BuildContext context) => Lost(),
     },
   );
 }
}

Make sure that you read the code comments for some short inline explanations! Since we already discussed these topics, I don’t really want to take much time explaining these concepts from the ground up – we’re just putting them into practice to see how they work before you get your hands dirty with real-life projects.

Adding assets, setting up the font

You may have noticed that I threw in a fontFamily: “Muli” in the theme data. How do we add this font to our project? There are several ways: you could, for example, use the Google Fonts package, or manually add the font file to the project. While using the package may be handy for some, I prefer bundling the fonts together with the app, so we’ll add them manually.

The first step is to acquire the font files: in Flutter, .ttf is the preferred format. You can grab the Muli font this project uses from Google Fonts here.

(Update: the font has been removed from Google Fonts. You’ll be able to download it soon bundled together with other assets such as the app icon and the svgs, or you could also use a new, almost identical font by the very same author, Mulish).

Then, move the files somewhere inside your project. The assets/fonts directory is a perfect place for your font files – create it, move the files there and register the fonts in the pubspec.yaml:

flutter:
 fonts:
   - family: Muli
     fonts:
       - asset: assets/fonts/Muli.ttf
       - asset: assets/fonts/Muli-Italic.ttf
         style: italic

You can see that we were able to add the normal and italic versions in a single family: because of this, we won’t need to use altered font names (like “Muli-Italic”). After this – boom! You’re done. Since we previously specified the font in the app-level theme, we won’t need to refer to it anywhere else – every rendered text will use Muli from now on.
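
Just to illustrate how this plays out (this little snippet is mine, not part of the project code), any Text that asks for an italic style will now automatically resolve to the italic asset we registered within the same family:

Text(
 "a game with the lights off",
 style: TextStyle(
   // no fontFamily needed – the app-level theme already points to Muli,
   // and FontStyle.italic picks the asset registered with style: italic
   fontStyle: FontStyle.italic,
 ),
),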

Now, let’s add some additional assets and the app icon. We’ll have some basic shapes as SVGs that we’ll display on the bottom bar of the Game screen. You can grab every asset (including the app icon, font files, and svgs) from here. You can just unzip this and move it into the root of your project and expect everything to be fine.

Before being able to use your svgs in the app, you need to register them in the pubspec.yaml, just like you had to register the fonts:

flutter:
 uses-material-design: true
 
 assets:
   - assets/svg/tap.svg
 
   - assets/svg/circle.svg
   - assets/svg/cross.svg
   - assets/svg/donut.svg
   - assets/svg/line.svg
   - assets/svg/oval.svg
   - assets/svg/square.svg
 
 fonts:
   - family: Muli
     fonts:
       - asset: assets/fonts/Muli.ttf
       - asset: assets/fonts/Muli-Italic.ttf
         style: italic

And finally, to set up the launcher icon (the icon that shows up in the system UI), we’ll use a handy third-party package flutter_launcher_icons. Just add this package into the dev_dependencies below the normal deps in the pubspec.yaml:

dev_dependencies:
 flutter_launcher_icons: "^0.7.3"

…and then configure it, either in the pubspec.yaml or by creating a flutter_launcher_icons.yaml config file. A very basic configuration is going to be just enough for now:

flutter_icons:
 android: "launcher_icon"
 ios: true
 image_path: "assets/logo.png"

And then, you can just run the following commands, and the script will set up the launcher icons for both Android and iOS:

flutter pub get
flutter pub run flutter_launcher_icons:main

After installing the app either on a simulator, an emulator, or a connected real-world device with flutter run, you’ll see that the app icon and the font family are set.

You can use a small r in the CLI to reload the app and keep its state, and use a capital R to restart the application and drop its state. (This is needed when big changes are made in the structure. For example, a StatelessWidget gets converted into a stateful one; or when adding new dependencies and assets into your project.)

Building the home screen

Before jumping right into coding, I always like to take my time and plan out how I’ll build that specific screen based on the screen designs. Let’s have another, closer look at the designs I made before writing them codez:

the design concept of our dart and flutter game

We can notice several things that will affect the project structure:

  • The Home and the Lost screens look nearly identical to each other
  • All three screens have a shared Logo component with a title (shapeblinder / you lost) and a custom subtitle

So, let’s break down the Home and Lost screens a bit:

sb-design-home-breakdown

The first thing we’ll notice is that we’ll need to use a Column for the layout. (We may also think about the main and cross axis alignments – they are center and start, respectively. If you couldn’t have figured that out by yourself, don’t worry – you’ll slowly develop a feeling for it. Until then, you can always experiment with all the options you have until you find the one that fits.)

After that, we can notice the shared Logo or Title component and the shared Tap component. Also, the Tap component says “tap anywhere [on the screen] to start (again)”. To achieve this, we’ll wrap our layout in a GestureDetector so that the whole screen can respond to taps.

Let’s hit up Home.dart and start implementing our findings. First, we set the background color in the Scaffold to black:

return Scaffold(
     backgroundColor: Colors.black,

And then, we can just go on and create the layout in the body. As I already mentioned, I’ll first wrap the whole body in a GestureDetector. This is a very important step, because later on we can simply add an onTap property and navigate the user to the next screen from there.

Inside the GestureDetector, however, I still won’t be adding the Column widget. First, I’ll wrap it in a SafeArea widget. SafeArea is a handy widget that adds additional padding to the UI if needed because of the hardware (for example, because of a notch, a swipeable bottom bar, or a camera cut-out). Then, inside that, I’ll also add in a Padding so that the UI can breathe, and inside that, will live our Column. The widget structure looks like this so far:

Home
├── Scaffold
│   └── GestureDetector
│   │   └── SafeArea
│   │   │   └── Column

Oh, and by the way, just to flex with the awesome tooling of Flutter – you can always have a peek at what your widget structure looks like in the VS Code sidebar:

VS-Code-sidebar-widget-structure-helper

And this is how our code looks right now:

import 'package:flutter/material.dart';
 
class Home extends StatelessWidget {
 @override
 Widget build(BuildContext context) {
   return Scaffold(
     backgroundColor: Colors.black,
     body: GestureDetector(
       // tapping on empty spaces would not trigger the onTap without this
       behavior: HitTestBehavior.opaque,
       onTap: () {
         // navigate to the game screen
       },
       // SafeArea adds padding for device-specific reasons
       // (e.g. bottom draggable bar on some iPhones, etc.)
       child: SafeArea(
         child: Padding(
           padding: const EdgeInsets.all(40.0),
           child: Column(
             mainAxisAlignment: MainAxisAlignment.center,
             crossAxisAlignment: CrossAxisAlignment.start,
             children: <Widget>[
 
             ],
           ),
         ),
       ),
     ),
   );
 }
}

Creating a Layout template

And now, we have a nice frame or template for our screen. We’ll use the same template on all three screens of the app (except that on the Game screen we won’t include a GestureDetector), and in cases like this, I always like to create a nice template widget for my screens. I’ll call this widget Layout:

 // lib/ui/widgets/Layout.dart
import 'package:flutter/material.dart';
 
class Layout extends StatelessWidget {
 // passing named parameters with the ({}) syntax
 // the type is automatically inferred from the type of the variable
 // (in this case, the children prop will have a type of List<Widget>)
 Layout({this.children});
 
 final List<Widget> children;
 
 @override
 Widget build(BuildContext context) {
   return Scaffold(
     backgroundColor: Colors.black,
     // SafeArea adds padding for device-specific reasons
     // (e.g. bottom draggable bar on some iPhones, etc.)
     body: SafeArea(
       child: Padding(
         padding: const EdgeInsets.all(40.0),
         child: Column(
           mainAxisAlignment: MainAxisAlignment.center,
           crossAxisAlignment: CrossAxisAlignment.start,
           children: children,
         ),
       ),
     ),
   );
 }
}

Now, in the Home.dart, we can just import this layout and wrap it in a GestureDetector, and we’ll have the very same result that we had previously, but we saved tons of lines of code because we can reuse this template on all other screens:

import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
 
import "../widgets/Layout.dart";
 
class Home extends StatelessWidget {
 @override
 Widget build(BuildContext context) {
   return GestureDetector(
     // tapping on empty spaces would not trigger the onTap without this
     behavior: HitTestBehavior.opaque,
     onTap: () {
       // navigate to the game screen
     },
     child: Layout(
       children: <Widget>[
 
       ],
     ),
   );
 }
}

Oh, and remember this because it’s a nice rule of thumb: whenever you find yourself copying and pasting code from one widget to another, it’s time to extract that snippet into a separate widget. It really helps to keep spaghetti code away from your projects.

Now that the overall wrapper and the GestureDetector is done, there are only a few things left on this screen:

  • Implementing the navigation in the onTap prop
  • Building the Logo widget (with the title and subtitle)
  • Building the Tap widget (with that circle-ey svg, title, and subtitle)

Implementing navigation

Inside the GestureDetector, we already have an onTap property set up, but the method itself is empty as of now. To get started with it, we should just throw in a console.log, or, as we say in Dart, a print statement to see if it responds to our taps.

onTap: () {
 // navigate to the game screen
 print("hi!");
},

Now, if you run this code with flutter run, anytime you tap the screen, you’ll see “hi!” being printed out into the console. (You’ll see it in the CLI.)

That’s amazing! Now, let’s move forward to throwing in the navigation-related code. We already looked at navigation in the previous episode, and we already configured named routes in a previous step inside the main.dart, so we’ll have a relatively easy job now:

onTap: () {
 // navigate to the game screen
 Navigator.pushNamed(context, "/game");
},

And boom, that’s it! Tapping anywhere on the screen will navigate the user to the game screen. However, because both screens are empty, you won’t really notice anything – so let’s build the two missing widgets!

Building the Logo widget, Hero animation with text in Flutter

Let’s have another look at the Logo and the Tap widgets before we implement them:

sb-widgets-breakdown

We’ll start with the Logo widget because it’s easier to implement. First, we create an empty StatelessWidget:

// lib/ui/widgets/Logo.dart
import "package:flutter/material.dart";
 
class Logo extends StatelessWidget {
 
}

Then we define two properties, title and subtitle, with the method we already looked at in the Layout widget:

import "package:flutter/material.dart";
 
class Logo extends StatelessWidget {
 Logo({this.title, this.subtitle});
 
 final String title;
 final String subtitle;
 
 @override
 Widget build(BuildContext context) {
  
 }
}

And now, we can just return a Column from the build method, because we want to render two text widgets underneath each other.

@override
Widget build(BuildContext context) {
 return Column(
   crossAxisAlignment: CrossAxisAlignment.start,
   children: <Widget>[
     Text(
       title,
     ),
     Text(
       subtitle,
     ),
   ],
 );
}

And notice how we were able to just use title and subtitle even though they are properties of the widget. We’ll also add in some text styling, and we’ll be done for now – with the main body.

return Column(
  crossAxisAlignment: CrossAxisAlignment.start,
  children: <Widget>[
    Text(
      title,
      style: TextStyle(
        fontWeight: FontWeight.bold,
        fontSize: 34.0,
        color: Colors.white,
      ),
    ),
    Text(
      subtitle,
      style: TextStyle(
        fontSize: 24.0,
        // The Color.xy[n] gets a specific shade of the color
        color: Colors.grey[600],
        fontStyle: FontStyle.italic,
      ),
    ),
  ],
)

Now this is cool and good, and it matches what we wanted to accomplish – however, this widget could really use a nice finishing touch. Since this widget is shared between all of the screens, we could add a really cool Hero animation. The Hero animation is somewhat like the Magic Move in Keynote. Go ahead and watch this short Widget of The Week episode to know what a Hero animation is and how it works:

This is very cool, isn’t it? We’d imagine that just wrapping our Logo component in a Hero and passing a tag would be enough, and we’d be right, but the Text widget’s styling is a bit odd in this case. First, we should wrap the Column in a Hero and pass in a tag like the video said:

return Hero(
 tag: "title",
 transitionOnUserGestures: true,
 child: Column(
   crossAxisAlignment: CrossAxisAlignment.start,
   children: <Widget>[
     Text(
       title,
       style: TextStyle(
         fontWeight: FontWeight.bold,
         fontSize: 34.0,
         color: Colors.white,
       ),
     ),
     Text(
       subtitle,
       style: TextStyle(
         fontSize: 24.0,
         // The Color.xy[n] gets a specific shade of the color
         color: Colors.grey[600],
         fontStyle: FontStyle.italic,
       ),
     ),
   ],
 ),
);

But when the animation is happening, and the widgets are moving around, you’ll see that Flutter drops the font family and the Text overflows its container. So we’ll need to hack around Flutter with some additional components and theming data to make things work:

import "package:flutter/material.dart";
 
class Logo extends StatelessWidget {
 Logo({this.title, this.subtitle});
 
 final String title;
 final String subtitle;
 
 @override
 Widget build(BuildContext context) {
   return Hero(
     tag: "title",
     transitionOnUserGestures: true,
     child: Material(
       type: MaterialType.transparency,
       child: Container(
         width: MediaQuery.of(context).size.width,
         child: Column(
           crossAxisAlignment: CrossAxisAlignment.start,
           children: <Widget>[
             Text(
               title,
               style: TextStyle(
                 fontWeight: FontWeight.bold,
                 fontSize: 34.0,
                 color: Colors.white,
               ),
             ),
             Text(
               subtitle,
               style: TextStyle(
                 fontSize: 24.0,
                 // The Color.xy[n] gets a specific shade of the color
                 color: Colors.grey[600],
                 fontStyle: FontStyle.italic,
               ),
             ),
           ],
         ),
       ),
     ),
   );
 }
}

This code will ensure that the text has enough space even if the content changes between screens (which will of course happen), and that the font style doesn’t randomly change while in-flight (or while the animation is happening).

Now, we’re finished with the Logo component, and it will work and animate perfectly and seamlessly between screens.

Building the Tap widget, rendering SVGs in Flutter

The Tap widget will render an SVG, a text from the props, and the high score from the stored state underneath each other. We could start by creating a new widget in the lib/ui/widgets directory. However, we’ll come to a dead-end after writing a few lines of code as Flutter doesn’t have native SVG rendering capabilities. Since we want to stick with SVGs instead of rendering them into PNGs, we’ll have to use a 3rd party package, flutter_svg.

To install it, we just simply add it to the pubspec.yaml into the dependencies:

dependencies:
 flutter:
   sdk: flutter
 
 cupertino_icons: ^0.1.3
 flutter_svg: any

And after saving the file, VS Code will automatically run flutter pub get and thus install the dependencies for you. Another great example of the powerful Flutter developer tooling!

Now, we can just create a file under lib/ui/widgets/Tap.dart, import this dependency, and expect things to be going fine. If you were already running an instance of flutter run, you’ll need to restart the CLI when adding new packages (by hitting Ctrl-C to stop the current instance and running flutter run again):

// lib/ui/widgets/Tap.dart
 
import "package:flutter/material.dart";
// import the dependency
import "package:flutter_svg/flutter_svg.dart";

We’ll just start out with a simple StatelessWidget for now, but we’ll refactor this widget later, after we’ve implemented storing the high scores! Until then, we only need to think about the layout: it’s a Column because the children are underneath each other, but we wrap it in a Center so that it’s centered on the screen:

import "package:flutter/material.dart";
// import the dependency
import "package:flutter_svg/flutter_svg.dart";
 
class Tap extends StatelessWidget {
 @override
 Widget build(BuildContext context) {
   return Center(
     child: Column(
       children: <Widget>[
        
       ],
     ),
   );
 }
}

Now, you may be wondering: setting crossAxisAlignment: CrossAxisAlignment.center in the Column would center the children of the Column, so why the Center widget?

The crossAxisAlignment only aligns children inside its parent’s bounds, but the Column doesn’t fill up the screen width. (You could, however, achieve this by using the Flexible widget, but that would have some unexpected side effects.)

On the other hand, Center aligns its children to the center of the screen. To understand why we need the Center widget and why setting crossAxisAlignment to center isn’t just enough, I made a little illustration:

sb-alignment-breakdown
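
If you prefer code to diagrams, here’s a minimal sketch of the same idea (a standalone example of mine, not part of the game code):

// the Column is only as wide as its widest child, so this centers the
// children inside that narrow box – not on the screen:
Column(
 crossAxisAlignment: CrossAxisAlignment.center,
 children: <Widget>[
   Text("short"),
   Text("a somewhat longer text"),
 ],
),

// wrapping the whole Column in a Center is what actually positions it
// in the middle of the available width:
Center(
 child: Column(
   children: <Widget>[
     Text("short"),
     Text("a somewhat longer text"),
   ],
 ),
),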

Now that this is settled, we can define the properties of this widget:

 Tap({this.title});
 final String title;

And move on to building the layout. First comes the SVG – the flutter_svg package exposes an SvgPicture.asset method that will return a Widget and hence can be used in the widget tree, but that widget will always try to fill up its ancestor, so we need to restrict the size of it. We can use either a SizedBox or a Container for this purpose. It’s up to you:

Container(
 height: 75,
 child: SvgPicture.asset(
   "assets/svg/tap.svg",
   semanticsLabel: 'tap icon',
 ),
),
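
If you’d rather go with the SizedBox, the equivalent would look like this – just the alternative spelled out, it behaves the same here:

SizedBox(
 height: 75,
 child: SvgPicture.asset(
   "assets/svg/tap.svg",
   semanticsLabel: 'tap icon',
 ),
),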

And we’ll just render the two other texts (the one that comes from the props and the best score) underneath each other, leaving us with this code:

import "package:flutter/material.dart";
// import the dependency
import "package:flutter_svg/flutter_svg.dart";
 
class Tap extends StatelessWidget {
 Tap({this.title});
 final String title;
 
 @override
 Widget build(BuildContext context) {
   return Center(
     child: Column(
       children: <Widget>[
         Container(
           height: 75,
           child: SvgPicture.asset(
             "assets/svg/tap.svg",
             semanticsLabel: 'tap icon',
           ),
         ),
         // give some space between the illustration and the text:
         Container(
           height: 14,
         ),
         Text(
           title,
           style: TextStyle(
             fontSize: 18.0,
             color: Colors.grey[600],
           ),
         ),
         Text(
           "best score: 0",
           style: TextStyle(
             fontSize: 18.0,
             color: Colors.grey[600],
             fontStyle: FontStyle.italic,
           ),
         ),
       ],
     ),
   );
 }
}

Always take your time examining the code examples provided, as you’ll soon start writing code just like this.

Putting it all together into the final Home screen

Now that both widgets are ready to be used on our Home and Lost screens, we should get back to the Home.dart and start putting them together into a cool screen.

First, we should import these classes we just made:

// lib/ui/screens/Home.dart
 
import "../widgets/Layout.dart";
// ADD THIS:
import "../widgets/Logo.dart";
import "../widgets/Tap.dart";

And inside the Layout, we already have a blank children list – we should just fill it up with our new, shiny components:

import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
 
import "../widgets/Layout.dart";
import "../widgets/Logo.dart";
import "../widgets/Tap.dart";
 
class Home extends StatelessWidget {
 @override
 Widget build(BuildContext context) {
   return GestureDetector(
     // tapping on empty spaces would not trigger the onTap without this
     behavior: HitTestBehavior.opaque,
     onTap: () {
       // navigate to the game screen
       HapticFeedback.lightImpact();
       Navigator.pushNamed(context, "/game");
     },
     child: Layout(
       children: <Widget>[
         Logo(
           title: "shapeblinder",
           subtitle: "a game with the lights off",
         ),
         Tap(
           title: "tap anywhere to start",
         ),
       ],
     ),
   );
 }
}

And boom! After reloading the app, you’ll see that the new widgets are on-screen. There’s only one more thing left: the alignment is a bit off on this screen, and it doesn’t really match the design. Because of that, we’ll add in some Spacers.

In Flutter, a Spacer is your <div style={{ flex: 1 }}/>, except that they are not considered to be a weird practice here. Their sole purpose is to fill up every pixel of empty space on a screen, and we can also provide them a flex value if we want one Spacer to be larger than another.

In our case, this is exactly what we need: we’ll need one large spacer before the logo and a smaller one after the logo:

Spacer(
 flex: 2,
),
// add hero cross-screen animation for title
Logo(
 title: "shapeblinder",
 subtitle: "a game with the lights off",
),
Spacer(),
Tap(
 title: "tap anywhere to start",
),

And this will push everything into place.

Building the Lost screen, passing properties to screens in Flutter with Navigator

Because the layout of the Lost screen is nearly an exact copy of the Home screen, with only some small differences here and there, we’ll just copy and paste the Home.dart into the Lost.dart and modify it like this:

class Lost extends StatelessWidget {
 @override
 Widget build(BuildContext context) {
   return GestureDetector(
     behavior: HitTestBehavior.opaque,
     onTap: () {
       // navigate to the game screen
       Navigator.pop(context);
     },
     child: Layout(
       children: <Widget>[
         Spacer(
           flex: 2,
         ),
         Logo(
           title: "you lost",
           subtitle: "score: 0",
         ),
         Spacer(),
         Tap(
           title: "tap anywhere to start again",
         ),
       ],
     ),
   );
 }
}

However, this just won’t be enough for us now. As you can see, there is a hard-coded “score: 0” on the screen. We want to pass the score as a prop to this screen, and display that value here.

To pass properties to a named route in Flutter, you should create an arguments class. In this case, we’ll name it LostScreenArguments. Because we only want to pass an integer (the points of the user), this class will be relatively simple:

// passing props to this screen with arguments
// you'll need to construct this class in the sender screen, too
// (in our case, the Game.dart)
class LostScreenArguments {
 final int points;
 
 LostScreenArguments(this.points);
}

And we can extract the arguments inside the build method:

@override
Widget build(BuildContext context) {
 // extract the arguments from the previously discussed class
 final LostScreenArguments args = ModalRoute.of(context).settings.arguments;
 // you'll be able to access it by: args.points

And just use the ${...} string interpolation method in the Text widget to display the score from the arguments:

Logo(
 title: "you lost",
 // string interpolation with the ${} syntax
 subtitle: "score: ${args.points}",
),

And boom, that’s all the code needed for receiving arguments on a screen! We’ll look into passing them later on when we are building the Game screen…

Building the underlying Game logic

…which we’ll start right now. So far, this is what we’ve built and what we didn’t implement yet:

  • ✅ Logo widget
    • ✅ Hero animation
  • ✅ Tap widget
    • ✅ Rendering SVGs
  • ✅ Home screen
  • ✅ Lost screen
    • ✅ Passing props
  • Underlying game logic
  • Game screen
  • Drawing shapes
  • Using haptic feedback
  • Storing high scores – persistent data

So there’s still a lot to learn! First, we’ll build the underlying game logic and classes. Then, we’ll build the layout for the Game screen. After that, we’ll draw shapes on the screen that will be tappable. We’ll hook them into our logic, add in haptic feedback, and after that, we’ll just store and retrieve the high scores, test the game on a real device, and our game is going to be ready for production!

The underlying game logic will pick three random shapes for the user to show, and it will also pick one correct solution. To pass around this generated data, first, we’ll create a class named RoundData inside the lib/core/RoundUtilities.dart:

class RoundData {
 List<String> options;
 int correct;
 
 RoundData({this.options, this.correct});
}

Inside the assets/svg directory, we have some shapes lying around. We’ll store the names of the files in an array of strings so that we can pick random strings from this list:

// import these!!
import 'dart:core';
import 'dart:math';
 
class RoundData {
 List<String> options;
 int correct;
 
 RoundData({this.options, this.correct});
}
 
// watch out - new code below!
Random random = new Random();
 
// the names represent all the shapes in the assets/svg directory
final List<String> possible = [
 "circle",
 "cross",
 "donut",
 "line",
 "oval",
 "square"
];

And notice that I also created a new instance of the Random class and imported a few native Dart libraries. We can use this random variable to get new random numbers between two values:

// this will generate a new random int between 0 and 5
random.nextInt(5);

The nextInt’s upper bound is exclusive, meaning that the code above can result in 0, 1, 2, 3, and 4, but not 5.

To get a random item from an array, we can combine the .length property with this random number generator method:

int randomItemIndex = random.nextInt(array.length);

Then, I’ll write a method that will return a RoundData instance:

RoundData generateRound() {
 // new temporary possibility array
 // we can remove possibilities from it
 // so that the same possibility doesn't come up twice
 List<String> temp = possible.map((item) => item).toList();
 
 // we'll store possibilities in this array
 List<String> res = new List<String>();
 
 // add three random shapes from the temp possibles to the options
 for (int i = 0; i < 3; i++) {
   // get random index from the temporary array
   int randomItemIndex = random.nextInt(temp.length);
 
   // add the randomth item of the temp array to the results
   res.add(temp[randomItemIndex]);
 
   // remove possibility from the temp array so that it doesn't come up twice
   temp.removeAt(randomItemIndex);
 }
 
 // create new RoundData instance that we'll be able to return
 RoundData data = RoundData(
   options: res,
   correct: random.nextInt(3),
 );
 
 return data;
}

Take your time reading the code with the comments and make sure that you understand the hows and whys.
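
If you’d like to sanity-check the generator before wiring it into the UI, a quick throwaway main works well – this is just a scratch snippet of mine, so delete it afterwards:

// temporarily drop this at the bottom of lib/core/RoundUtilities.dart
// (the file only uses dart:core and dart:math at this point, so you can
// run it with: dart lib/core/RoundUtilities.dart)
void main() {
 for (int i = 0; i < 5; i++) {
   RoundData round = generateRound();
   print("options: ${round.options}, correct: ${round.options[round.correct]}");
 }
}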

Game screen

Now that we have the underlying game logic in the lib/core/RoundUtilities.dart, let’s navigate back into the lib/ui/screens/Game.dart and import the utilities we just created:

import 'package:flutter/material.dart';
 
// import this:
import '../../core/RoundUtilities.dart';
import "../widgets/Layout.dart";
import "../widgets/Logo.dart";

And since we’d like to update this screen regularly (whenever a new round is generated), we should convert the Game class into a StatefulWidget. We can achieve this with a VS Code shortcut (right-click on class definition > Refactor… > Convert to StatefulWidget):

class Game extends StatefulWidget {
 @override
 _GameState createState() => _GameState();
}
 
class _GameState extends State<Game> {
 @override
 Widget build(BuildContext context) {
   return Layout(
     children: <Widget>[
       Logo(
         title: "shapeblinder",
         subtitle: "current score: 0 | high: 0",
       ),
     ],
   );
 }
}

And now, we’ll build the layout. Let’s take a look at the mock for this screen:

sb-game-design-breakdown

Our screen already contains the shared Logo widget, and we’ll work with drawing shapes a bit later, so we’ll only have to cover

  • Proper spacing with Spacers
  • Creating a container for our shape
  • Drawing the three possible shapes on the bottom of the screen
  • Hooking them up to a tap handler
  • If the guess is correct, show a SnackBar and create a new round
  • If the guess is incorrect, end the session and navigate the user to the lost screen

Initializing data flow

So let’s get started! First, I’ll define the variables inside the state. Since this is a StatefulWidget, we can just define some variables inside the State and expect them to be hooked up to Flutter’s inner state management engine.

I’d also like to give them some initial values, so I’ll create a reset method. It will set the points to zero and create a new round with the generator we created previously. We’ll run this method when the initState method runs, so that the screen is initialized with game data:

class _GameState extends State<Game> {
 RoundData data;
 int points = 0;
 int high = 0;
 final GlobalKey scaffoldKey = GlobalKey();
 
// the initState method is run by Flutter when the widget is inserted into the tree for the first time
// it's like componentDidMount in React
 @override
 void initState() {
   reset();
   super.initState();
 }
 
 void reset() {
   setState(() {
     points = 0;
     data = generateRound();
   });
 }
 
 ...

And now, we can move on to defining our layout:

Initializing the UI

Now that we have some data we can play around with, we can create the overall layout of this screen. First, I’ll create a runtime constant (or a final) I’ll call width. It will contain the available screen width:

@override
Widget build(BuildContext context) {
 final width = MediaQuery.of(context).size.width;

I can use this to create a perfect square container for the shape that we’ll render later:

Container(
 height: width / 1.25,
 width: width / 1.25,
),

After this comes a simple centered text:

Center(
 child: Text(
   "select the shape that you feel",
   style: TextStyle(
     fontSize: 18.0,
     color: Colors.grey[600],
     fontStyle: FontStyle.italic,
   ),
 ),
),

And we’ll draw out the three possible shapes in a Row because they are positioned next to each other. First, I’ll just define the container:

Row(
 mainAxisAlignment: MainAxisAlignment.spaceBetween,
 children: <Widget>[   
  
 ],
),

And we can use the state’s RoundData instance, data, to know which three possible shapes we need to render out. We can just simply map over it and use the spread operator to pass the results into the Row:

...data.options.map(
 (e) => Container(
   height: width / 5,
   width: width / 5,
   child: GestureDetector(
     onTap: () => guess(context, e),
     child: SvgPicture.asset(
       "assets/svg/$e.svg",
       semanticsLabel: '$e icon',
     ),
   ),
 ),
),

This will map over the three possibilities in the state, render their corresponding icons in a sized container, and add a GestureDetector to it so that we can know when the user taps on the shape (or when the user makes a guess). For the guess method, we’ll pass the current BuildContext and the name of the shape the user had just tapped on. We’ll look into why the context is needed in a bit, but first, let’s just define a boilerplate void and print out the name of the shape the user tapped:

void guess(BuildContext context, String name) {
 print(name);
}

Now, we can determine if the guess is correct or not by comparing this string to the one under data.options[data.correct]:

void guess(BuildContext context, String name) {
 if (data.options[data.correct] == name) {
   // correct guess!
   correctGuess(context);
 } else {
   // wrong guess
   lost();
 }
}

And we should also create a correctGuess and a lost handler:

void correctGuess(BuildContext context) {
 // show snackbar
 Scaffold.of(context).showSnackBar(
   SnackBar(
     backgroundColor: Colors.green,
     duration: Duration(seconds: 1),
     content: Column(
       mainAxisAlignment: MainAxisAlignment.center,
       crossAxisAlignment: CrossAxisAlignment.center,
       children: <Widget>[
         Icon(
           Icons.check,
           size: 80,
         ),
         Container(width: 10),
         Text(
           "Correct!",
           style: TextStyle(
             fontSize: 24,
             fontWeight: FontWeight.bold,
           ),
         ),
       ],
     ),
   ),
 );
 
 // add one point, generate new round
 setState(() {
   points++;
   data = generateRound();
 });
}
 
void lost() {
 // navigate the user to the lost screen
 Navigator.pushNamed(
   context,
   "/lost",
   // pass arguments with this constructor:
   arguments: LostScreenArguments(points),
 );
 
 // reset the game so that when the user comes back from the "lost" screen,
 // a new, fresh round is ready
 reset();
}

There’s something special about the correctGuess block: the Scaffold.of(context) will look up the Scaffold widget in the context. However, the context we are currently passing comes from the build(BuildContext context) line, and that context doesn’t contain a Scaffold yet. We can create a new BuildContext by either extracting the widget into another widget (which we won’t be doing now), or by wrapping the widget in a Builder.

So I’ll wrap the Row with the icons in a Builder and I’ll also throw in an Opacity so that the icons have a nice gray color instead of being plain white:

Builder(
 builder: (context) => Opacity(
   opacity: 0.2,
   child: Row(
     mainAxisAlignment: MainAxisAlignment.spaceBetween,
     children: <Widget>[
       ...data.options.map(

And now, when tapping on the shapes on the bottom, the user will either see a full-screen green snackbar with a check icon and the text “Correct!”, or find themselves on the “Lost” screen. Great! Now, there’s only one thing left before we can call our app a game – drawing the tappable shape on the screen.

Drawing touchable shapes in Flutter

Now that we have the core game logic set up and we have a nice Game screen we can draw on, it’s time to get dirty with drawing on a canvas. Whilst we could use Flutter’s native drawing capabilities, we’d lack a very important feature – interactivity.

Lucky for us, there’s a package that, despite having somewhat limited drawing capabilities, has support for interactivity – and it’s called touchable. Let’s just add it into our dependencies in the pubspec.yaml:

touchable: any

And now, a few words about how we’re going to achieve drawing shapes. I’ll create some custom painters inside lib/core/shapepainters. They will extend Flutter’s CustomPainter class and draw on a TouchyCanvas coming from the touchable library. Each of these painters will be responsible for drawing a single shape (e.g. a circle, a line, or a square). I won’t be inserting the code required for all of them inside the article. Instead, you can check it out inside the repository here.

Then, inside the RoundUtilities.dart, we’ll have a method that will return the corresponding painter for the string name of it – e.g. if we pass “circle”, we’ll get the Circle CustomPainter.

We’ll be able to use this method in the Game screen: we’ll pass its result to a CustomPaint widget wrapped in the CanvasTouchDetector coming from the touchable package. Together, they will paint the shape on a canvas and add the required interactivity.

Creating a CustomPainter

Let’s get started! First, let’s look at one of the CustomPainters (the other ones only differ in the type of shape they draw on the canvas, so we won’t look into them here). We’ll initialize an empty CustomPainter with the default methods and two properties, context and onTap:

import 'package:flutter/material.dart';
import 'package:touchable/touchable.dart';
 
class Square extends CustomPainter {
 final BuildContext context;
 final Function onTap;
 
 Square(this.context, this.onTap);
 
 @override
 void paint(Canvas canvas, Size size) {
 }
 
 @override
 bool shouldRepaint(CustomPainter oldDelegate) {
   return false;
 }
}

We’ll use the context later when creating the canvas, and the onTap will be the tap handler for our shape. Now, inside the paint overridden method, we can create a TouchyCanvas coming from the package:

var myCanvas = TouchyCanvas(context, canvas);

And draw on it with the built-in methods:

myCanvas.drawRect(
 Rect.fromLTRB(
   0,
   0,
   MediaQuery.of(context).size.width / 1.25,
   MediaQuery.of(context).size.width / 1.25,
 ),
 Paint()..color = Colors.transparent,
 onTapDown: (tapdetail) {
   onTap();
 },
);

This will create a simple rectangle. The arguments in the Rect.fromLTRB define the coordinates of the two points between which the rect will be drawn. It’s 0, 0 and width / 1.25, width / 1.25 for our shape – this will fill in the container we created on the Game screen.

We also pass a transparent color (so that the shape is hidden) and an onTapDown, which will just run the onTap property which we pass. Noice!

brooklyn nine-nine noice gif

This is it for drawing our square shape. I created the other CustomPainter classes that we’ll need for drawing the circle, cross, donut, line, and oval shapes. You could either try to implement them yourself, or just copy and paste them from the repository here.

Drawing the painter on the screen

Now that our painters are ready, we can move on to the second step: the getPainterForName method. First, I’ll import all the painters into the RoundUtilities.dart:

import 'shapepainters/Circle.dart';
import 'shapepainters/Cross.dart';
import 'shapepainters/Donut.dart';
import 'shapepainters/Line.dart';
import 'shapepainters/Oval.dart';
import 'shapepainters/Square.dart';

And then just write a very simple switch statement that will return the corresponding painter for the input string:

dynamic getPainterForName(BuildContext context, Function onTap, String name) {
 switch (name) {
   case "circle":
     return Circle(context, onTap);
   case "cross":
     return Cross(context, onTap);
   case "donut":
     return Donut(context, onTap);
   case "line":
     return Line(context, onTap);
   case "oval":
     return Oval(context, onTap);
   case "square":
     return Square(context, onTap);
 }
}

And that’s it for the utilities! Now, we can move back into the Game screen and use this getPainterForName utility and the canvas to draw the shapes on the screen:

Container(
 height: width / 1.25,
 width: width / 1.25,
 child: CanvasTouchDetector(
   builder: (context) {
     return CustomPaint(
       painter: getPainterForName(
         context,
         onShapeTap,
         data.options[data.correct],
       ),
     );
   },
 ),
),

And that’s it! We only need to create an onShapeTap handler to get all these things working – for now, it’s okay to just throw in a print statement, and we’ll add the haptic feedbacks and the vibrations later on:

void onShapeTap() {
 print(
   "the user has tapped inside the shape. we should make a gentle haptic feedback!",
 );
}

And now, when you tap on the shape inside the blank space, the Flutter CLI will print this message to the console. Awesome! We only need to add the haptic feedback, store the high scores, and wrap things up.

Adding haptic feedback and vibration in Flutter

When making mobile applications, you should always aim for designing native experiences on both platforms. That means using different designs for Android and iOS, and using the platform’s native capabilities like Google Pay / Apple Pay or 3D Touch. To be able to think about which designs and experiences feel native on different platforms, you should use both platforms while developing, or at least be able to try them out from time to time.

One of the places where Android and iOS devices differ is how they handle vibrations. While Android has a basic vibration capability, iOS comes with a very extensive haptic feedback engine that enables creating gentle hit-like feedback, with custom intensities, curves, mimicking the 3D Touch effect, tapback and more. It helps the user feel their actions, taps, and gestures, and as a developer, adding some gentle haptic feedback is a very nice finishing touch. It will make your app feel native and improve the overall experience.

Some places where you can try out this advanced haptic engine on an iPhone (6s or later) are the home screen when 3D Touching an app, the Camera app when taking a photo, the Clock app when picking out an alarm time (or any other carousel picker), some iMessage effects, or on notched iPhones, when opening the app switcher from the bottom bar. Other third party apps also feature gentle physical feedback: for example, the Telegram app makes a nice and gentle haptic feedback when sliding for a reply.

Before moving on with this tutorial, you may want to try out this effect to get a feeling of what we are trying to achieve on iOS – and make sure that you are holding the device in your whole palm so that you can feel the gentle tapbacks.

In our app, we’d like to add these gentle haptic feedbacks in a lot of places: when navigating, making a guess, or, obviously, when tapping inside the shape. On Android, we’ll only leverage the vibration engine when the user taps inside a shape or loses.

And since we’d like to execute different code based on which platform the app is currently running on, we need a way to check the current platform at runtime. Lucky for us, dart:io provides a Platform API that we can query to check whether the current platform is iOS or Android. We can use the HapticFeedback API from flutter/services.dart to call the native haptic feedback and vibration APIs:

// lib/core/HapticUtilities.dart
 
import 'dart:io' show Platform;
import 'package:flutter/services.dart';
 
void lightHaptic() {
 if (Platform.isIOS) {
   HapticFeedback.lightImpact();
 }
}
 
void vibrateHaptic() {
 if (Platform.isIOS) {
   HapticFeedback.heavyImpact();
 } else {
   // this will work on most Android devices
   HapticFeedback.vibrate();
 }
}

And we can now import this file on other screens and use the lightHaptic and vibrateHaptic methods to make haptic feedback for the user that works on both platforms that we’re targeting:

// lib/ui/screens/Game.dart
import '../../core/HapticUtilities.dart'; // ADD THIS LINE
 
...
 
void guess(BuildContext context, String name) {
   lightHaptic(); // ADD THIS LINE
 
...
 
void lost() {
   vibrateHaptic(); // ADD THIS LINE
 
...
 
Container(
 height: width / 1.25,
 width: width / 1.25,
 child: CanvasTouchDetector(
   builder: (context) {
     return CustomPaint(
       painter: getPainterForName(
         context,
         vibrateHaptic, // CHANGE THIS LINE
 

And on the Home and Lost screens:

// Home.dart
return GestureDetector(
 // tapping on empty spaces would not trigger the onTap without this
 behavior: HitTestBehavior.opaque,
 onTap: () {
   // navigate to the game screen
   lightHaptic(); // ADD THIS LINE
   Navigator.pushNamed(context, "/game");
 },
 
...
 
// Lost.dart
return GestureDetector(
 behavior: HitTestBehavior.opaque,
 onTap: () {
   // navigate to the game screen
   lightHaptic(); // ADD THIS LINE
   Navigator.pop(context);
 },

…aaaaand you’re done for iOS! On Android, there’s still a small thing required – you need permission to use the vibration engine, and you can request it from the system in shapeblinder/android/app/src/main/AndroidManifest.xml:

<manifest ...>
 <uses-permission android:name="android.permission.VIBRATE"/>
 ...

Now when running the app on a physical device, you’ll feel either the haptic feedback or the vibration, depending on what kind of device you’re using. Isn’t it amazing? You can literally feel your code!

feel it gif

Storing high scores – data persistency in Flutter

There’s just one new feature left before we finish the MVP of this awesome game. The users are now happy – they can feel a sense of accomplishment when they guess right, and they get points, but they can’t really flex their highest score to their friends as we don’t store it. We should fix this by storing persistent data in Flutter! ?

To achieve this, we’ll use the shared_preferences package. It can store simple key/value pairs on the device. You should already know what to do with this dependency: go into pubspec.yaml, add it into the deps, wait until VS Code runs the flutter pub get command automatically or run it by yourself, and then restart the current Flutter session by hitting Ctrl + C and running flutter run again.
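If you’re unsure about that step, here’s a minimal sketch of the relevant part of the pubspec.yaml – the version number is just an example, pick the current one from pub.dev:

dependencies:
  flutter:
    sdk: flutter
  # example version – check pub.dev for the latest release
  shared_preferences: ^0.5.8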

Now that the shared_preferences package is injected, we can start using it. The package has two methods that we’ll make use of now: .getInt() and .setInt(). This is how we’ll implement them:

  • We’ll store the high score when the user loses the game
  • We’ll retrieve it in the Tap widget, and on the Game screen

Let’s get started by storing the high score! Inside the lib/ui/screens/Game.dart, we’ll create two methods: loadHigh and setHigh:

void loadHigh() async {
 SharedPreferences prefs = await SharedPreferences.getInstance();
 
 setState(() {
   high = prefs.getInt('high') ?? 0;
 });
}
 
void setHigh(int pts) async {
 SharedPreferences prefs = await SharedPreferences.getInstance();
 prefs.setInt('high', pts);
 
 setState(() {
   high = pts;
 });
}

And because we’re displaying the high score in the Logo widget, we’ll want to call setState when the score is updated – so that the widget gets re-rendered with our new data. We’ll also want to call the loadHigh when the screen gets rendered the first time – so that we’re displaying the actual stored high score for the user:

// the initState method is run by Flutter when the widget is rendered for the first time
// it's like componentDidMount in React
@override
void initState() {
 reset();
 loadHigh(); // ADD THIS
 super.initState();
}

And when the user loses, we’ll store the high score:

 void lost() {
   vibrateHaptic();
 
   // if the score is higher than the current high score,
   // update the high score
   if (points > high) {
     setHigh(points);
   }
 
   ...

And that’s it for the game screen! We’ll also want to load the high score on the Tap widget, which – currently – is a StatelessWidget. First, let’s refactor the Tap widget into a StatefulWidget by right-clicking on the name of the class, hitting “Refactor…”, and then “Convert to StatefulWidget”.

Then, define the state variables and use the very same methodology we already looked at to load the high score and update the state:

class _TapState extends State<Tap> {
 int high = 0;
 
 void loadHigh() async {
   SharedPreferences prefs = await SharedPreferences.getInstance();
 
   setState(() {
     high = prefs.getInt('high') ?? 0;
   });
 }

Then, call this loadHigh method inside the build so that the widget always shows the latest high score:

@override
Widget build(BuildContext context) {
 loadHigh();
 
 return Center(
   ...

Oh, and we should also replace the hard-coded “best score: 0” texts with the actual variable that holds the high score:

Text(
 "best score: $high",

Make sure that you update your code both in the Game and the Tap widgets. We’re all set with storing and displaying the high score, so there’s only one thing left:

Summing our Dart and Flutter series up

Congratulations! ? I can’t really put into words how far we’ve come in the whole Dart and Flutter ecosystem in these three episodes together:

  • First, we looked at Dart and OOP: We looked at variables, constants, functions, arrays, objects, object-oriented programming, and asynchrony, and compared these concepts to what we’ve seen in JavaScript.
  • Then, we started with some Flutter theory: We took a peek at the Flutter CLI, project structuring, state management, props, widgets, layouts, rendering lists, theming, and proper networking.
  • Then we created a pretty amazing game together: We built a cross-platform game from scratch. We mastered the Hero animation, basic concepts about state management, importing third-party dependencies, building multiple screens, navigating, storing persistent data, adding vibration, and more…

I really hope you enjoyed this course! If you have any questions, feel free to reach out in the comments section. It was a lot to take in, but there’s still even more to learn! If you want to stay tuned, subscribe to our newsletter – and make sure that you check out the official Dart and Flutter resources later on your development journey.

I’m excited to see what you all will build with this awesome tool. Happy Fluttering!

All the best, ❤️
Daniel from RisingStack

Flutter Crash Course for JavaScript Developers

Welcome! I’m glad you’re here again for some more Dart and Flutter magic.

✨ In the previous episode of this series, we looked at Dart and went from basically zero to hero with all those types, classes and asynchrony. I hope you had enough practice on Dart because today, we’ll move forward to Flutter. Let’s get started!

Quick heads up: from now on, the “?” sign will compare JS and React examples with their Dart and Flutter equivalents. Just like in the previous episode, the left side will be the JS/React, and the right side will be the Dart/Flutter equivalent, e.g. console.log("hi!"); ? print("hello!");

What is Flutter, and why we’ll use it

Flutter and Dart are both made by Google. While Dart is a programming language, Flutter is a UI toolkit that can compile to native Android and iOS code. Flutter has experimental web and desktop app support, and it’s the native framework for building apps for Google’s Fuchsia OS.

This means that you don’t need to worry about the platform, and you can focus on the product itself. The compiled app is always native code, as Dart compiles to ARM, providing you the best cross-platform performance you can get right now, with over 60 fps.

Flutter also helps the fast development cycle with stateful hot reload, which we’ll make use of mostly in the last episode of this series.

Intro to the Flutter CLI

When building apps with Flutter, one of the main tools on your belt is the Flutter CLI. With the CLI, you can create new Flutter projects, run tests on them, build them, and run them on your simulators or emulators. The CLI is available on Windows, Linux, macOS, and x64-based ChromeOS systems.

Once you have the CLI installed, you’ll also need either Android Studio, Xcode, or both, depending on your desired target platform(s).

(Flutter is also available on the web and for desktop, but they are still experimental, so this tutorial will only cover the Android and iOS related parts).

If you don’t wish to use Android Studio for development, I recommend VSCode. You can also install the Dart and Flutter plugins for Visual Studio Code.

Once you’re all set with all this new software, you should be able to run flutter doctor. This utility will check if everything is working properly on your machine. At the time of writing, Flutter printed this into the console for me:

[✓] Flutter (Channel stable, v1.17.4, on Mac OS X 10.15.4 19E287, locale en-HU)

[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
[✓] Xcode - develop for iOS and macOS (Xcode 11.5)
[!] Android Studio (version 3.5)
    ✗ Flutter plugin not installed; this adds Flutter specific functionality.
    ✗ Dart plugin not installed; this adds Dart specific functionality.
[✓] VS Code (version 1.46.1)
[!] Connected device
    ! No devices available

You should get similar results at least for the Flutter part. Everything else depends on your desired target platforms and your preferred IDEs like Android Studio or VS Code. If you get an ✗ for something, check again whether everything is set up properly.

Only move forward in this tutorial if everything works properly.

To create a new Flutter project, cd into your preferred working directory, and run flutter create <projectname>. The CLI will create a directory and place the project files in there. If you use VS Code on macOS with an iOS target, you can use this little snippet to speed up your development process:

# Create a new project
flutter create <projectname>

# move there
cd <projectname>

# open VS code editor
code .

# open iOS Simulator - be patient, it may take a while
open -a Simulator.app

# start running the app
flutter run

And boom, you’re all set! ?

If you don’t wish to use the iOS simulator, you can always spin up your Android Studio emulator, use Genymotion (or any other Android emulation software), or even connect a real device to your machine. This is a slower and more error-prone solution, so I recommend testing on real devices only when necessary.

Once they have booted, you can run flutter doctor again and see if Flutter sees the connected device. You should get an output something like this:

...
[✓] Connected device (1 available)
...

If you got this output – congratulations! ? You’re all set to move on with this tutorial. If, for some reason, Flutter didn’t recognize your device, please go back and check everything again, as otherwise you won’t be able to follow the instructions from here on.

Hello world! ?

If you didn’t run the magic snippet previously, run these commands now:

# Create a new project
flutter create <projectname>

# move there
cd <projectname>

# open VS code editor (optional if you use Studio)
code .

# start running the app
flutter run

This will spin up the Flutter development server with stateful hot reload and a lot more for you. You’ll see that, by default, Flutter creates a project with a floating action button and a counter:

flutter demo homepage

Once you’re finished playing around with the counter, let’s dig into the code!

Flutter project structure

Before we dig right into the code, let’s take a look at the project structure of our Flutter app for a moment:

├── README.md
├── android
│   └── ton of stuff going on here...
├── build
│   └── ton of stuff going on here...
├── ios
│   └── ton of stuff going on here...
├── lib
│   └── main.dart
├── pubspec.lock
├── pubspec.yaml
└── test
    └── widget_test.dart

We have a few platform-specific directories: android and ios. These contain the necessary stuff for building, like the AndroidManifest, build.gradle, or your xcodeproj.

At this moment, we don’t need to modify the contents of these directories, so we’ll ignore them for now. We’ll also ignore the test directory as we won’t cover testing Flutter in this series (but we may look into it later if there’s interest), so that only leaves us with these:

├── lib
│   └── main.dart
├── pubspec.lock
├── pubspec.yaml

And this is where the magic happens. Inside the lib directory, you have the main.dart: that’s where all the code lives right now. We’ll peek into it later, but let’s just have a look at the pubspec.yaml and pubspec.lock.

What are those?

wait a minute, who are you gif

Package management in Flutter – pub.dev

When building a project with JavaScript, we often use third party components, modules, packages, libraries, and frameworks so that we don’t have to reinvent the wheel. The JavaScript ecosystem has npm and yarn to provide you with all those spicy zeroes and ones, and they also handle the dependencies inside your project.

In the Dart ecosystem, this is all handled by pub.dev.

So, just a few quick facts:
npm ? pub.dev
package.json ? pubspec.yaml
package-lock.json ? pubspec.lock

We’ll look into installing packages and importing them into our app in the last episode of this series, in which we’ll create a fun mini-game.

Digging into the Dart code

The only thing left from the file tree is main.dart. main is the heart of our app – it’s like the index.js of most JS-based projects. By default, when creating a project with flutter create, you’ll get very well documented code with a StatelessWidget, a StatefulWidget, and its State.

So instead of observing the demo code line by line together, I encourage you to read the generated code and comments by yourself and come back here later.

In the next part, we’ll look into what are widgets and the build method.

We’ll learn why it’s marked with @override, and what the difference is between stateful and stateless widgets. Then we’ll delete all the code from main.dart and create a Hello world app by ourselves so that you can get the hang of writing declarative UI code in Flutter.

Go ahead, read the generated code and the documentation now! ?

In Flutter, everything is a widget!

As you have been reading the code, you may have noticed a few things. The first thing after importing Flutter is the entry method I have been talking about in the previous episode:

void main() {
 runApp(MyApp());
}

And then, you could see all those classes and OOP stuff come back with the line class MyApp extends StatelessWidget.

First things first: in Flutter, everything is a widget!
Oh, and speaking of widgets: Components ? Widgets!

The StatelessWidget is a class from the Flutter framework, and it’s a type of widget. Another kind of widget is StatefulWidget and we’ll look into the difference between those and how to use them later.

We can create our own reusable widget by extending the StatelessWidget base class and providing our own build method. (By the way, render in ReactJS ? build in Flutter). We can see that build returns a Widget because the return type is defined, and we can see an odd keyword in the previous line: @override.

It’s needed because the StatelessWidget class has a definition for build by default, but we want to replace it (or override it) with our own implementation – hence the keyword @override. Before we dig further into the code, let’s have a peek at using widgets in Flutter:

// using a React component
<button onClick={() => console.log('clicked!')}>Hi, I'm a button</button>
// using a Flutter widget
RawMaterialButton(
 onPressed: () {
   print("hi, i'm pressed");
 },
 child: Text("press me!"),
),

You can see that Flutter has a different approach with declarative UI code.

Instead of wrapping children between ><s and passing props next to the component name (e.g. <button onClick ...), everything is treated as a property. This enables Flutter to create more flexible and well-typed widgets: we’ll always know if a child is supposed to be a standalone widget or if it can accept multiple widgets as a property, for example. This will come in handy later when we build layouts with Rows and Columns.

Now that we know a bit more about widgets in Flutter, let’s take a look at the generated code again:

@override
Widget build(BuildContext context) {
 return MaterialApp(
   title: 'Flutter Demo',
   theme: ThemeData(
     primarySwatch: Colors.blue,
   ),
   home: MyHomePage(title: 'Flutter Demo Home Page'),
 );
}

The build method returns a MaterialApp that has a type of Widget and – unsurprisingly – comes from Flutter. This MaterialApp widget is a skeleton for your Flutter app. It contains all the routes, theme data, metadata, locales, and other app-level black magic you want to have set up. ?

You can see the MyHomePage class being referenced as the home screen. It also has a property, title, set up. MyHomePage is also a widget, and we can confirm that by looking at the definition of this class.

Quick tip: if you are using VSCode as your editor, hold Command and hover or click on the class reference and you’ll be directed to the code of the class.

vscode class reference

We can see that MyHomePage extends a StatefulWidget. However, the structure of the code itself is a bit squiggly and weird. What’s this MyHomePage({Key key, this.title}) : super(key: key); syntax? Why doesn’t this widget have a build method? What’s a State? What is createState?

To answer these questions, we’ll have to look into one of the more hardcore topics in Flutter: state management.

Local state management in Flutter: StatefulWidgets

I previously talked about the two main types of widgets in Flutter: StatelessWidgets and StatefulWidgets. StatelessWidgets are pretty straightforward: a snippet of code that returns a Widget, maybe with some properties being passed around, but that’s all the complexity there is.

However, we don’t want to write applications that just display stuff! We want to add interactivity! And most interactions come with some state, whether it’s the data stored in an input field or some basic counter somewhere in your app. And once the state is updated, we want to re-render the affected widgets in our app – so that the new data is being displayed for the user.

Think of state management in React: it has the very same purpose with the goal of being as efficient as possible. It’s no different in Flutter: we want to have some very simple widgets (or StatelessWidgets), and some widgets with a bit of complexity and interactivity (or StatefulWidgets).

Let’s dive into the code: a StatefulWidget consists of two main components:

  • StatefulWidget (that is called MyHomePage in our case)
  • a typed State object (that is called _MyHomePageState in this example)

We’ll call these “widget” and “state” (respectively) for the sake of simplicity. The widget itself contains all the props and an overridden createState method. As you can see, the prop is marked as final – that’s because you cannot change a prop from within the widget. When the parent modifies a prop of a widget, Flutter throws the current widget instance away and creates a brand new StatefulWidget.

Note that changing either the prop or the state will trigger a rebuild in Flutter – the key difference between the two is that changing the state can be initiated from within the widget while changing a prop is initiated by the parent widget.

Props help you pass data from parent to children. State helps you handle data change inside the children.

Now, let’s look into changing the state: inside the widget, we have a createState method that only returns an instance of the state, _MyHomePageState(). This method runs when the widget is first inserted into the widget tree. Later, when you modify the state with the setState method, Flutter marks the widget as dirty and runs its build method again, so the updated state gets rendered on the screen.

(Sidenote: the widget tree is only a blueprint of your app, the element tree is the one that gets rendered for the user. It’s a bit more advanced, under-the-hood topic, so it won’t be covered in this series – however, I’ll link some video resources later on that will help you understand how Flutter works and what’s the deal with the widget tree and the element tree.)

The _MyHomePageState class extends State typed with MyHomePage – that is, State<MyHomePage>.

This is needed so that you can access the properties set in the MyHomePage instance with the widget keyword – for example, to access the title prop, write widget.title. Inside the state, you have an overridden build method, just like you’d see in a typical StatelessWidget. This method returns a widget that renders some nice data, both from props (widget.title) and from the state (_counter).

Notice that you don’t need to type in anything before the _counter. No this.state._counter, no State.of(context)._counter, just a plain old _counter. That’s because from the perspective of the code, this variable is declared just like any other would be:

int _counter = 0;

However, when modifying this variable, we need to wrap our code in setState, like this:

setState(() {
 _counter++;
});

This will tell Flutter that “Hey! It’s time to re-render me!”.

The framework re-runs the build method of your state with the fresh data; the widget gets rebuilt and re-rendered; and boom! ? The new data is now on-screen.
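To put the pieces together, here’s a minimal, hand-written counter sketch – the names are illustrative, and it’s a simplified version of what the generated demo code does:

class Counter extends StatefulWidget {
 final String title;
 Counter({Key key, this.title}) : super(key: key);

 @override
 _CounterState createState() => _CounterState();
}

class _CounterState extends State<Counter> {
 int _counter = 0; // local state

 @override
 Widget build(BuildContext context) {
   return Column(
     children: <Widget>[
       Text(widget.title), // a prop, accessed through the widget keyword
       Text("$_counter"), // state, accessed directly
       RaisedButton(
         child: Text("increment"),
         onPressed: () {
           // wrapping the mutation in setState triggers a rebuild
           setState(() {
             _counter++;
           });
         },
       ),
     ],
   );
 }
}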

It may seem a bit complicated or seem like you have to write a lot of boilerplate code to get this running. But don’t worry! With VS Code, you can refactor any StatelessWidget into a stateful one with just one click:

vscode convert flutter stateless widget to stateful widget

And that’s it for managing your widget’s state! It may be a lot at first, but you’ll get used to it after building a few widgets.

A few notes about global state management in Flutter

Right now, we only looked at working with local state in Flutter – handling app-level, or global state is a bit more complex. There are, just like in JS, tons of solutions, ranging from the built-in InheritedWidget to a number of third-party state management libraries. Some of those may already be familiar, for example, there is RxDart and Redux, just to name a few. To learn more about the most popular solutions, and which one to choose for your project, I suggest you watch this awesome video about global state management in Flutter by Fireship.

Widgets, widgets, and widgets

I already talked about how everything is a widget in Flutter – however, I didn’t really introduce you to some of the most useful and popular widgets in Flutter, so let’s have a look at them before we move on!

Flutter has widgets for displaying texts, buttons, native controls like switches and sliders (cupertino for iOS and material for Android style widgets), layout widgets like Stack, Row, Column, and more. There are literally hundreds of widgets that are available for you out of the box, and the list keeps growing.

The whole widget library can be found here in the Widget Catalog, and the Flutter team is also working on a very nice video series with new episodes being released weekly. This series is called Flutter Widget of the Week, and each episode introduces you to a Flutter widget, its use cases, shows you code examples, and more, in just about one minute! It’s really binge-worthy if you want to get to know some useful Flutter widgets, tips, and tricks.

Here’s a link to the whole series playlist, and here is the intro episode.

Some useful widgets in Flutter

As you’ll work with Flutter, you’ll explore more and more widgets, but there are some basic Flutter widgets you’ll absolutely need to build your first application. (We’ll probably use most of them in the next and last episode of this series, so stay tuned!)

First and foremost: Text.

The Text widget delivers what its name promises: you can display strings with it. You can also style or format your text and even make multiline texts. (There are a lot of text-related widgets available, covering your needs from displaying rich text to creating selectable texts.)

An example Text widget in Flutter:

Text('hello world!'),
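And if you want to style or format it, here’s a quick sketch (the values are arbitrary):

Text(
 'hello world!\nthis is the second line',
 textAlign: TextAlign.center,
 maxLines: 2,
 style: TextStyle(fontSize: 24.0, fontWeight: FontWeight.bold),
),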

Adding buttons to your Flutter app is also as easy as one two three. There are numerous button-related widgets available for you, ranging from RawMaterialButton to FlatButton, IconButton, and RaisedButton, and there are also specific widgets for creating FloatingActionButtons and OutlineButtons. I randomly picked the RaisedButton for us so that we can have a peek at how easy it is to add a nice, stylish button to our app:

RaisedButton(
 onPressed: () {
   print(
     "hi! it's me, the button, speaking via the console. over.",
   );
 },
 child: Text("press meeeeeee"),
),

Building layouts in Flutter

When building flexible and complex layouts on the web and in React-Native, the most important tool you used was flexbox. While Flutter isn’t a web-based UI library and hence lacks flexbox, the main concept of using flexible containers with directions and whatnot is implemented and preferred in Flutter. It can be achieved by using Rows and Columns, and you can stack widgets on each other by using Stacks.

Consider the following cheatsheet I made:

layouts-in-flutter-guide
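Just to give you a quick taste of the Stack mentioned above, here’s a minimal sketch of layering two widgets on top of each other (the size, color, and texts are arbitrary):

Stack(
 alignment: Alignment.center,
 children: <Widget>[
   // the bottom layer
   Container(width: 200, height: 200, color: Colors.green),
   // rendered on top of the container
   Text("I'm on top of the green box"),
 ],
),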

Remember how I previously praised typing the props of a widget and how it’s one of the best tools in Flutter’s declarative UI pattern? The Row, Column, and Stack widgets all have a children property that wants a list of widgets ([Widget]). Lucky for you, VS Code automatically completes the code for you once you start working with these widgets:

vscode layout autocomplete

Just hit tab to let Code complete the code for you! Maybe in the future, you won’t need to write code at all – Flutter will just suck the app idea out of your brain and compile that – but until then, get used to hitting tab.

Let’s look at an example where we display some names underneath each other:

Column(
 children: <Widget>[
   Text("Mark"),
   Text("Imola"),
   Text("Martin"),
   Text("Zoe"),
 ],
),

You can see that you create a typed list with the <Widget>[] syntax, you pass it as a prop for the Column, create some amazing widgets inside the list, and boom! The children will be displayed underneath each other. Don’t believe me? Believe this amazing screenshot. ?

Simulator screenshot of the result from the code snippet

Alignment

The real power of Columns and Rows isn’t just placing stuff next to each other, just like flexbox isn’t only about flex-direction either. In Flutter, you can align the children of a Column and Row on two axes, mainAxis and crossAxis.

These two properties are contextual: while in a Row the main axis is horizontal and the cross axis is vertical, they are switched in a Column. To help you better understand this axis concept, I created a handy cheat sheet with code examples and more.

layout alignments in flutter

So, for example, if you want to perfectly center something, you’d want to use either the Center widget; or a Row or Column with both mainAxisAlignment and crossAxisAlignment set to .center; or a Row nested inside a Column (or vice versa) with their mainAxisAlignments set to .center. The possibilities are basically endless with these widgets! ✨
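As a tiny sketch of the first two options (the texts are placeholders):

// option one: the Center widget
Center(
 child: Text("I'm centered"),
),

// option two: a Column centered on both axes
Column(
 mainAxisAlignment: MainAxisAlignment.center,
 crossAxisAlignment: CrossAxisAlignment.center,
 children: <Widget>[
   Text("I'm centered too"),
 ],
),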

Rendering lists (FlatLists ? ListViews)

Whilst thinking about possible use cases for columns, you may have wondered about creating scrollable, dynamic, reorderable, or endless lists.

While these features could be achieved by using Columns, it would take a lot of effort to do so, not to mention updating your list data or lazily rendering widgets when there’s a crapton of data. Lucky for you, Flutter has a class for rendering lists of data, and it’s called a ListView!

There are several ways to use a ListView, but the most important ones are the ListView(...) widget and the ListView.builder method. Both of them achieve the very same functionality from the perspective of the user, but programmatically, they differ big time.

First, let’s look into the ListView(...) widget. Syntactically, it is very similar to a Column, except that it lacks the main and cross-axis alignment properties. To continue with our previous Column example where we placed names under each other, here’s the very same column converted into a ListView:

ListView(
 children: <Widget>[
   Text("Mark"),
   Text("Imola"),
   Text("Martin"),
   Text("Zoe"),
 ],
),

Tada! ? Your first ListView in Flutter! When refreshing or rebuilding the app (by pressing either a lowercase r for hot reload or a capital R for hot restart in the Flutter CLI), you’ll see the very same thing you saw previously.

However, if you try to drag it, you are now able to scroll inside the container! Note that when a Column has bigger children than its bounds, it will overflow, but a ListView will be scrollable.

ListView builder

While the ListView widget is cool and good, it may not be suitable for every use case. For example, when displaying a list of tasks in a todo app, you won’t exactly know the number of items in your list while writing the code, and it may even change over time. Sure, you are able to run .map on the data source, return widgets as results, and then spread it with the ... operator, but that obviously wouldn’t be performant, nor is it a good practice for long lists. Instead, Flutter provides us with a really nice ListView builder.

Sidenote: while working with Flutter, you’ll see the word “builder” a lot. For example, in places like FutureBuilder, StreamBuilder, AnimatedBuilder, the build method, the ListView builder, and more. It’s just a fancy word for methods that return a Widget or [Widget], don’t let this word intimidate or confuse you!

So how do we work with this awesome method? First, you should have an array or list that the builder can iterate over. I’ll quickly define an array with some names in it:

final List<String> source = ["Sarah", "Mac", "Jane", "Daniel"];

And then, somewhere in your widget tree, you should be able to call the ListView.builder method, provide some properties, and you’ll be good to go:

ListView.builder(
 itemCount: source.length,
 itemBuilder: (BuildContext context, int i) => Text(source[i]),
),

Oh, and notice how I was able to use an arrow function, just like in JavaScript!

The itemCount parameter is not required, but it’s recommended. Flutter will be able to optimize your app better if you provide this parameter. You can also limit the maximum number of rendered items by providing a number smaller than the length of your data source.

When in doubt, you can always have a peek at the documentation of a class, method, or widget by hovering over its name in your editor:

VS Code documentation peek

And that sums up the layout and list-related part of this episode. Next, we’ll look into providing “stylesheets” (or theme data) for your app, look at some basic routing (or navigation) methods, and fetch some data from the interwebs with HTTP requests.

Theming in Flutter

While building larger applications with custom UI components, you may want to create stylesheets. In Flutter, they are called Themes, and they can be used in a lot of places. For example, you can set a default app color, and then the selected texts, buttons, ripple animations, and more will follow this color. You can also set up text styles (like headings and more), and you’ll be able to access these styles across the app.

To do so, you should provide a theme property for your MaterialApp at the root level of the application. Here’s an example:

return MaterialApp(
     title: 'RisingStack Flutter Demo',
     theme: ThemeData(
       // Define the default brightness and colors.
       brightness: Brightness.light,
       primaryColor: Colors.green[300],
       accentColor: Colors.green,
 
       // Define button theme
       buttonTheme: ButtonThemeData(
         buttonColor: Colors.green,
         shape: CircleBorder(),
       ),
 
         // Define the default font family
        // (this won’t work as we won’t have this font asset yet)
       fontFamily: 'Montserrat',
 
       // Define the default TextTheme. Use this to specify the default
       // text styling for headlines, titles, bodies of text, and more.
       textTheme: TextTheme(
         headline1: TextStyle(fontSize: 72.0, fontWeight: FontWeight.bold),
         headline6: TextStyle(fontSize: 36.0, fontStyle: FontStyle.italic),
         bodyText2: TextStyle(fontSize: 14.0, fontFamily: 'Muli'),
       ),
     ),
     home: Scaffold(...),
   );

These colors will be used throughout our app, and accessing the text themes is also simple as a pickle! I added a RaisedButton on top of the app so that we can see the new ButtonThemeData being applied to it:

themed button in flutter

It’s ugly and all, but it’s ours! ? Applying the text style won’t be automatic, though. As we previously discussed, Flutter can’t really read your mind, so you explicitly need to tag Text widgets as a headline1 or bodyText2, for example.

To do so, you’ll use the Theme.of(context) method. This will look up the widget tree for the nearest Theme providing widget (and note that you can create custom or local themes for subparts of your app with the Theme widget!) and return that theme. Let’s look at an example:

Text(
 "cool names",
 style: Theme.of(context).textTheme.headline6,
),

You can see that we are accessing the theme with the Theme.of(context) method, and then we are just accessing properties like it’s an object. This is all you need to know about theming a Flutter app as it really isn’t a complex topic!

Designing mobile navigation experiences

On the web, when managing different screens of the app, we used paths (e.g. fancysite.com/registration) and routing (e.g., react-router) to handle navigating back and forth the app. In a mobile app, it works a bit differently, so I’ll first introduce you to navigation on mobile, and then we’ll look into implementing it in Flutter.

Mobile navigation differs from the web in a lot of ways. Gestures and animations play a very heavy role in structuring out the hierarchy of the app for your user. For example, when a user navigates to a new screen, and it slides in from the right side of the screen, the user will expect to be able to move back with a slide from the left. Users also don’t expect flashy loadings and empty screens when navigating – and even though there are advancements on the web in this segment (e.g. PWAs), it’s by far not the default experience when using websites.

There are also different hierarchies when designing mobile apps. The three main groups are:

  • Hierarchical Navigation (e.g. the Settings app on iOS)
    • New screens slide in from left to right. The expected behavior for navigating back is with a back button on the upper left corner and by swiping from the left edge of the screen to the right.
  • Flat Navigation (e.g. the Apple Music app)
    • The default behavior for this hierarchy is a tab bar on the bottom.
    • Tabs should always preserve location (e.g. if you navigate to a subscreen inside tab one, switch to tab two and switch back to tab one, you’d expect to be on the subscreen, not on the root-level screen.)
    • Swiping between tabs is optional. It isn’t the default behavior and it may conflict with other gestures on the screen itself – be cautious and think twice before implementing swipeable tab bars.
  • Custom, content-driven, or experimental navigation (Games, books and other content)
    • When making experimental navigation, always try to be sane with the navigation. The user should always be able to navigate back and undo stuff.

I created a handy cheat sheet for you that will remind you of the most important things when in doubt:

mobile navigation ux guide for flutter developers

Also, all of these can be mixed together, and other screens like modals can be added to the stack. Always try to KISS and make sure that the user can always navigate back and undo things. Don’t try to reinvent the wheel with navigation (e.g., reverse the direction of opening up a new screen) as it will just confuse the user.

Also, always indicate where the user is in the hierarchy (e.g., with labeling buttons, app title bar, coloring the bottom bar icons, showing little dots, etc.). If you want to know more about designing mobile navigation experiences and implementing them in a way that feels natural to the user, check out Apple’s Human Interface Guideline’s related articles.

keep it simple gif

When routing on the web with React or React-Native, you had to depend on third-party libraries to get the dirty work done for you (e.g. react-router). Luckily, Flutter has native navigation capabilities out of the box that cover the needs of most apps, and they are provided to you via the Navigator API.

The applications of this API and the possibilities to play around with navigation are endless. You can, for example, animate a widget between screens; build a bottom navigation bar or a hamburger menu; pass arguments; or send data back and forth. You can explore every navigation-related Flutter cookbook here. In this series, we’ll only look into initializing two screens, navigating between them, and sharing some widgets between them.

To get started with navigation, let’s create two widgets that we’ll use as screens and pass the first into a MaterialApp as the home property:

import 'package:flutter/material.dart';
 
void main() {
 runApp(MyApp());
}
 
class MyApp extends StatelessWidget {
 @override
 Widget build(BuildContext context) {
   return MaterialApp(
     title: 'Flutter Demo',
     home: ScreenOne(),
   );
 }
}
 
class ScreenOne extends StatelessWidget {
 @override
 Widget build(BuildContext context) {
   return Scaffold(
     body: Center(
       child: Text("hey! ?"),
     ),
   );
 }
}
 
class ScreenTwo extends StatelessWidget {
 @override
 Widget build(BuildContext context) {
   return Scaffold(
     body: Center(
       child: Text("hi! ??"),
     ),
   );
 }
}

This was easy as a breeze. If you run this app in a simulator, you’ll see “hey! ?” on the center of the screen. Now, inside the MaterialApp, we can define our routes:

return MaterialApp(
 title: 'Flutter Demo',
 home: ScreenOne(),
 routes: <String, WidgetBuilder>{
   '/hey': (BuildContext context) => ScreenOne(),
   '/hi': (BuildContext context) => ScreenTwo(),
 },
);

Then, we’ll need something that will trigger the navigation. I’ll add a RaisedButton to the ScreenOne:

return Scaffold(
 body: Column(
   mainAxisAlignment: MainAxisAlignment.center,
   crossAxisAlignment: CrossAxisAlignment.center,
   children: <Widget>[
     Text("hey! ?"),
     RaisedButton(
       child: Text("take me there"),
       onPressed: () {
         print("hi!");
       },
     ),
   ],
 ),
);

And now, we can navigate the user to the next screen when the button is pressed. Notice that I replaced the Center with a Column with both its main and cross axes centered. This was required because I wanted to have two children underneath each other: a Text and a RaisedButton. Inside the RaisedButton, we only have to push the route to the stack and let Flutter handle the routing and animation:

Navigator.pushNamed(context, '/hi');

By default, we can navigate back to the previous screen by swiping from the left edge of the screen. This is the expected behavior, and we don’t intend to change it, so we’ll leave it as it is. If you want to add a button on the second screen to navigate back to the first screen, you can use the Navigator.pop() method.

Don’t ever push to the same screen the user is on, nor the previous screen. Always use pop when navigating backward.
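If you do add such a back button to ScreenTwo, a minimal sketch could look like this (the button label is arbitrary):

RaisedButton(
 child: Text("take me back"),
 onPressed: () {
   // pops the current route off the stack, revealing ScreenOne again
   Navigator.pop(context);
 },
),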

This will be just enough to cover your basic navigation needs. Don’t forget, if you want to check out more advanced navigation features such as animating widgets between screens or passing data back and forth, check out the related Flutter cookbooks.

Networking, HTTP requests

Now that you can build widgets, layouts, display lists, and you can navigate between screens with Flutter, there’s only one thing left: communicating with your backend API. One of the most popular BaaS providers for mobile and Flutter is Firebase by Google. It allows you to use real-time databases, push notifications, crash reporting, app analytics, and a lot more out of the box. You can find the Flutter Firebase packages on pub.dev or you can follow this step-by-step tutorial.

If you are a more experienced developer and you have a complex project with a custom backend in mind, or if you are just genuinely looking forward to using your own selection of backend APIs – Firebase just won’t suit your needs.

That’s where the http package comes in handy.

Just add it into your dependency list inside the pubspec.yaml, wait until flutter pub get finishes (VSCode automatically runs it for you if it detects changes in the pubspec.yaml), and then continue reading:

dependencies:
  flutter:
    sdk: flutter
  http: any

http is a Future-based library for making HTTP requests. To get started with it, just import it:

import 'package:http/http.dart' as http;

And then, you can start making requests with top-level methods like http.post or http.get. To help you experiment with making HTTP requests in Flutter, I have made a demo API that you can GET on. It will return some names and ages. You can access it here (https://demo-flutter-api.herokuapp.com/people).

Parsing JSON data in Flutter and Dart

After making your GET request on the API, you’ll be able to get data out of it by accessing properties like this:

void request() async {
 final response =
     await http.get("https://demo-flutter-api.herokuapp.com/people");
 print(response.body); // => [{"name":"Leo","age":17},{"name":"Isabella","age":30},{"name":"Michael","age":23},{"name":"Sarah","age":12}]
 print(json.decode(response.body)[0]["name"]); // => Leo
}

However, this solution should not be used in production. Not only does it lack automatic code completion and developer tooling, but it’s also very error-prone and not really well documented. It’s just straight-up crap coding. ?

Instead, you should always create a Dart class with the desired data structure for your response object and then process the raw body into a native Dart object. Since we are receiving an array of objects, in Dart, we’ll create a typed List with a custom class. I’ll name the class Person, and it will have two properties: a name (with a type of String) and age (int). I’ll also want to define a .fromJson constructor on it so that we can set up our class to be able to construct itself from a raw JSON string.

First, you’ll want to import dart:convert to access native JSON-related methods like a JSON encoder and decoder:

import 'dart:convert';

Create our very basic class:

class Person {
 String name;
 int age;
}

Extend it with a simple constructor:

Person({this.name, this.age});

And add in the .fromJson method, tagged with the factory keyword. This keyword informs the compiler that this isn’t a method on the class instance itself. Instead, it will return a new instance of our class:

factory Person.fromJson(String str) => Person.fromMap(json.decode(str));
factory Person.fromMap(Map<String, dynamic> json) => new Person(
     name: json["name"],
     age: json["age"],
   );

Notice that I created two separate methods: a fromMap and a fromJson. The fromMap method itself does the dirty work by deconstructing the received Map. The fromJson just parses our JSON string and passes it into the fromMap factory method.

Now, we should just map over our raw response, use the .fromMap factory method, and expect everything to go just fine:

List<Person> listOfPeople = json
   .decode(response.body)
   .map<Person>((i) => Person.fromMap(i))
   .toList();
 
print(listOfPeople[0].name); // => Leo

Sidenote: I didn’t use the .fromJson method because we already parsed the body before mapping over it, hence it’s unneeded right now.

There is a lot to unwrap in these few lines! First, we define a typed list and decode the response.body. Then, we map over it, and we throw in the return type <Person> to the map so that Dart will know that we expect to see a Person as a result of the map function. Then, we convert it to a List, as otherwise it would be a MappedListIterable.

Rendering the parsed JSON: FutureBuilder and ListView.builder

Now that we have our app up and running with our basic backend, it’s time to render our data. We already discussed the ListView.builder API, so we’ll just work with that.

But before we get into rendering the list itself, we want to handle some state changes: the response may be undefined at the moment of rendering (because it is still loading), and we may get an error as a response. There are several great approaches to handling these states, but we’ll use FutureBuilder now for the sake of practicing new Flutter widgets.

FutureBuilder is a Flutter widget that takes a Future and a builder as a property. This builder will return the widget we want to render on the different states as the Future progresses.

Note that FutureBuilder handles state changes inside the widget itself, so you can still use it in a StatelessWidget! Since the http package is Future-based, we can just use the http.get method as the Future for our FutureBuilder:

@override
Widget build(BuildContext context) {
 return Scaffold(
   body: FutureBuilder(
     future: http.get("https://demo-flutter-api.herokuapp.com/people"),
   ),
 );
}

And we should also pass a builder. This builder should be able to respond to three states: loading, done, and error. At first, I’ll just throw in a centered CircularProgressIndicator() to see that our app renders something:

return Scaffold(
 body: FutureBuilder(
   future: http.get("https://demo-flutter-api.herokuapp.com/people"),
   builder: (BuildContext context, AsyncSnapshot<http.Response> response) {
     return Center(
       child: CircularProgressIndicator(),
     );
   },
 ),
);

If you run this app, you’ll see a progress indicator in the center of the screen running indefinitely. We can check the state of the response via properties like response.hasData:

builder: (BuildContext context, AsyncSnapshot<http.Response> response) {
 if (response.hasData) {
   // loaded!
 } else if (response.hasError) {
   // error!
   return Center(
     child: Text("error!"),
   );
 } else {
   // loading...
   return Center(
     child: CircularProgressIndicator(),
   );
 }
},

And now, we can be sure that nothing comes between us and processing, then rendering the data, so inside the response.hasData block, we’ll process the raw response with previously discussed parsing and mapping method, then return a ListView.builder:

// loaded!
List<Person> listOfPeople = json
   .decode(response.data.body)
   .map<Person>((i) => Person.fromMap(i))
   .toList();
 
return ListView.builder(
 itemCount: listOfPeople.length,
 itemBuilder: (BuildContext context, int i) => Text(
   "${listOfPeople[i].name} (${listOfPeople[i].age})",
 ),
);

And that’s it! ? If you run this snippet right now, it will render four names with their corresponding ages next to them. Isn’t this amazing? It may have seemed like a lot of work for a simple list like this, but don’t forget that we created a full-blown class, parsed JSON and converted it into class instances, and we even handled loading and error states.

Summing it all up

Congratulations on making it this far into the course! You have learned a lot and come a long way since we started in the previous episode.

You went from zero to hero both with Dart (types, control flow statements, data structures, OOP, and asynchrony) and Flutter (CLI, widgets, alignment, lists, themes, navigation and networking).

This really has been a lot of work, and you’ll still have to learn a lot until you get fluent in Flutter, but in the end, the only thing that will matter is the result of your hard work. And that’s what we’re going to harvest in the next and last episode of this Flutter series: we’ll build a fun mini-game with Dart and Flutter! ?

I’m really looking forward to seeing you here next week. Until then, stay tuned, and happy Fluttering! ✌️

All the best, ?
Daniel from RisingStack