News & Events

History of JavaScript on a Timeline

In the mid-1990s, Brendan Eich was working on a project at Netscape Communications Corporation. He needed a scripting language for web pages that would be easy to use, so he created one himself. He called it JavaScript. And the rest, as they say, is history.

In this blog post, we’ll take a look at the history of JavaScript on a timeline. We’ll see how it has evolved over the years and what new features have been added along the way. So sit back and enjoy learning about one of the most popular programming languages in the world!

1994-1998: The Netscape era

  • On December 15, 1994, Netscape Communications Corporation released the Netscape Navigator 1.0 web browser.
  • Brendan Eich created the very first version of JavaScript, codenamed “Mocha”, then later (still internally) renamed to LiveScript
  • “Netscape and Sun announce JavaScript, the open, cross-platform object scripting language for enterprise networks and the internet”
  • Microsoft introduced JScript in Internet Explorer to compete with Netscape.
  • Netscape 2 was released with JavaScript 1.0
  • Netscape submitted JavaScript to Ecma International, as the starting point for a standard specification.
  • Official release of the first ECMAScript language specification.

1999-2007: The showdown of Internet Explorer VS Mozilla Firefox

  • Microsoft releases Internet Explorer 5, which uses even more proprietary technology than before.
  • ECMAScript 2: Editorial changes to align ECMA-262 with the standard ISO/IEC 16262
  • ECMAScript 3: do-while, regular expressions, new string methods (concat, match, replace, slice, split with a regular expression, etc.), exception handling, and more
  • Firefox is released to compete with Internet Explorer.
  • Jesse James Garrett released a white paper in which he coined the term Ajax.

2008-2012: Netscape died, and Google Chrome was created

  • Netscape Navigator: end of life
  • ECMAScript 4 is officially abandoned.
  • Google releases the Chrome browser, the fastest web browser at the time.
  • Node.js was created by Ryan Dahl
  • ECMAScript 5 (formerly ECMAScript 3.1), which adds strict mode, getters and setters, new array methods, support for JSON, and more.
  • TypeScript: a language for application-scale JavaScript development

2013-2014: from ASM.js to WebAssembly

  • ASM.js has been released
  • React, a JavaScript library for building user interfaces
  • “Disable Javascript” option removed in Firefox 23
  • Facebook Launches Flow, Static Type Checker for JavaScript

2015-2020: the rise of Node.js

  • Introduction of the Node.js Foundation
  • ECMAScript 6 (ES2015) is released.
  • WebAssembly
  • Object.observe withdrawn from TC39
  • Microsoft Edge’s JavaScript engine to go open-source
  • ECMAScript 2016 Language Specification
  • ECMAScript 2017 Language Specification
  • ECMA TC39: “SmooshGate” was officially resolved by renaming flatten to flat
  • ECMAScript 2018 Language Specification
  • JavaScript is now required to sign in to Google
  • ECMAScript modules in Node.js
  • ECMAScript 2019 Language Specification
  • QuickJS JavaScript Engine

2020-2022: Deno is created and Internet Explorer is officially retired

  • Deno: initial release
  • ECMAScript 2020 Language Specification
  • ECMAScript 2021 Language Specification
  • Deno joins TC39
  • Internet Explorer 11 is retired and officially out of support

RedwoodJS vs. BlitzJS: The Future of Fullstack JavaScript Meta-Frameworks

Redwood and Blitz are two up-and-coming full-stack meta-frameworks that provide tooling for creating SPAs, server-side rendered pages, and statically generated content, along with a CLI to generate end-to-end scaffolds. I’ve been waiting for a worthy Rails replacement in JavaScript since who-knows-when. This article is an overview of the two, and while I’ve given more breadth to Redwood (as it differs from Rails a great deal), I personally prefer Blitz.

As the post ended up being quite lengthy, we provide a comparison table below for those in a hurry.

A bit of history first

If you started working as a web developer in the 2010s, you might not have even heard of Ruby on Rails, even though it gave us apps like Twitter, GitHub, Urban Dictionary, Airbnb, and Shopify. Compared to the web frameworks of its time, it was a breeze to work with. Rails broke the mold of web technologies by being a highly opinionated MVC tool, emphasizing well-known patterns such as convention over configuration and DRY, with the addition of a powerful CLI that created end-to-end scaffolds from the model to the template to be rendered. Many other frameworks have built on its ideas, such as Django for Python, Laravel for PHP, or Sails for Node.js. Thus, arguably, it is a piece of technology just as influential as the LAMP stack that came before it.

However, the fame of Ruby on Rails has faded quite a bit since its creation in 2004. By the time I started working with Node.js in 2012, the glory days of Rails were over. Twitter — built on Rails — was infamous for frequently showcasing its fail whale between 2007 and 2009. Much of it was attributed to the lack of Rails’ scalability, at least according to word of mouth in my filter bubble. This Rails bashing was further reinforced when Twitter switched to Scala, even though they did not completely ditch Ruby then.

The scalability issues of Rails (and Django, for that matter) getting louder press coverage coincided with the transformation of the Web itself. More and more JavaScript ran in the browser. Webpages became highly interactive WebApps, then SPAs. Angular.js revolutionized that space when it came out in 2010. Instead of the server rendering the whole webpage by combining the template and the data, we wanted to consume APIs and handle state changes with client-side DOM updates.

Thus, full-stack frameworks fell out of favor. Development got separated between writing back-end APIs and front-end apps. And these apps could have meant Android and iOS apps too by that time, so it all made sense to ditch the server-side rendered HTML strings and send over the data in a way that all our clients could work with.

UX patterns developed as well. It wasn’t enough anymore to validate the data on the back-end, as users need quick feedback while they’re filling out bigger and bigger forms. Thus, our life got more and more complicated: we needed to duplicate the input validations and type definitions, even if we wrote JavaScript on both sides. The latter got simpler with the more widespread (re-)adoption of monorepos, as it got somewhat easier to share code across the whole system, even if it was built as a collection of microservices. But monorepos brought their own complications, not to mention distributed systems.

And ever since 2012, I have had a feeling that whatever problem we solve generates 20 new ones. You could argue that this is called “progress”, but maybe merely out of romanticism, or longing for times past when things used to be simpler, I’ve been waiting for a “Node.js on Rails” for a while now. Meteor seemed like it could be the one, but it quickly fell out of favor, as the community mostly viewed it as something that is good for MVPs but does not scale… The Rails problem all over again, but breaking down at an earlier stage of the product lifecycle. I must admit, I never even got around to trying it.

However, it seemed like we were getting there slowly but steadily. Angular 2+ embraced code generators à la Rails, and so did Next.js, so it seemed like it could be something similar. Next.js got API Routes, making it possible to handle the front-end with SSR and write back-end APIs too. But it still lacks a powerful CLI generator and has nothing to do with the data layer either. And in general, a good ORM was still missing from the equation to reach the power level of Rails. At least this last point seems to be solved now that Prisma is around.

Wait a minute. We have code generators, mature back-end and front-end frameworks, and finally, a good ORM. Maybe we have all pieces of the puzzle in place? Maybe. But first, let’s venture a bit further from JavaScript and see if another ecosystem has managed to further the legacy of Rails, and whether we can learn from it.

Enter Elixir and Phoenix

Elixir is a language built on Erlang’s BEAM and OTP, providing a nice concurrency model based on the actor model and processes, which also results in easy error handling thanks to the “let it crash” philosophy, in contrast to defensive programming. It also has a nice, Ruby-inspired syntax, yet it remains an elegant, functional language.

Phoenix is built on top of Elixir’s capabilities, first as a simple reimplementation of Rails, with a powerful code generator, a data-mapping toolkit (think ORM), good conventions, and a generally good dev experience, with the built-in scalability of the OTP.

Yeah.. So far, I wouldn’t have even raised an eyebrow. Rails got more scalable over time, and I can get most of the things I need from a framework writing JavaScript these days, even if wiring it all up is still pretty much DIY. Anyhow, if I need an interactive browser app, I’ll need to use something like React (or at least Alpine.js) to do it anyway.

Boy, you can’t even start to imagine how wrong the previous statement is. While Phoenix is a full-fledged Rails reimplementation in Elixir, it has a cherry on top: your pages can be entirely server-side rendered and interactive at the same time, using its superpower called LiveView. When you request a LiveView page, the initial state gets prerendered on the server side, and then a WebSocket connection is built. The state is stored in memory on the server, and the client sends over events. The backend updates the state, calculates the diff, and sends over a highly compressed changeset to the UI, where a client-side JS library updates the DOM accordingly.

I heavily oversimplified what Phoenix is capable of, but this section is already getting too long, so make sure to check it out yourself!

We’ve taken a detour to look at one of the best, if not the best full-stack frameworks out there. So when it comes to full-stack JavaScript frameworks, it only makes sense to achieve at least what Phoenix has achieved. Thus, what I would want to see:

  1. A CLI that can generate data models or schemas, along with their controllers/services and their corresponding pages
  2. A powerful ORM like Prisma
  3. Server-side rendered but interactive pages, made simple
  4. Cross-platform usability: make it easy for me to create pages for the browser, but I want to be able to create an API endpoint responding with JSON by just adding a single line of code.
  5. Bundle this whole thing together

With that said, let’s see whether Redwood or Blitz is the framework we have been waiting for.

BlitzJS vs. RedwoodJS comparison

What is RedwoodJS?

Redwood markets itself as THE full-stack framework for startups. It is THE framework everyone has been waiting for, if not the best thing since the invention of sliced bread. End of story, this blog post is over.

At least according to their tutorial.

Reading the docs, I sensed a sort of boastful overconfidence that I personally found hard to get past. The fact that they take a lighter tone compared to the usual dry, technical texts is a welcome change. Still, as a text moves away from the safe, objective description of things, it also wanders into the territory of matching or clashing with the reader’s taste.

In my case, I admire the choice but could not enjoy the result.

Still, the tutorial is worth reading through. It is very thorough and helpful. The result is also worth the… well, whatever you feel while reading it, as Redwood is also nice to work with. Its code generator does what I would expect it to do. Actually, it does even more than I expected, as it is very handy not just for setting up the app skeleton, models, pages, and other scaffolds. It even sets your app up to be deployed to different deployment targets like AWS Lambda, Render, Netlify, and Vercel.

Speaking of the listed deployment targets, I have a feeling that Redwood pushes me a bit strongly towards serverless solutions, Render being the only one in the list where you have a constantly running service. And I like that idea too: if I have an opinionated framework, it sure can have its own opinions about how and where it wants to be deployed. As long as I’m free to disagree, of course.

But Redwood has STRONG opinions not just about the deployment, but overall on how web apps should be developed, and if you don’t agree with those, well…

I want you to use GraphQL

Let’s take a look at a freshly generated Redwood app. Redwood has its own starter kit, so we don’t need to install anything, and we can get straight to creating a skeleton.

$ yarn create redwood-app --ts ./my-redwood-app

You can omit the --ts flag if you want to use plain JavaScript instead.

Of course, you can immediately start up the development server and see that you got a nice UI already with yarn redwood dev. One thing to notice, which is quite commendable in my opinion, is that you don’t need to globally install a redwood CLI. Instead, it always remains project local, making collaboration easier.

Now, let’s see the directory structure.

my-redwood-app
├── api/
├── scripts/
├── web/
├── graphql.config.js
├── jest.config.js
├── node_modules
├── package.json
├── prettier.config.js
├── README.md
├── redwood.toml
├── test.js
└── yarn.lock

We can see the regular prettier.config.js, jest.config.js, and there’s also a redwood.toml for configuring the port of the dev-server. We have an api and web directory for separating the front-end and the back-end into their own paths using yarn workspaces.

But wait, we have a graphql.config.js too! That’s right, with Redwood, you’ll write a GraphQL API. Under the hood, Redwood uses Apollo on the front-end and Yoga on the back-end, but most of it is made pretty easy using the CLI. However, GraphQL has its downsides, and if you’re not OK with the tradeoff, well, you’re shit out of luck with Redwood.

Let’s dive a bit deeper into the API.

my-redwood-app
├── api
│   ├── db
│   │   └── schema.prisma
│   ├── jest.config.js
│   ├── package.json
│   ├── server.config.js
│   ├── src
│   │   ├── directives
│   │   │   ├── requireAuth
│   │   │   │   ├── requireAuth.test.ts
│   │   │   │   └── requireAuth.ts
│   │   │   └── skipAuth
│   │   │       ├── skipAuth.test.ts
│   │   │       └── skipAuth.ts
│   │   ├── functions
│   │   │   └── graphql.ts
│   │   ├── graphql
│   │   ├── lib
│   │   │   ├── auth.ts
│   │   │   ├── db.ts
│   │   │   └── logger.ts
│   │   └── services
│   ├── tsconfig.json
│   └── types
│       └── graphql.d.ts
...

Here, we can see some more backend-related config files, and the debut of tsconfig.json.

  • api/db/: Here resides our schema.prisma, which tells us that Redwood, of course, uses Prisma. The src/ dir stores the bulk of our logic.
  • directives/: Stores our graphql schema directives.
  • functions/: Here are the necessary lambda functions so we can deploy our app to a serverless cloud solution (remember STRONG opinions?).
  • graphql/: Here reside our gql schemas, which can be generated automatically from our db schema.
  • lib/: We can keep our more generic helper modules here.
  • services/: If we generate a page, we’ll have a services/ directory, which will hold our actual business logic.

This nicely maps to a layered architecture, where the GraphQL resolvers function as our controller layer. We have our services, and we can either create a repository or DAL layer on top of Prisma, or, if we want to keep it simple, use it as our data access tool straight away.

So far so good. Let’s move to the front-end.

my-redwood-app
├── web
│   ├── jest.config.js
│   ├── package.json
│   ├── public
│   │   ├── favicon.png
│   │   ├── README.md
│   │   └── robots.txt
│   ├── src
│   │   ├── App.tsx
│   │   ├── components
│   │   ├── index.css
│   │   ├── index.html
│   │   ├── layouts
│   │   ├── pages
│   │   │   ├── FatalErrorPage
│   │   │   │   └── FatalErrorPage.tsx
│   │   │   └── NotFoundPage
│   │   │       └── NotFoundPage.tsx
│   │   └── Routes.tsx
│   └── tsconfig.json
...

From the config file and the package.json, we can deduce we’re in a different workspace. The directory layout and file names also show us that this is not merely a repackaged Next.js app but something completely Redwood specific.

Redwood comes with its own router, which is heavily inspired by React Router. I found this a bit annoying, as the directory-structure-based one in Next.js feels a lot more convenient, in my opinion.

However, a downside of Redwood is that it does not support server-side rendering, only static site generation. Right, SSR is its own can of worms, and while currently you probably want to avoid it even when using Next, with the introduction of Server Components this might soon change, and it will be interesting to see how Redwood will react (pun not intended).

On the other hand, Next.js is notorious for the hacky way you need to use layouts with it (which will soon change though), while Redwood handles them as you’d expect. In Routes.tsx, you simply need to wrap your Routes in a Set block to tell Redwood what layout you want to use for a given route, and never think about it again.

import { Router, Route, Set } from "@redwoodjs/router";
import BlogLayout from "src/layouts/BlogLayout/";

const Routes = () => {
  return (
    <Router>
      <Route path="/login" page={LoginPage} name="login" />
      <Set wrap={BlogLayout}>
        <Route path="/article/{id:Int}" page={ArticlePage} name="article" />
        <Route path="/" page={HomePage} name="home" />
      </Set>
      <Route notfound page={NotFoundPage} />
    </Router>
  );
};

export default Routes;

Notice that you don’t need to import the page components, as it is handled automatically. Why can’t we also auto-import the layouts though, as for example Nuxt 3 would? Beats me.

Another thing to note is the /article/{id:Int} part. Gone are the days when you always had to convert your integer ids manually when reading them from a path variable, as Redwood can convert them automatically for you, given you provide the necessary type hint.

Now’s a good time to take a look at SSG. The NotFoundPage probably doesn’t have any dynamic content, so we can generate it statically. Just add prerender, and you’re good.

const Routes = () => {
  return (
    <Router>
      ...
      <Route notfound page={NotFoundPage} prerender />
    </Router>
  );
};

export default Routes;

You can also tell Redwood that some of your pages require authentication. Unauthenticated users should be redirected if they try to request it.

import { Private, Router, Route, Set } from "@redwoodjs/router";
import BlogLayout from "src/layouts/BlogLayout/";

const Routes = () => {
  return (
    <Router>
      <Route path="/login" page={LoginPage} name="login" />
      <Private unauthenticated="login">
        <Set wrap={PostsLayout}>
          <Route
            path="/admin/posts/new"
            page={PostNewPostPage}
            name="newPost"
          />
          <Route
            path="/admin/posts/{id:Int}/edit"
            page={PostEditPostPage}
            name="editPost"
          />
        </Set>
      </Private>
      <Set wrap={BlogLayout}>
        <Route path="/article/{id:Int}" page={ArticlePage} name="article" />
        <Route path="/" page={HomePage} name="home" />
      </Set>
      <Route notfound page={NotFoundPage} />
    </Router>
  );
};

export default Routes;

Of course, you need to protect your mutations and queries, too. So make sure to annotate them with the pre-generated @requireAuth directive.
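To illustrate, a generated SDL file marks its operations with the directives roughly like this (a sketch based on the directives listed in the api/src/directives directory above; the Post type and fields are just placeholders):

export const schema = gql`
  type Post {
    id: Int!
    title: String!
  }

  type Query {
    # only authenticated users may list posts
    posts: [Post!]! @requireAuth
  }

  type Mutation {
    # explicitly public mutation, e.g. for a contact form
    createPost(title: String!): Post! @skipAuth
  }
`;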

Another nice thing in Redwood is that you might not want to use a local auth strategy but rather outsource the problem of user management to an authentication provider, like Auth0 or Netlify-Identity. Redwood’s CLI can install the necessary packages and generate the required boilerplate automatically.

What looks strange, however, at least with local auth, is that the client makes several roundtrips to the server to get the token. More specifically, the server will be hit for each currentUser or isAuthenticated call.

Frontend goodies in Redwood

There are two things that I really loved about working with Redwood: Cells and Forms.

A cell is a component that fetches and manages its own data and state. You define the queries and mutations it will use, and then export a function for rendering the Loading, Empty, Failure, and Success states of the component. Of course, you can use the generator to create the necessary boilerplate for you.

A generated cell looks like this:

import type { ArticlesQuery } from "types/graphql";
import type { CellSuccessProps, CellFailureProps } from "@redwoodjs/web";

export const QUERY = gql`
  query ArticlesQuery {
    articles {
      id
    }
  }
`;

export const Loading = () => <div>Loading...</div>;

export const Empty = () => <div>Empty</div>;

export const Failure = ({ error }: CellFailureProps) => (
  <div style={{ color: "red" }}>Error: {error.message}</div>
);

export const Success = ({ articles }: CellSuccessProps<ArticlesQuery>) => {
  return (
    <ul>
      {articles.map((item) => {
        return <li key={item.id}>{JSON.stringify(item)}</li>;
      })}
    </ul>
  );
};

Then you just import and use it as you would any other component, for example, on a page.

import ArticlesCell from "src/components/ArticlesCell";

const HomePage = () => {
  return (
    <>
      <MetaTags title="Home" description="Home page" />
      <ArticlesCell />
    </>
  );
};

export default HomePage;

However! If you use SSG on pages with cells — or any dynamic content, really — only their loading state will get pre-rendered, which is not much help. That’s right, no getStaticProps for you if you go with Redwood.

The other somewhat nice thing about Redwood is the way it eases form handling, though the way they frame it leaves a bit of a bad taste in my mouth. But first, the pretty part.

import { Form, FieldError, Label, TextField } from "@redwoodjs/forms";

const ContactPage = () => {
  return (
    <>
      <Form config={{ mode: "onBlur" }}>
        <Label name="email" errorClassName="error">
          Email
        </Label>
        <TextField
          name="email"
          validation={{
            required: true,
            pattern: {
              value: /^[^@]+@[^.]+\..+$/,
              message: "Please enter a valid email address",
            },
          }}
          errorClassName="error"
        />
        <FieldError name="email" className="error" />
      </Form>
    </>
  );
};

The TextField component’s validation attribute expects an object, with a pattern against which the provided input value can be validated.

The errorClassName makes it easy to set the style of the text field and its label in case the validation fails, e.g. turning it red. The validation’s message will be printed in the FieldError component. Finally, the config={{ mode: 'onBlur' }} tells the form to validate each field when the user leaves it.

The only thing that spoils the joy is the fact that this pattern is eerily similar to the one provided by Phoenix. Don’t get me wrong. It is perfectly fine, even virtuous, to copy what’s good in other frameworks. But I got used to paying homage when it’s due. Of course, it’s totally possible that the author of the tutorial did not know about the source of inspiration for this pattern. If that’s the case, let me know, and I’m happy to open a pull request to the docs, adding that short little sentence of courtesy.

But let’s continue and take a look at the whole working form.

import { MetaTags, useMutation } from "@redwoodjs/web";
import { toast, Toaster } from "@redwoodjs/web/toast";
import {
  FieldError,
  Form,
  FormError,
  Label,
  Submit,
  SubmitHandler,
  TextAreaField,
  TextField,
  useForm,
} from "@redwoodjs/forms";

import {
  CreateContactMutation,
  CreateContactMutationVariables,
} from "types/graphql";

const CREATE_CONTACT = gql`
  mutation CreateContactMutation($input: CreateContactInput!) {
    createContact(input: $input) {
      id
    }
  }
`;

interface FormValues {
  name: string;
  email: string;
  message: string;
}

const ContactPage = () => {
  const formMethods = useForm();

  const [create, { loading, error }] = useMutation<
    CreateContactMutation,
    CreateContactMutationVariables
  >(CREATE_CONTACT, {
    onCompleted: () => {
      toast.success("Thank you for your submission!");
      formMethods.reset();
    },
  });

  const onSubmit: SubmitHandler<FormValues> = (data) => {
    create({ variables: { input: data } });
  };

  return (
    <>
      <MetaTags title="Contact" description="Contact page" />

      <Toaster />
      <Form
        onSubmit={onSubmit}
        config={{ mode: "onBlur" }}
        error={error}
        formMethods={formMethods}
      >
        <FormError error={error} wrapperClassName="form-error" />

        <Label name="email" errorClassName="error">
          Email
        </Label>
        <TextField
          name="email"
          validation={{
            required: true,
            pattern: {
              value: /^[^@]+@[^.]+\..+$/,
              message: "Please enter a valid email address",
            },
          }}
          errorClassName="error"
        />
        <FieldError name="email" className="error" />

        <Submit disabled={loading}>Save</Submit>
      </Form>
    </>
  );
};

export default ContactPage;

Yeah, that’s quite a mouthful. But this whole thing is necessary if we want to properly handle submissions and errors returned from the server. We won’t dive deeper into it now, but if you’re interested, make sure to take a look at Redwood’s really nicely written and thorough tutorial.

Now compare this with what it would look like in Phoenix LiveView.

<div>
  <.form
    let={f}
    for={@changeset}
    id="contact-form"
    phx-target={@myself}
    phx-change="validate"
    phx-submit="save">

    <%= label f, :title %>
    <%= text_input f, :title %>
    <%= error_tag f, :title %>

    <div>
      <button type="submit" phx-disable-with="Saving...">Save</button>
    </div>
  </.form>
</div>

A lot easier to see through while providing almost the same functionality. Yes, you’d be right to call me out for comparing apples to oranges. One is a template language, while the other is JSX. Much of the logic in a LiveView happens in an elixir file instead of the template, while JSX is all about combining the logic with the view. However, I’d argue that an ideal full-stack framework should allow me to write the validation code once for inputs, then let me simply provide the slots in the view to insert the error messages into, and allow me to set up the conditional styles for invalid inputs and be done with it. This would provide a way to write cleaner code on the front-end, even when using JSX. You could say this is against the original philosophy of React, and my argument merely shows I have a beef with it. And you’d probably be right to do so. But this is an opinion article about opinionated frameworks, after all, so that’s that.

The people behind RedwoodJS

Credit, where credit is due.

Redwood was created by Tom Preston-Werner (GitHub co-founder and former CEO), Peter Pistorius, David Price, and Rob Cameron. Moreover, its core team currently consists of 23 people. So if you’re afraid to try out newish tools because you may never know when their sole maintainer gets tired of the struggles of working on a FOSS tool in their free time, you can rest assured: Redwood is here to stay.

Redwood: Honorable mentions

Redwood

  • also comes bundled with Storybook,
  • provides the must-have graphiql-like GraphQL Playground,
  • provides accessibility features out of the box, like the RouteAnnouncement, SkipNavLink, SkipNavContent and RouteFocus components,
  • of course it automatically splits your code by pages.

The last one is somewhat expected in 2022, while the accessibility features would deserve their own post in general. Still, this one is getting too long already, and we haven’t even mentioned the other contender yet.

Let’s see BlitzJS

Blitz is built on top of Next.js, is inspired by Ruby on Rails, and provides a “Zero-API” data layer abstraction. No GraphQL, pays homage to predecessors… seems like we’re off to a good start. But does it live up to my high hopes? Sort of.

A troubled past

Compared to Redwood, Blitz’s tutorial and documentation are a lot less thorough and polished. It also lacks several convenience features: 

  • It does not really autogenerate host-specific config files.
  • Blitz cannot run a simple CLI command to set up auth providers.
  • It does not provide accessibility helpers.
  • Its code generator does not take into account the model when generating pages.

Blitz’s initial commit was made in February 2020, a bit more than half a year after Redwood’s in June 2019, and while Redwood has a sizable number of contributors, Blitz’s core team consists of merely 2-4 people. In light of all this, I think they deserve praise for their work.

But that’s not all. If you open up their docs, you’ll be greeted with a banner on top announcing a pivot.

While Blitz originally included Next.js and was built around it, Brandon Bayer and the other developers felt it was too limiting. Thus they forked it, which turned out to be a pretty misguided decision. It quickly became obvious that maintaining the fork would take a lot more effort than the team could invest.

All is not lost, however. The pivot aims to turn the initial value proposition “JavaScript on Rails with Next” into “JavaScript on Rails, bring your own Front-end Framework”. 

And I can’t tell you how relieved I am that this recreation of Rails won’t force me to use React. 

Don’t get me wrong. I love the inventiveness that React brought to the table. Front-end development has come a long way in the last nine years, thanks to React. Other frameworks like Vue and Svelte might lag behind in following the new concepts, but this also means they have more time to polish those ideas even further and provide better DevX. Or at least I find them a lot easier to work with, without ever being afraid that my client-side code’s performance would grind to a standstill.

All in all, I find this turn of events a lucky blunder.

How to create a Blitz app

You’ll need to install Blitz globally (run yarn global add blitz or npm install -g blitz --legacy-peer-deps) before you create a Blitz app. That’s possibly my main woe when it comes to Blitz’s design, as this way, you cannot lock your project across all contributors to a given Blitz CLI version and increment it when you see fit, as Blitz will automatically update itself from time to time.

Once blitz is installed, run

$ blitz new my-blitz-app

It will ask you 

  • whether you want to use TS or JS, 
  • if it should include a DB and Auth template (more on that later), 
  • if you want to use npm, yarn or pnpm to install dependencies, 
  • and if you want to use React Final Form or React Hook Form. 

Once you have answered all its questions, the CLI starts to download half of the internet, as is customary. Grab something to drink, have lunch, finish your workout session, or whatever you do to pass the time, and when you’re done, you can fire up the server by running

$ blitz dev

And, of course, you’ll see the app running and the UI telling you to run

$ blitz generate all project name:string

But before we do that, let’s look around in the project directory.

my-blitz-app/
├── app/
├── db/
├── mailers/
├── node_modules/
├── public/
├── test/
├── integrations/
├── babel.config.js
├── blitz.config.ts
├── blitz-env.d.ts
├── jest.config.ts
├── package.json
├── README.md
├── tsconfig.json
├── types.ts
└── yarn.lock

Again, we can see the usual suspects: config files, node_modules, test, and the likes. The public directory — to no one’s surprise — is the place where you store your static assets. Test holds your test setup and utils. Integrations is for configuring your external services, like a payment provider or a mailer. Speaking of the mailer, that is where you can handle your mail-sending logic. Blitz generates a nice template with informative comments for you to get started, including a forgotten password email template.

As you’d probably guessed, the app and db directories are the ones where you have the bulk of your app-related code. Now’s the time to do as the generated landing page says and run blitz generate all project name:string.

Say yes, when it asks you if you want to migrate your database and give it a descriptive name like add project.

Now let’s look at the db directory.

my-blitz-app/
└── db/
    ├── db.sqlite
    ├── db.sqlite-journal
    ├── index.ts
    ├── migrations/
    │   ├── 20220610075814_initial_migration/
    │   │   └── migration.sql
    │   ├── 20220610092949_add_project/
    │   │   └── migration.sql
    │   └── migration_lock.toml
    ├── schema.prisma
    └── seeds.ts

The migrations directory is handled by Prisma, so it won’t surprise you if you’re already familiar with it. If not, I highly suggest trying it out on its own before you jump into using either Blitz or Redwood, as they heavily and transparently rely on it.

Just like in Redwood’s db dir, we have our schema.prisma, and our sqlite db, so we have something to start out with. But we also have a seeds.ts and index.ts. If you take a look at the index.ts file, it merely re-exports Prisma with some enhancements, while the seeds.ts file kind of speaks for itself.

Now’s the time to take a closer look at our schema.prisma.

// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema

datasource db {
  provider = "sqlite"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}

// --------------------------------------

model User {
  id             Int      @id @default(autoincrement())
  createdAt      DateTime @default(now())
  updatedAt      DateTime @updatedAt
  name           String?
  email          String   @unique
  hashedPassword String?
  role           String   @default("USER")

  tokens   Token[]
  sessions Session[]
}

model Session {
  id                 Int       @id @default(autoincrement())
  createdAt          DateTime  @default(now())
  updatedAt          DateTime  @updatedAt
  expiresAt          DateTime?
  handle             String    @unique
  hashedSessionToken String?
  antiCSRFToken      String?
  publicData         String?
  privateData        String?

  user   User? @relation(fields: [userId], references: [id])
  userId Int?
}

model Token {
  id          Int      @id @default(autoincrement())
  createdAt   DateTime @default(now())
  updatedAt   DateTime @updatedAt
  hashedToken String
  type        String
  // See note below about TokenType enum
  // type        TokenType
  expiresAt   DateTime
  sentTo      String

  user   User @relation(fields: [userId], references: [id])
  userId Int

  @@unique([hashedToken, type])
}

// NOTE: It's highly recommended to use an enum for the token type
//       but enums only work in Postgres.
//       See: https://blitzjs.com/docs/database-overview#switch-to-postgre-sql
// enum TokenType {
//   RESET_PASSWORD
// }

model Project {
  id        Int      @id @default(autoincrement())
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
  name      String
}

As you can see, Blitz starts out with models to be used with a fully functional User management. Of course, it also provides all the necessary code in the app scaffold, meaning that the least amount of logic is abstracted away, and you are free to modify it as you see fit.

Below all the user-related models, we can see the Project model we created with the CLI, with automatically added id, createdAt, and updatedAt fields. One of the things that I prefer in Blitz over Redwood is that its CLI mimics Phoenix, and you can really create everything from the command line end-to-end.

This really makes it easy to move quickly, as less context switching happens between the code and the command line. Well, it would if it actually worked, as while you can generate the schema properly, the generated pages, mutations, and queries always use name: string, and disregard the entity type defined by the schema, unlike Redwood. There’s already an open pull request to fix this, but the Blitz team understandably has been focusing on getting v2.0 done instead of patching up the current stable branch.

That’s it for the db, let’s move on to the app directory.

my-blitz-app
└── app
    ├── api/
    ├── auth/
    ├── core/
    ├── pages/
    ├── projects/
    └── users/

The core directory contains Blitz goodies, like a predefined and parameterized Form (without Redwood’s or Phoenix’s niceties though), a useCurrentUser hook, and a Layouts directory, as Blitz made it easy to persist layouts between pages, which will be rendered completely unnecessary by the upcoming Next.js Layouts. This reinforces further that the decision to ditch the fork and pivot to a toolkit was probably a difficult but necessary one.

The auth directory contains the fully functional authentication logic we talked about earlier, with all the necessary database mutations such as signup, login, logout, and forgotten password, with their corresponding pages and a signup and login form component. The getCurrentUser query got its own place in the users directory all by itself, which makes perfect sense.

And we got to the pages and projects directories, where all the action happens.

Blitz creates a directory to store database queries, mutations, input validations (using zod), and model-specific components like create and update forms in one place. You will need to fiddle around in these a lot, as you will need to update them according to your actual model. This is nicely laid out in the tutorial, though… Be sure to read it, which is more than I did when I first tried Blitz out.

my-blitz-app/
└── app/
    └── projects/
        ├── components/
        │   └── ProjectForm.tsx
        ├── mutations/
        │   ├── createProject.ts
        │   ├── deleteProject.ts
        │   └── updateProject.ts
        └── queries/
            ├── getProjects.ts
            └── getProject.ts

Whereas the pages directory won’t be of any surprise if you’re already familiar with Next.

my-blitz-app/
└── app/
    └── pages/
        ├── projects/
        │   ├── index.tsx
        │   ├── new.tsx
        │   ├── [projectId]/
        │   │   └── edit.tsx
        │   └── [projectId].tsx
        ├── 404.tsx
        ├── _app.tsx
        ├── _document.tsx
        ├── index.test.tsx
        └── index.tsx

A bit of explanation if you haven’t tried Next out yet: Blitz uses file-system-based routing just like Next. The pages directory is your root, and the index file is rendered when the path corresponding to a given directory is accessed. Thus when the root path is requested, pages/index.tsx will be rendered, accessing /projects will render pages/projects/index.tsx, /projects/new will render pages/projects/new.tsx and so on. 

If a filename is enclosed in []-s, it means that it corresponds to a route param. Thus /projects/15 will render pages/projects/[projectId].tsx. Unlike in Next, you access the param’s value within the page using the useParam(name: string, type?: string) hook. To access the query object, use useRouterQuery(name: string). To be honest, I never really understood why Next needs to mesh together the two.

When you generate pages using the CLI, all pages are protected by default. To make a page public, simply delete the [PageComponent].authenticate = true line. Leaving it in place will throw an AuthenticationError whenever an unauthenticated user visits the page, so if you’d rather redirect unauthenticated users to your login page, you probably want to use [PageComponent].authenticate = {redirectTo: '/login'} instead.

In your queries and mutations, you can use the ctx context arguments value to call ctx.session.$authorize or resolver.authorize in a pipeline to secure your data.
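For example, a query secured with the resolver pipeline might look roughly like this (a sketch modeled on Blitz’s generated queries, using the Project model we scaffolded above):

// app/projects/queries/getProjects.ts (sketch)
import { resolver } from "blitz";
import db from "db";

export default resolver.pipe(
  // throws an error if the session does not belong to a logged-in user
  resolver.authorize(),
  async () => {
    return db.project.findMany();
  }
);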

Finally, if you still need a proper HTTP API, you can create Express-style handler functions, using the same file-system routing as for your pages.
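A minimal sketch of such a handler, assuming a hypothetical app/api/health.ts file (the Next.js-style req/res signature is what Blitz builds on):

// app/api/health.ts (hypothetical)
export default function handler(req, res) {
  // responds to GET /api/health with a JSON payload
  res.status(200).json({ status: "ok" });
}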

A possible bright future

While Blitz had a troubled past, it might have a bright future. It is still definitely in the making and not ready for widespread adoption. The idea of a framework-agnostic full-stack JavaScript toolkit is a versatile one, and it is reinforced by a good starting point: the current stable version of Blitz. I’m looking forward to seeing how the toolkit evolves over time.

Redwood vs. Blitz: Comparison and Conclusion

I set out to see whether we have a Rails, or even better, Phoenix equivalent in JavaScript. Let’s see how they measured up.

1. CLI code generator

Redwood’s CLI gets the checkmark on this one, as it is versatile and does what it needs to do. The only small drawback is that the model has to be written into the Prisma schema file first and cannot be generated from the command line.

Blitz’s CLI is still in the making, but that’s true about Blitz in general, so it’s not fair to judge it by what’s ready, only by what it will be. In that sense, Blitz would win if it were fully functional (or will, once it is), as it can really generate pages end-to-end.

Verdict: Tie

2. A powerful ORM

That’s a short one. Both use Prisma, which is a powerful enough ORM.

Verdict: Tie

3. Server side rendered but interactive pages

Well, in today’s ecosystem, that might be wishful thinking. Even in Next, SSR is something you should avoid, at least until we have Server Components in React.

But which one mimics this behavior the best?

Redwood does not try to look like a Rails replacement. It has clear boundaries between front-end and back-end, demarcated by yarn workspaces. It definitely provides nice conventions and — to keep it charitable — nicely reinvented the right parts of Phoenix’s form handling. However, strictly relying on GraphQL feels like a bit of an overkill. For the small apps we usually start out with when opting for a full-stack framework, it definitely feels awkward.

Redwood is also React exclusive, so if you prefer using Vue, Svelte or Solid, then you have to wait until someone reimplements Redwood for your favorite framework.

Blitz follows the Rails way, but the controller layer is a bit more abstract. This is understandable, though, as using Next’s file-system-based routing, a lot of things that made sense for Rails do not make sense for Blitz. And in general, it feels more natural than using GraphQL for everything. In the meantime, becoming framework agnostic makes it even more versatile than Redwood.

Moreover, Blitz is on its way to becoming framework agnostic, so even if you’d never touch React, you’ll probably be able to see its benefits in the near future.

But to honor the original criterion: Redwood provides client-side rendering and SSG (kind of), while Blitz provides SSR on top of the previous two.

Verdict: Die-hard GraphQL fans will probably want to stick with Redwood. But according to my criteria, Blitz hands down wins this one.

4. API

Blitz auto-generates an API for data access that you can use if you want to, but you can also explicitly write handler functions. A little bit awkward, but the possibility is there.

Redwood maintains a hard separation between front-end and back-end, so you trivially have an API to begin with. Even if it’s a GraphQL API, which might just be way too much to engineer for your needs.

Verdict: Tie (TBH, I feel like they both suck at this the same amount.)

Bye now!

In summary, Redwood is a production-ready, React+GraphQL-based full-stack JavaScript framework made for the edge. It does not follow the patterns laid down by Rails at all, except for being highly opinionated. It is a great tool to use if you share its sentiment, but my opinion greatly differs from Redwood’s on what makes development effective and enjoyable.

Blitz, on the other hand, follows in the footsteps of Rails and Next, and is becoming a framework agnostic, full-stack toolkit that eliminates the need for an API layer.

I hope you found this comparison helpful. Leave a comment if you agree with my conclusion and share my love for Blitz. If you don’t, argue with the enlightened ones… they say controversy boosts visitor numbers.

JavaScript Interview Questions & Answers

JavaScript is widely used by web developers and is supported by all major web browsers.

If you are looking for a job as a web developer, you will most likely be asked to answer some questions about JavaScript during your interview. In this article, we have compiled a list of some of the most vital JavaScript interview questions, along with the answers you should definitely know.

We hope this article will help you prepare for your next interview and ace it!

What is the difference between Java & JavaScript?

Java and JavaScript are both widely used, however, there are some key differences between the two. Java is a statically typed, compiled language, meaning you need to use a compiler to type check and convert your code into a bundle that can run in supported runtime environments. On the other hand, JavaScript is a dynamically typed and JIT-compiled / interpreted language, meaning you don’t need to use a compiler before shipping your code.

What are the data types supported by JavaScript?

Primitive data types:

  • Booleans: can be true or false
  • null, undefined: are types that are values in and of themselves
  • Numbers: stored as 64-bit floating-point values
  • Strings: set of Unicode characters
  • BigInt: used to precisely represent large integers 
  • Symbol: a unique and immutable primitive type that is usually used for private object property keys

Object: collections of properties that can contain any combination of data types. Anything that is not a primitive type is represented as an object in JavaScript, including functions and arrays.

What is the difference between “==” and “===” operators in JavaScript?

The “==” and “===” operators are used for comparison in JavaScript. The “===” operator will compare two values and evaluate to true if they are strictly equal, while the “==” operator will also compare two values but will perform a type conversion if they are not of the same type.

For example, if you were to compare the number 1 with the string “1”, the “==” operator would return true because it first converts the operands to the same type, while the “===” operator would return false because they are not of the same type. It is important to note that, in some cases, using the “==” operator may give you unexpected results due to type coercion.
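A few quick examples you can run in any browser console or Node.js REPL:

console.log(1 == "1");  // true  - "1" is coerced to the number 1 first
console.log(1 === "1"); // false - different types, no coercion
console.log(0 == "");   // true  - a classic surprise of loose equality
console.log(0 === "");  // false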

Is JavaScript a case-sensitive language?

JavaScript is a case-sensitive language. This means that the language recognizes differences between uppercase and lowercase letters. For example, the variables “myVar” and “MyVar” would be considered two different variables. This can be important when writing code, as even a tiny change in the case of a letter can result in an error. In general, it is good practice to use consistent casing throughout your code to help avoid such mistakes. By convention, variables are named using camelCase, classes are named with PascalCase, and constants are written as SNAKE_CASE_ALL_CAPS.

How can you create an object in JavaScript?

There are at least four ways to create objects in JavaScript: using an Object literal, constructor function, class, or with the Object.create() method.

The Object literal, class, and constructor function all create objects which inherit the root object’s prototype.

Object literal:

const obj = {
 foo: 'bar',
 bar: 42,
 baz: true
};

Constructor function:

function Constructor(foo, bar, baz) {
 this.foo = foo;
 this.bar = bar;
 this.baz = baz;
}

const obj = new Constructor('bar', 42, true);

Class:

class MyClass {
 foo = 'bar';
 bar = 42;
 baz = true;
}

const obj = new MyClass();

Object.create:

Object.create is a lower level JS primitive that can mimic the behavior of all of the above methods and create objects without prototypal parents and with special property configurations. Mostly framework authors and low-level developers use it, and it’s rarely found in application code.

const obj = Object.create(null, {
 foo: { value: 'bar' },
 bar: { value: 42 },
 baz: { value: true}
});

What is the difference between let and var in JavaScript?

In JavaScript, the keywords let and var can be used to declare variables. However, there are some essential differences between the two. Variables declared with var are function-scoped: they are accessible anywhere within their containing function, including on lines before the declaration and outside the block in which they were declared. In contrast, variables declared with let are block-scoped: they are only accessible within the block in which they were declared. This can be useful for preventing variable collisions or for creating private variables.

In addition, variables declared with let are hoisted to the top of their containing scope just like variables declared with var, but while accessing a var before its declaration evaluates to undefined, accessing a let before its declaration throws a ReferenceError (this is the Temporal Dead Zone). As a result, let can help you avoid unexpected behavior when accessing variables before they have been initialized. For these reasons, it is generally considered best practice to use let when declaring variables in JavaScript.
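A short example that demonstrates both the scoping and the hoisting differences:

function demo() {
  console.log(a); // undefined - the var declaration is hoisted and initialized
  // console.log(b); // ReferenceError - b is in the temporal dead zone here
  var a = 1;
  let b = 2;

  if (true) {
    var c = 3; // function-scoped
    let d = 4; // block-scoped
  }

  console.log(c); // 3
  // console.log(d); // ReferenceError: d is not defined
}

demo();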

What is const in JavaScript?

const is a keyword that works very similarly to let in JavaScript, but a variable created with const cannot be reassigned. Note that this does not make the value immutable: the properties of an object held in a const variable can still be changed.
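For example:

const limit = 10;
// limit = 20; // TypeError: Assignment to constant variable.

const config = { retries: 3 };
config.retries = 5; // allowed - const prevents reassignment, not mutation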

How can you create an Array in JavaScript?

Creating an Array in JavaScript is relatively straightforward. The Array class has a constructor that can be used to initialize an array with a set of values. For example, the following code creates an array with three elements:

const myArray = new Array(1,2,3);

Be careful though! While passing multiple numbers to new Array will work as expected, passing one single number will create an empty array of that length.

const myArray = new Array(5)
console.log(myArray)

// prints [<5 empty items>] which is just [undefined, undefined, undefined, undefined, undefined]

But as Arrays are variable length by default, if you just want to create an empty array and later push elements to it, you can omit the length.

const myArray = new Array();
myArray.push(1);
myArray.push(2);
myArray.push(3);

In both cases, the resulting array will be of type object and will have a length property that indicates the number of elements in the array. But usually you would do it using an array literal:

const array = [1,2,3]

What is the purpose of the this keyword in JavaScript?

The this keyword in JavaScript has a variety of uses. It can refer to different objects in different situations: the value of this is determined by how a function is called (runtime binding). It can’t be set by assignment during execution, and it may be different each time the function is called. The three most commonly used methods for controlling this binding are Function.prototype.bind(), Function.prototype.call(), and Function.prototype.apply(). In addition, ES2015 introduced arrow functions, which don’t provide their own this binding (they retain the this value of the enclosing lexical context).
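A small example showing how the call site determines this, and how bind() can pin it down:

const counter = {
  count: 0,
  increment() {
    this.count += 1; // `this` is `counter` when called as counter.increment()
  },
};

counter.increment();

const detached = counter.increment;
// detached(); // would fail - `this` is undefined in strict mode at this call site

const bound = counter.increment.bind(counter); // explicitly fix `this`
bound();

console.log(counter.count); // 2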

What is a callback in JavaScript?

A callback is a function that is passed as an argument to another function and invoked by that function. Callbacks are used for inversion of control: it is not you who decides when and how exactly your logic will run, but the library or framework that receives the callback.
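For example (the fetchUser function below is just a made-up stand-in for any asynchronous library call):

function fetchUser(id, callback) {
  // simulate an asynchronous lookup; the callee decides when to call you back
  setTimeout(() => callback(null, { id, name: "Ada" }), 100);
}

fetchUser(1, (err, user) => {
  if (err) return console.error(err);
  console.log(user.name); // "Ada"
});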

What is Closure in JavaScript?

In JavaScript, you can nest functions within functions. Variables declared in the outer function are available to the inner function, even if the inner one is executed later in time. Thus the outer function’s variables are part of the inner function’s closure.
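A classic illustration:

function makeCounter() {
  let count = 0; // part of the inner function's closure

  return function () {
    count += 1;
    return count;
  };
}

const next = makeCounter();
console.log(next()); // 1
console.log(next()); // 2 - `count` survives between calls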

How can a JavaScript code be imported in an HTML file?

JavaScript, in essence, is used to add interactivity to HTML files. This can be done by enclosing the code within script tags, or by providing the location of the script file as a URL passed to the script tag’s src attribute. Once the code is included in the HTML file, it will be executed when the file is loaded in a web browser.

Should you wrap the entire content of a JavaScript source file in a function block? Why or why not?

Before modules, it was considered best practice to wrap closely related functionality in function blocks, or even to wrap a file’s entire code this way. By doing this, you could ensure that all of the variables and functions defined in the file are local to that function and don’t pollute the global namespace. However, since the introduction of modules and modern bundlers, this practice has become obsolete.
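The pattern in question is the immediately invoked function expression (IIFE):

// Pre-modules pattern: keep everything out of the global scope
(function () {
  var secret = 42; // not visible outside this function

  console.log("module initialized");
})();

// console.log(secret); // ReferenceError: secret is not defined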

What is memoization in JavaScript?

Memoization is an optimization technique used to speed up programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again. When a memoized function is called with the same arguments, the previous result is simply looked up and returned, without needing to re-execute the entire function. While memoization is a powerful optimization tool, it is important to use it sparingly, as it can lead to increased memory usage and code that is difficult to understand.
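A minimal memoization helper for single-argument functions might look like this:

function memoize(fn) {
  const cache = new Map();

  return function (arg) {
    if (cache.has(arg)) return cache.get(arg); // cached result, no recomputation
    const result = fn(arg);
    cache.set(arg, result);
    return result;
  };
}

const slowSquare = (n) => {
  for (let i = 0; i < 1e7; i++); // simulate expensive work
  return n * n;
};

const fastSquare = memoize(slowSquare);
console.log(fastSquare(9)); // 81 - computed
console.log(fastSquare(9)); // 81 - looked up from the cache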

What are classes in JavaScript?

Classes in JavaScript are templates for creating objects. A class definition can specify the kind of data that an object instance of that class type will contain, and it can also specify the methods (functions) that can be invoked on instances of that type. However, classes in JavaScript are merely syntactic sugar over constructor functions.

In addition, a class can specify inheritance relationships, which are syntactic sugar over prototypal inheritance.
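For example:

class Animal {
  constructor(name) {
    this.name = name;
  }

  speak() {
    return `${this.name} makes a sound`;
  }
}

class Dog extends Animal {
  speak() {
    return `${this.name} barks`; // overrides the inherited method
  }
}

console.log(new Dog("Rex").speak()); // "Rex barks"

// Under the hood, this is still prototypal inheritance:
console.log(Object.getPrototypeOf(Dog.prototype) === Animal.prototype); // true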

What is the use of promises in JavaScript?

A promise in JavaScript is an object that represents the eventual result of an asynchronous operation. Promises are used in many applications to handle asynchronous events, such as server responses and timers. Promises can be chained together to create complex sequences of events, and they can be combined with other programming constructs, such as error handling. Promises have become an essential part of many web applications; however, since the introduction of async/await, it is considered best practice to use that instead of manual Promise chaining.
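A short example showing both styles:

function delay(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Manual chaining...
delay(100)
  .then(() => console.log("done (then)"))
  .catch((err) => console.error(err));

// ...and the now-preferred async/await form of the same thing
async function main() {
  try {
    await delay(100);
    console.log("done (await)");
  } catch (err) {
    console.error(err);
  }
}

main();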

What are generator functions in JavaScript?

Generator functions are a type of function that does not follow the usual run-to-completion execution pattern but can be paused and resumed. When a generator function is called, it doesn’t run the code inside the function immediately. Instead, it returns a “generator” object that can be used to control the execution of the code inside the function. The generator function can use the yield keyword to cede control to its caller, which in turn resumes the execution of the generator by calling the next() method on the generator object.
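For example:

function* idGenerator() {
  let id = 1;
  while (true) {
    yield id++; // pause here until next() is called again
  }
}

const gen = idGenerator();
console.log(gen.next().value); // 1
console.log(gen.next().value); // 2
console.log(gen.next().value); // 3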

Thanks for reading! We hope you found these questions and answers helpful. If you’re preparing for a JavaScript interview, be sure to brush up on your skills so you can ace the interview and land the job. Good luck!

The fs Module in Node.js: A Short Guide to File System Interaction

Node.js is a powerful platform that lets you build fast, scalable network applications. One of the modules that comes with Node is fs, which provides access to the file system. In this article, we will give an overview of what the fs module does and how you can use it to interact with your files. We will also provide a tutorial on how to use some of its more common functions.

What does the fs module do?

The fs module provides a lot of functionality for interacting with the file system. Some of the more common functions that you will use are writeFile() / writeFileSync() and readFile() / readFileSync(). These functions let you write to and read from files, respectively.

So now that we briefly outlined what the fs module does, let’s take a look at how you can use it in your own applications. In our tutorial, we will show you how to write to and read from files, as well as get additional information about them.

How to use the fs module

We will start with a file called “file.txt”, which is the file our script will write some text to. Next, we will create a file called “readfile.js” and put the following code in it:

var fs = require('fs');
var file = 'file.txt';

// write asynchronously; the callback runs once the write has finished
fs.writeFile(file, 'Hello world!', function(err) {
  if (err) {
    console.log(err);
  } else {
    console.log('The file was written successfully!');
  }
});

// read synchronously; note that this line may run before the
// asynchronous write above has completed
var contents = fs.readFileSync(file, 'utf8');

console.log(contents);

We first require the fs module. Then we create a variable, which contains the path to our “file.txt” file. Next, we use the writeFile() function to write the text “Hello world!” to disk. We pass it three parameters: the file to write to, the text to write, and a callback that is called once the write has finished (it receives an error as its first argument if anything went wrong).

The Node.js fs module provides two different functions for writing files: writeFile and writeFileSync. Both functions take a file path and data as arguments, and write the data to the specified file. However, there is a key difference between the two: writeFile is asynchronous, while writeFileSync is synchronous. This means that writeFile returns immediately, before the file has been written, and its callback is called once the write operation has completed, while writeFileSync blocks until the file has been written. As a result, writeFile allows your script to handle other tasks while the computer is busy writing the file, but writeFileSync can be easier to use if you need to be sure the file has been written before continuing, especially when bootstrapping your process. Most fs functions have a sync and an async version, just like readFile and writeFile.
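
For comparison, here is a minimal sketch of the synchronous variant – the call blocks until the data is on disk, so the subsequent read is always safe:

const fs = require('fs');

try {
  fs.writeFileSync('file.txt', 'Hello world!');
  // At this point the file is guaranteed to exist on disk.
  const contents = fs.readFileSync('file.txt', 'utf8');
  console.log(contents);
} catch (err) {
  console.error(err);
}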

If everything goes well, the callback prints “The file was written successfully!” to the console. If there is an error, it prints the error instead.

Next – still inside the callback, once the write has finished – we use the readFileSync() function to read the contents of our “file.txt” file into a variable called contents, passing 'utf8' so we get a string instead of a raw Buffer. We then log the contents of the variable to the console.

And that’s all there is to it! You can now use these same concepts to do more complex tasks with files, such as reading from multiple files at once or writing formatted data. Be sure to check out the fs module documentation for more information.

Happy coding! 🙂

Download & Update Node.js to the Latest Version! Node v18.9.0 Current / LTS v16.17.0 Direct Links

Node 16 has been the LTS version since 2021-10-26, while Node 18 became the Current version on April 19, 2022. The next LTS version, v18, is planned to take over on 2022-10-25.

In this article below, you’ll find changelogs and download / update information regarding Node.js!

Node.js LTS & Current Download for macOS:

Node.js LTS & Current Download for Windows:

For other downloads like Linux libraries, source codes, Docker images, etc., please visit https://nodejs.org/en/download/

Node.js Release Schedule:

Releases

Node.js v18 is the Current version!

Node.js 18 will be the ‘Current’ release for the next 6 months and then promoted to Long-term Support (LTS) in October 2022. Node.js 18 will be supported until April 2025.

New globally available browser-compatible APIs

fetch (experimental): In Node.js 18, an experimental global fetch API is available by default. The implementation comes from undici and is inspired by node-fetch which was originally based upon undici-fetch. The implementation strives to be as close to spec-compliant as possible, but some aspects would require a browser environment and are thus omitted. Through this addition, the following globals are made available: fetch, FormData, Headers, Request, Response. It’s possible to disable the API by supplying the --no-experimental-fetch command-line flag.
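
A minimal sketch of the new global in action – it assumes the code runs as an ES module (e.g. an .mjs file) so top-level await is available, and uses the public Node.js release index purely as an example endpoint:

// No imports needed – fetch is a global in Node.js 18 (experimental).
const response = await fetch('https://nodejs.org/dist/index.json');
const releases = await response.json();
console.log(releases[0].version); // the most recent Node.js release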

Web Streams API (experimental): Node.js now exposes the experimental implementation of the Web Streams API on the global scope. The following APIs are now globally available: ReadableStream, ReadableStreamDefaultReader, ReadableStreamBYOBReader, ReadableStreamBYOBRequest, ReadableByteStreamController, ReadableStreamDefaultController, TransformStream, TransformStreamDefaultController, WritableStream, WritableStreamDefaultWriter, WritableStreamDefaultController, ByteLengthQueuingStrategy, CountQueuingStrategy, TextEncoderStream, TextDecoderStream, CompressionStream, DecompressionStream.
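
A minimal sketch using the globally exposed streams – again assuming an ES module so for await can be used at the top level; Node’s ReadableStream supports async iteration:

const stream = new ReadableStream({
  start(controller) {
    controller.enqueue('hello');
    controller.enqueue('world');
    controller.close();
  }
});

// Consume the stream chunk by chunk.
for await (const chunk of stream) {
  console.log(chunk);
}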

Others available experimental APIs:

Test runner module (experimental)

The node:test module facilitates the creation of JavaScript tests that report results in TAP format. To access it: import test from 'node:test';
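
A minimal sketch of a test file using it – save it as test.mjs and run node test.mjs; the assertions come from the built-in node:assert module:

import test from 'node:test';
import assert from 'node:assert';

test('synchronous test', () => {
  assert.strictEqual(1 + 1, 2);
});

test('asynchronous test', async () => {
  const value = await Promise.resolve('ok');
  assert.strictEqual(value, 'ok');
});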

Build-time user-land snapshot (experimental)

Starting from Node.js 18.0.0, users can build a Node.js binary with a custom V8 startup snapshot using the --node-snapshot-main flag of the configure script. The resulting binary can deserialize the state of the heap that was initialized by the snapshot entry point at build time, so the application in the generated binary can be initialized faster.

V8 10.1

The V8 engine is updated to version 10.1, which is part of Chromium 101. Compared to the version included in Node.js 17.9.0, the following new features are included:

Node.js CURRENT v18 Changelogs

Changelog for Node Version 18.9.0 (Current)

  • doc: add daeyeon to collaborators
  • lib: add diagnostics channel for process and worker
  • os: add machine method
  • report: expose report public native apis
  • src: expose environment RequestInterrupt api
  • vm: include vm context in the embedded snapshot

Changelog for Node Version 18.8.0 (Current)

  • bootstrap: implement run-time user-land snapshots via --build-snapshot and --snapshot-blob: This patch introduces --build-snapshot and --snapshot-blob options for creating and using user land snapshots.
  • crypto: allow zero-length IKM in HKDF and in webcrypto PBKDF2, allow zero-length secret KeyObject
  • deps: upgrade npm to 8.18.0 – Adds a new npm query command
  • http: make idle http parser count configurable
  • net: add local family
  • src: print source map error source on demand

Changelog for Node Version 18.7.0 (Current)

  • doc:
    • add F3n67u to collaborators
    • deprecate coercion to integer in process.exit
    • (SEMVER-MINOR) deprecate diagnostics_channel object subscribe method
  • events:
    • (SEMVER-MINOR) expose CustomEvent on global with CLI flag
    • (SEMVER-MINOR) add CustomEvent
  • http: (SEMVER-MINOR) add drop request event for http server
  • lib: (SEMVER-MINOR) improved diagnostics_channel subscribe/unsubscribe
  • util: (SEMVER-MINOR) add tokens to parseArgs

Changelog for Node Version 18.6.0 (Current)

Experimental ESM Loader Hooks API: Node.js ESM Loader hooks now support multiple custom loaders, and composition is achieved via “chaining”: foo-loader calls bar-loader calls qux-loader (a custom loader must now signal a short circuit when intentionally not calling the next). See the ESM docs for details.

Real-world use-cases are laid out for end-users with working examples in the article Custom ESM loaders: Who, what, when, where, why, how.

Changelog for Node Version 18.5.0 (Current)

This is a security release. The following CVEs are fixed in this release:

  • CVE-2022-2097: OpenSSL – AES OCB fails to encrypt some bytes (Medium)
  • CVE-2022-32212: DNS rebinding in --inspect via invalid IP addresses (High)
  • CVE-2022-32213: HTTP Request Smuggling – Flawed Parsing of Transfer-Encoding (Medium)
  • CVE-2022-32214: HTTP Request Smuggling – Improper Delimiting of Header Fields (Medium)
  • CVE-2022-32215: HTTP Request Smuggling – Incorrect Parsing of Multi-line Transfer-Encoding (Medium)
  • CVE-2022-32222: Attempt to read openssl.cnf from /home/iojs/build/ upon startup (Medium)
  • CVE-2022-32223: DLL Hijacking on Windows (High)

Changelog for Node Version 18.4.0 (Current)

  • crypto: remove Node.js-specific webcrypto extensions, add CFRG curves to Web Crypto API
  • dns: accept 'IPv4' and 'IPv6' for family
  • report: add more heap infos in process report

Changelog for Node Version 18.3.0 (Current)

  • deps: update undici to 5.4.0
  • (SEMVER-MINOR) util: add parseArgs module
  • (SEMVER-MINOR) http: add uniqueHeaders option to request and createServer
  • deps: upgrade npm to 8.11.0
  • deps: patch V8 to 10.2.154.4
  • (SEMVER-MINOR) deps: update V8 to 10.2.154.2
  • (SEMVER-MINOR) fs: make params in writing methods optional
  • (SEMVER-MINOR) http: add uniqueHeaders option to request and createServer
  • (SEMVER-MINOR) net: add ability to reset a tcp socket
  • (SEMVER-MINOR) Revert “build: make x86 Windows support temporarily experimental”. This means 32-bit Windows binaries are back with this release.

Changelog for Node Version 18.2.0 (Current)

OpenSSL 3.0.3: This update can be treated as a security release as the issues addressed in OpenSSL 3.0.3 slightly affect Node.js 18.

  • deps: update archs files for quictls/openssl-3.0.3+quic
  • deps: upgrade openssl sources to quictls/openssl-3.0.3
  • Revert “deps: add template for generated headers”
  • deps: update undici to 5.2.0
  • deps: upgrade npm to 8.9.0
  • deps: upgrade openssl sources to quictls/openssl-3.0.3
  • (SEMVER-MINOR) fs: add read(buffer[, options]) versions
  • (SEMVER-MINOR) http: added connection closing methods
  • (SEMVER-MINOR) perf_hooks: add PerformanceResourceTiming

Changelog for Node Version 18.1.0 (Current)

  • lib,src: implement WebAssembly Web API
  • test_runner: add initial CLI runner
  • worker: add hasRef() to MessagePort

Node.js v16 Changelogs

Changelog for Node Version 16.17.0

Experimental command-line argument parser API: Adds util.parseArgs helper for higher level command-line argument parsing.
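
A minimal sketch of the helper – the args array is hard-coded here for illustration; in a real CLI you would omit it and let parseArgs read process.argv:

import { parseArgs } from 'node:util';

const { values, positionals } = parseArgs({
  args: ['--verbose', '--port', '8080', 'index.js'],
  options: {
    verbose: { type: 'boolean', short: 'v' },
    port: { type: 'string' }
  },
  allowPositionals: true
});

console.log(values);      // parsed flags: verbose is true, port is '8080'
console.log(positionals); // [ 'index.js' ]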

Experimental ESM Loader Hooks API: Node.js ESM Loader hooks now support multiple custom loaders, and composition is achieved via “chaining”: foo-loader calls bar-loader calls qux-loader (a custom loader must now signal a short circuit when intentionally not calling the next).

Experimental test runner: The node:test module, which was initially introduced in Node.js v18.0.0, is now available with all the changes done to it up to Node.js v18.7.0.

Improved interoperability of the Web Crypto API: To better align Node.js’ experimental implementation of the Web Crypto API with other runtimes, several changes were made:

  • Support for CFRG curves was added, with the 'Ed25519', 'Ed448', 'X25519', and 'X448' algorithms.
  • The proprietary 'NODE-DSA', 'NODE-DH', 'NODE-SCRYPT', 'NODE-ED25519', 'NODE-ED448', 'NODE-X25519', and 'NODE-X448' algorithms were removed.
  • The proprietary 'node.keyObject' import/export format was removed.

Changelog for Node Version 16.16.0

This is a security release.

  • deps: upgrade openssl sources to OpenSSL_1_1_1q
  • src: add OpenSSL config appname

Changelog for Node Version 16.15.0

Add fetch API: Adds experimental support to the fetch API. This adds the --experimental-fetch flag that installs the fetch, Request, Response, Headers, and FormData globals.

Other notable changes

  • build: remove broken x32 arch support
  • crypto: add KeyObject.prototype.equals method
  • esm: support https remotely and http locally under flag
  • module: unflag esm json modules
  • node-api: add node_api_symbol_for()
  • process: deprecate multipleResolves
  • stream: support some and every, add toArray, add forEach method

Changelog for Node Version 16.14.0

Importing JSON modules now requires experimental import assertions syntax: This release adds experimental support for the import assertions stage 3 proposal.

To keep Node.js ESM implementation as compatible as possible with the HTML spec, import assertions are now required to import JSON modules (still behind the --experimental-json-modules CLI flag):

import info from './package.json' assert { type: 'json' };

Or use dynamic import:

const info = await import('./package.json', { assert: { type: 'json' } });

Other notable changes:

  • async_hooks:
    • expose async_wrap providers
  • child_process:
    • add support for URL to cp.fork 
  • esm:
    • graduate capturerejections to supported
    • add EventEmitterAsyncResource to core
  • events:
    • propagate weak option for kNewListener
  • fs:
    • accept URL as argument for fs.rm and fs.rmSync 
  • lib:
    • make AbortSignal cloneable/transferable
    • add AbortSignal.timeout
    • add reason to AbortSignal
    • add unsubscribe method to non-active DC channels
    • add return value for DC channel.unsubscribe
  • loader:
    • return package format from defaultResolve if known
  • perf_hooks:
    • multiple fixes for Histogram
  • process:
    • add getActiveResourcesInfo() 
  • src:
    • add x509.fingerprint512 to crypto module
    • add flags for controlling process behavior
  • stream:
    • add filter method to readable
    • add isReadable helper
    • add map method to Readable
    • deprecate thenable support
  • util:
    • pass through the inspect function to custom inspect functions
    • add numericSeparator to util.inspect
    • always visualize cause property in errors during inspection
  • timers:
    • add experimental scheduler api
  • v8:
    • multi-tenant promise hook api

Changelog for Node Version 16.13.2

This is a security release.

See changes at 17.3.1 (Current).

Changelog for Node Version 16.13.1

  • deps: upgrade npm to 8.1.2.
  • deps: update c-ares to 1.18.1. This release contains a c-ares update to fix a regression introduced in Node.js v16.6.2 resolving CNAME records containing underscores.
  • doc: add VoltrexMaster to collaborators.
  • lib: fix regular expression to detect `/` and `\`.

Changelog for Node Version 16.13.0

This release marks the transition of Node.js 16.x into Long Term Support (LTS) with the codename ‘Gallium’. The 16.x release line now moves into “Active LTS” and will remain so until October 2022. After that time, it will move into “Maintenance” until end of life in April 2024.

Changelog for Node Version 16.12.0

Experimental ESM Loader Hooks API:

Node.js ESM Loader hooks have been consolidated to represent the steps involved needed to facilitate future loader chaining:

  1. resolve: resolve [+ getFormat]
  2. load: getFormat + getSource + transformSource

For consistency, getGlobalPreloadCode has been renamed to globalPreload.

A loader exporting obsolete hook(s) will trigger a single deprecation warning (per loader) listing the errant hooks.

Changelog for Node Version 16.11.1

This is a security release. Notable changes:

  • CVE-2021-22959: HTTP Request Smuggling due to spaces in headers (Medium): The http parser accepts requests with a space (SP) right after the header name before the colon. This can lead to HTTP Request Smuggling (HRS).
  • CVE-2021-22960: HTTP Request Smuggling when parsing the body (Medium): The parser ignores chunk extensions when parsing the body of chunked requests. This leads to HTTP Request Smuggling (HRS) under certain conditions.

Changelog for Node Version 16.11.0

  • crypto: update root certificates
  • deps: upgrade npm to 8.0.0, update nghttp2 to v1.45.1, update V8 to 9.4.146.19
  • tools: update certdata.txt

Changelog for Node Version 16.10.0

  • crypto: add rsa-pss keygen parameters
  • deps: upgrade npm to 7.24.0
  • deps: update Acorn to v8.5.0
  • doc: add Ayase-252 to collaborators
  • fs: make open and close stream override optional when unused
  • http: limit requests per connection
    • The maximum number of requests a socket can handle before closing keep alive connection can be set with server.maxRequestsPerSocket.
  • src: add --no-global-search-paths cli option
    • Adds the --no-global-search-paths command-line option to not search modules from global paths like $HOME/.node_modules and $NODE_PATH.
  • src: make napi_create_reference accept symbol
  • stream: add signal support to pipeline generators

Changelog for Node Version 16.9.1

This release fixes a regression introduced by the V8 9.3 update in Node.js 16.9.0.

Changelog for Node Version 16.9.0

Corepack

Node.js now includes Corepack, a script that acts as a bridge between Node.js projects and the package managers they are intended to be used with during development. In practical terms, Corepack will let you use Yarn and pnpm without having to install them – just like what currently happens with npm, which is shipped in Node.js by default.

V8 9.3

V8 is updated to version 9.3, which includes performance improvements and new JavaScript features.

Object.hasOwn

Object.hasOwn is a static alias for Object.prototype.hasOwnProperty.call:

Object.hasOwn({ value: 42 }, 'value'); // Returns `true`.

Error cause

Errors can now be optionally constructed with a cause option, pointing to another error. This adds a cause property on the new error:

const error1 = new Error('Error one');
const error2 = new Error('Error two', { cause: error1 });
// error2.cause === error1

Other Notable Changes

  • crypto: add RSA-PSS params to asymmetricKeyDetails
  • module: support pattern trailers
  • stream: add stream.compose

Changelog for Node Version 16.8.0

  • doc: deprecate type coercion for dns.lookup options
  • stream: add stream.Duplex.from utility
  • stream: add isDisturbed helper
  • util: expose toUSVString 

Changelog for Node Version 16.7.0

  • fs, experimental: add recursive cp method

Changelog for Node Version 16.6.2

This is a security release. Notable Changes:

  • CVE-2021-3672/CVE-2021-22931: Improper handling of untypical characters in domain names: Node.js was vulnerable to Remote Code Execution, XSS, application crashes due to missing input validation of hostnames returned by Domain Name Servers in the Node.js DNS library which can lead to the output of wrong hostnames (leading to Domain Hijacking) and injection vulnerabilities in applications using the library.
  • CVE-2021-22930: Use after free on close http2 on stream canceling: Node.js was vulnerable to a use after free attack where an attacker might be able to exploit memory corruption to change process behavior. This release includes a follow-up fix for CVE-2021-22930 as the issue was not completely resolved by the previous fix.
  • CVE-2021-22939: Incomplete validation of rejectUnauthorized parameter: If the Node.js HTTPS API was used incorrectly and “undefined” was passed in for the “rejectUnauthorized” parameter, no error was returned and connections to servers with an expired certificate would have been accepted.

Changelog for Node Version 16.6.0

This is a security release. Notable Changes:

The V8 engine is updated to version 9.2.230.21:

It notably introduces the new Array.prototype.at method (also on Typed Arrays and strings):

const array = [1, 2, 3];

console.log(array.at(-1));
// Prints: 3

Other notable changes:

  • CVE-2021-22930: Use after free on close http2 on stream canceling:
    Node.js is vulnerable to a use after free attack where an attacker might be able to exploit the memory corruption, to change process behavior.
  • inspector: mark as stable
  • punycode: add pending deprecation
  • repl: enable --experimental-repl-await /w opt-out

Changelog for Node Version 16.5.0

Experimental Web Streams API: Node.js now exposes an experimental implementation of the Web Streams API.

While it is experimental, the API is not exposed on the global object and is only accessible using the new stream/web core module:

import { ReadableStream, WritableStream } from 'stream/web'; // Or from 'node:stream/web'

Importing the module will emit a single experimental warning per process.

The raw API is implemented and we are now working on its integration with various existing core APIs.

Other notable changes:

  • fs: allow empty string for temp directory prefix
  • deps: upgrade npm to 7.19.1

Changelog for Node Version 16.4.2

Node.js 16.4.1 introduced a regression in the Windows installer on non-English locales that is being fixed in this release. There is no need to download this release if you are not using the Windows installer.

Changelog for Node Version 16.4.1

This is a security release. Vulnerabilities fixed:

  • CVE-2021-22918: libuv upgrade – Out of bounds read (Medium): Node.js is vulnerable to out-of-bounds read in libuv’s uv__idna_toascii() function which is used to convert strings to ASCII. This is called by Node’s dns module’s lookup() function and can lead to information disclosures or crashes.
  • CVE-2021-22921: Windows installer – Node Installer Local Privilege Escalation (Medium): Node.js is vulnerable to local privilege escalation attacks under certain conditions on Windows platforms. More specifically, improper configuration of permissions in the installation directory allows an attacker to perform two different escalation attacks: PATH and DLL hijacking.

Changelog for Node Version 16.4.0

  • async_hooks: stabilize part of AsyncLocalStorage
  • deps: upgrade npm to 7.18.1, update V8 to 9.1.269.36
  • dns: allow --dns-result-order to change default dns verbatim

Changelog for Node Version 16.3.0

  • cli: add -C alias for --conditions flag
  • deps: add workspaces support to npm install commands

Changelog for Node Version 16.2.0

  • async_hooks: use new v8::Context PromiseHook API
  • lib: support setting process.env.TZ on windows
  • module: add support for URL to import.meta.resolve
  • process: add ‘worker’ event
  • util: add util.types.isKeyObject and util.types.isCryptoKey

Changelog for Node Version 16.1.0

fs: allow no-params fsPromises fileHandle read

Changelog for Node Version 16.0.0

  • Stable Timers Promises API: The Timers Promises API provides an alternative set of timer functions that return Promise objects. Added in Node.js v15.0.0, in this release they graduate from experimental status to stable (see the sketch after this list).
  • Toolchain and Compiler Upgrades: Node.js v16.0.0 will be the first release where we ship prebuilt binaries for Apple Silicon. While we’ll be providing separate tarballs for the Intel (darwin-x64) and ARM (darwin-arm64) architectures the macOS installer (.pkg) will be shipped as a ‘fat’ (multi-architecture) binary.
  • V8 9.0: The V8 JavaScript engine is updated to V8 9.0, including performance tweaks and improvements. This update also brings the ECMAScript RegExp Match Indices, which provide the start and end indices of the captured string. The indices array is available via the .indices property on match objects when the regular expression has the /d flag.
  • Other Notable Changes:
    • assert: graduate assert.match and assert.doesNotMatch
    • buffer: expose btoa and atob as globals
    • deps: bump minimum ICU version to 68
    • deps: update ICU to 69.1
    • deps: update llhttp to 6.0.0
    • deps: upgrade npm to 7.10.0
    • http: add http.ClientRequest.getRawHeaderNames()
    • lib,src: update cluster to use Parent
    • module: add support for node:‑prefixed require(…) calls
    • perf_hooks: add histogram option to timerify
    • repl: add auto‑completion for node:‑prefixed require(…) calls
    • util: add getSystemErrorMap() impl
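
A minimal sketch of the now-stable Timers Promises API mentioned in the list above – the promise-returning setTimeout lets you await a delay without wrapping the callback-based timer yourself:

const { setTimeout: sleep } = require('timers/promises');

async function main() {
  console.log('waiting...');
  await sleep(1000); // resolves after roughly one second
  console.log('done');
}

main();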

Learn More Node.js from RisingStack

At RisingStack we’ve been writing JavaScript / Node tutorials for the community for the past 5 years. If you’re a beginner to Node.js, we recommend checking out our Node Hero tutorial series! The goal of this series is to help you get started with Node.js and make sure you understand how to write an application using it.

See all chapters of the Node Hero tutorial series:
  1. Getting Started with Node.js
  2. Using NPM
  3. Understanding async programming
  4. Your first Node.js HTTP server
  5. Node.js database tutorial
  6. Node.js request module tutorial
  7. Node.js project structure tutorial
  8. Node.js authentication using Passport.js
  9. Node.js unit testing tutorial
  10. Debugging Node.js applications
  11. Node.js Security Tutorial
  12. How to Deploy Node.js Applications
  13. Monitoring Node.js Applications

As a sequel to Node Hero, we have completed another series called Node.js at Scale – which focuses on advanced Node / JavaScript topics. Take a look!


How to check Node version

Knowing what Node.js version you have in a project is vital because it affects the Node and JavaScript language features you can use. Also, you might not want to miss out on essential security patches or experience compatibility problems.

There are several ways to check the Node version you’re using. You can use basic command line prompts, npm, or nvm as well to do it. In this article below, we list ways for you to check your Node version with different methods, on different operating systems.

Check your Node version in one step

To check the version of Node.js on your computer (whether it runs macOS, Windows or a Linux distro such as Ubuntu), run the following command:

$ node -v

This will return the current version of node that is installed on your system. 

If you want to learn more about Node.js, you can find instructions and official docs on the node website. https://nodejs.org/en/download/ 

To check the latest version of Node for both the LTS and Current versions, check out our blog post that collects and lists all major updates.

Using npm to check your node version (and also update it)

Alternatively, you can use a package manager like npm to update Node. 

https://docs.npmjs.com/cli/update-node

$ npm install -g npm@latest 

then

$ npm update -g node 

will update node and npm. 

If you are having issues with your node installation, you can try the following commands:

$ npm cache clean --force

$ npm install -g --unsafe-perm node 

These commands will try to clean up any issues with your npm cache and install Node with permissions that may help resolve any installation issues. 

To only check your npm version, you can use the following command:

$ npm -v

Managing your Node versions with nvm

NVM (Node Version Manager) is a bash script that allows you to manage multiple active versions of Node.js. It allows you to install, uninstall, list, and switch between node versions.

The preferred way to manage your local Node.js versions is to use nvm, which can be installed like this:

curl https://raw.githubusercontent.com/creationix/nvm/v0.33.3/install.sh | bash

Then, use this to install node.js:

$ nvm install node

To switch to the latest installed version of Node.js, you can do:

$ nvm use node

If you want to uninstall node.js, you can type:

$ nvm uninstall node

To update Node to the latest LTS version, you can use the nvm install --lts command.

For further details on how to install specific versions, see the nvm docs: https://github.com/nvm-sh/nvm

If you’re using Windows, you’ll need to use nvm-windows, which has almost the same API as nvm, but is a completely different project, and has a different philosophy. https://github.com/coreybutler/nvm-windows

What is Node.js?

If you are already familiar with Node, but need a quick refresher about it, we’ve got you covered:

Node.js is a platform built on Chrome’s JavaScript runtime for easily building fast, scalable network applications. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. 

Node.js is open-source and free to use. It also provides a first-class development experience, making it an ideal platform for web-based applications. Node.js also has a large community of developers who are constantly creating new modules and libraries to make development easier. 

Node.js applications are written in JavaScript, and can be run on Mac OS X, Windows, and Linux which makes it fully cross-platform. Node.js has an event-driven architecture and a non-blocking I/O model that makes it lightweight and efficient. These features make it perfect for data-intensive, real-time applications that run across distributed devices. 

There are a few things to keep in mind when writing Node.js applications. First, since Node.js is asynchronous, you need to use promises, async functions, callbacks or events to handle data flow. Second, Node.js is single-threaded, so you need to be careful not to block the thread with long computations. 

How the Node release schedule works

A few words about the Node.js release schedule:

Releases
The Node.js Release Schedule

Node.js releases are identified by major, minor, and patch version numbers, e.g. v4.2.0. Minor and patch releases (e.g. v4.3.0 or v4.2.1) are made every few weeks and contain new features and bug fixes. Major version releases (e.g. v5.0.0) are made every six months or so and may contain breaking changes.

Nowadays, the LTS (long-term support) Node.js versions get an even number, like 16.14.0, while Current releases have an odd version number, like 17.5.0.

Argo CD Kubernetes Tutorial

Usually, when devs set up a CI/CD pipeline for an application hosted on Kubernetes, they handle both the CI and CD parts in one task runner, such as CircleCI or Travis CI. These services offer push-based updates to your deployments, which means that credentials for the code repo and the deployment target must be stored with these services. This method can be problematic if the service gets compromised, e.g. as it happened to CodeShip last year.

Even using services such as GitLab CI and GitHub Actions requires that credentials for accessing your cluster be stored with them. If you’re employing GitOps – taking advantage of the usual Push to repo -> Review Code -> Merge Code sequence for managing your infrastructure configuration as well – this would also mean handing over access to your whole infrastructure.

It can also be difficult to keep track of how the different deployed environments are drifting from the configuration files stored in the repo, since these external services are not specific to Kubernetes and thus aren’t aware of the status of all the deployed pieces.

Luckily there are tools to help us with these issues. Two of the most known are Argo CD and Flux. They allow credentials to be stored within your Kubernetes cluster, where you have more control over their security. They also offer pull-based deployment with drift detection. Both of these tools solve the same issues, but tackle them from different angles.

Here, we’ll take a deeper look at Argo CD out of the two.

What is Argo CD

Argo CD is a continuous deployment tool that you can install into your Kubernetes cluster. It can pull the latest code from a git repository and deploy it into the cluster – as opposed to external CD services, deployments are pull-based. You can manage updates for both your application and infrastructure configuration with Argo CD. Advantages of such a setup include being able to use credentials from the cluster itself for deployments, which can be stored in secrets or a vault.

Preparation

To try out Argo CD, we’ve also prepared a test project that we’ll deploy to Kubernetes hosted on DigitalOcean. You can grab the example project from our GitLab repository here: https://gitlab.com/risingstack-org/argocd-demo/

Forking the repo will allow you to make changes for yourself, and it can be set up later in Argo CD as the deployment source.

Get doctl from here:

https://docs.digitalocean.com/reference/doctl/how-to/install/

Or, if you’re using a mac, from Homebrew:

brew install doctl

You can use any Kubernetes provider for this tutorial. The two requirements are having a Docker repository and a Kubernetes cluster with access to it. For this tutorial, we chose to go with DigitalOcean for the simplicity of its setup, but most other platforms should work just fine.

We’ll focus on using the web UI for the majority of the process, but you can also opt to use the `doctl` cli tool if you wish. `doctl` can mostly replace `kubectl` as well. `doctl` will only be needed to push our built docker image to the repo that our deployment will have access to.

Helm is a templating engine for Kubernetes. It allows us to define values separately from the structure of the yaml files, which can help with access control and managing multiple environments using the same template.

You can grab Helm here: https://github.com/helm/helm/releases

Or via Homebrew for mac users:

brew install helm

Download the latest Argo CD version from https://github.com/argoproj/argo-cd/releases/latest

If you’re using a mac, you can grab the cli tools from Homebrew:

brew install argocd

DigitalOcean Setup

After logging in, first, create a cluster using the “Create” button on the top right, and selecting Kubernetes. For the purposes of this demo, we can just go with the smallest cluster with no additional nodes. Be sure to choose a data center close to you.

Preparing the demo app

You can find the demo app in the node-app folder in the repo you forked. Use this folder for the following steps to build and push the docker image to the GitLab registry:

docker login registry.gitlab.com

docker build . -t registry.gitlab.com/<substitute repo name here>/demo-app-1

docker push registry.gitlab.com/<substitute repo name here>/demo-app-1

GitLab offers a free image registry with every git repo – even free tier ones. You can use these to store your built image, but be aware that the registry inherits the privacy setting of the git repo; you can’t change them separately.

Once the image is ready, be sure to update the values.yaml file with the correct image url and use helm to generate the resources.yaml file. You can then deploy everything using kubectl:

helm template -f "./helm/demo-app/values.yaml" "./helm/demo-app" > "./helm/demo-app/resources/resources.yaml"

kubectl apply -f helm/demo-app/resources/resources.yaml

The only purpose of these demo-app resources is to showcase the Argo CD UI capabilities – that’s why they also contain an Ingress resource as a plus.

Install Argo CD into the cluster

Argo CD provides a yaml file that installs everything you’ll need and it’s available online. The most important thing here is to make sure that you install it into the `argocd` namespace, otherwise, you’ll run into some errors later and Argo CD will not be usable.

kubectl create namespace argocd

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

From here, you can use Kubernetes port-forwarding to access the UI of Argo CD:

kubectl -n argocd port-forward svc/argocd-server 8080:443

This will expose the service on localhost:8080 – we will use the UI to set up the connection to GitLab, but it could also be done via the command line tool.

Argo CD setup

To log in on the UI, use `admin` as username, and the password retrieved by this command:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Once you’re logged in, connect your fork of the demo app repo from the Repositories inside the Settings menu on the left side. Here, we can choose between ssh and https authentication – for this demo, we’ll use https, but for ssh, you’d only need to set up a key pair for use.

argo cd repo connect

Create an API key on GitLab and use it in place of a password alongside your username to connect the repo. An API key allows for some measure of access control as opposed to using your account password.

After successfully connecting the repository, the only thing left is to set up an Application, which will take care of synchronizing the state of our deployment with that described in the GitLab repo.

argo cd tutorial how to set up a new application

You’ll need to choose a branch or a tag to monitor. Let’s choose the master branch for now – it should contain the latest stable code anyway. Setting the sync policy to automatic allows for automatic deployments when the git repo is updated, and also provides automatic pruning and self-healing capabilities.

argo cd application setup

Be sure to set the destination cluster to the one available in the dropdown and use the `demo` namespace. If everything is set correctly, Argo CD should now start syncing the deployment state.

Features of Argo CD

From the application view, you can now see the different parts that comprise our demo application.

argo cd app overview

Clicking on any of these parts allows for checking the diff of the deployed config, and the one checked into git, as well as the yaml files themselves separately. The diff should be empty for now, but we’ll see it in action once we make some changes or if you disable automatic syncing.

argo cd container details

You also have access to the logs from the pods here, which can be quite useful. Note, however, that logs are not retained between different pod instances, which means that they are lost when a pod is deleted.

argo cd container logs

It is also possible to handle rollbacks from here, clicking on the “History and Rollback” button. Here, you can see all the different versions that have been deployed to our cluster by commit. 

You can re-deploy any of them using the … menu on the top right and selecting “Redeploy” – this feature requires automatic deployment to be turned off, but you’ll be prompted to turn it off here if needed.

These should cover the most important parts of the UI and what is available in Argo CD. Next up, we’ll take a look at how the deployment update happens when code changes on GitLab.

Updating the deployment

With the setup done, any changes you make to the configuration that you push to the master branch should be reflected on the deployment shortly after.

A very simple way to check out the updating process is to bump up the `replicaCount` in values.yaml to 2 (or more), and run the helm command again to generate the resources.yaml. 

Then, commit and push to master and monitor the update process on the Argo CD UI.

You should see a new event in the demo-app events, with the reason `ScalingReplicaSet`.

argo cd scaling event

You can double-check the result using kubectl, where you should now see two instances of the demo-app running:

kubectl -n demo get pod

There is another branch prepared in the repo, called second-app, which has another app that you can deploy, so you can see some more of the update process and diffs. It is quite similar to how the previous deployment works.

First, you’ll need to merge the second-app branch into master – this will allow the changes to be automatically deployed, as we set it up already. Then, from the node-app-2 folder, build and push the docker image. Make sure to have a different version tag for it, so we can use the same repo!

docker build . -t registry.gitlab.com/<substitute repo name here>/demo-app-2

docker push registry.gitlab.com/<substitute repo name here>/demo-app-2

You can set deployments to manual for this step, to be able to take a better look at the diff before the actual update happens. You can do this from the sync settings part of `App details`.

argo cd sync policy

Generate the updated resources file afterwards, then commit and push it to git to trigger the update in Argo CD:

helm template -f "./helm/demo-app/values.yaml" "./helm/demo-app" > "./helm/demo-app/resources/resources.yaml"

This should result in a diff appearing under `App details` -> `Diff` for you to check out. You can either deploy it manually or just turn auto-deploy back on.

ArgoCD safeguards you against resource changes that drift from the latest source-controlled version of your code. Let’s try to manually scale up the deployment to 5 instances:

Get the name of the replica set:

kubectl -n demo get rs

Scale it to 5 instances:

kubectl -n demo scale --replicas=5 rs/demo-app-<number>

If you are quick enough, you can catch the changes applied on the ArgoCD Application Visualization as it tries to add those instances. However, ArgoCD will prevent this change, because it would drift from the source controlled version of the deployment. It also scales the deployment down to the defined value in the latest commit (in my example it was set to 3). 

The downscale event can be found under the `demo-app` deployment events, as shown below:

how to scale down kubernetes with argo cd

From here, you can experiment with whatever changes you’d like!

Finishing our ArgoCD Kubernetes Tutorial

This was our quick introduction to using ArgoCD, which can make your GitOps workflow safer and more convenient.

Stay tuned, as we’re planning to take a look at the other heavy-hitter next time: Flux.

This article was written by Janos Kubisch, senior engineer at RisingStack.

React-Native Sound & Animation Tutorial

In this React-Native sound and animation tutorial, you’ll learn tips on how you can add animation and sound effects to your mobile application. We’ll also discuss topics like persisting data with React-Native AsyncStorage.

To showcase how you can do these things, we’ll use our Mobile Game which we’ve been building in the previous 4 episodes of this tutorial series.

Quick recap: In the previous episodes of our React-Native Tutorial Series, we built our React-Native game’s core: you can finally collect points, see them, and even lose.

Now let’s spice things up and make our game enjoyable with music, react native animations & sound effects, then finish off by saving the high score!

react native tutorial animation sound

Adding Sound to our React-Native Game

As you may have noticed, we have a /music and /sfx directory in the assets, but we didn’t quite touch them until now. They are not mine, so let’s just give credit to the creators: the sound effects can be found here, and the music we’ll use is made by Komiku.

We will use Expo’s built-in Audio API to work with music. We’ll start by working in the Home/index.js to add the main menu theme.

First off, import the Audio API from Expo:

import { Audio } from 'expo';

Then import the music and start playing it in the componentWillMount():

async componentWillMount() {
  this.backgroundMusic = new Audio.Sound();
  try {
    await this.backgroundMusic.loadAsync(
      require("../../assets/music/Komiku_Mushrooms.mp3")
    );
    await this.backgroundMusic.setIsLoopingAsync(true);
    await this.backgroundMusic.playAsync();
    // Your sound is playing!
  } catch (error) {
    // An error occurred!
  }
}

This will load the music, set it to be a loop and start playing it asynchronously.

If an error happens, you can handle it in the catch section – maybe notify the user, console.log() it or call your crash analytics tool. You can read more about how the Audio API works in the background in the related Expo docs.

In the onPlayPress, simply add one line before the navigation:

this.backgroundMusic.stopAsync();

If you don’t stop the music when you route to another screen, the music will continue to play on the next screen, too.

Speaking of other screens, let’s add some background music to the Game screen too, with the same steps, but with the file ../../assets/music/Komiku_BattleOfPogs.mp3.

Spicing Things up with SFX

Along with the music, sound effects also play a vital part in making the game fun. We’ll have one sound effect on the main menu (button tap), and six on the game screen (button tap, tile tap – correct/wrong, pause in/out, lose).

Let’s start with the main menu SFX, and from there, you’ll be able to add the remaining to the game screen by yourself (I hope).

We only need a few lines of code to define a buttonFX object that is an instance of the Audio.Sound(), and load the sound file in the same try-catch block as the background music:

async componentWillMount() {
   this.backgroundMusic = new Audio.Sound();
   this.buttonFX = new Audio.Sound();
   try {
     await this.backgroundMusic.loadAsync(
       require("../../assets/music/Komiku_Mushrooms.mp3")
     );
     await this.buttonFX.loadAsync(
       require("../../assets/sfx/button.wav")
     );
    ...

You only need one line of code to play the sound effect. On the top of the onPlayPress event handler, add the following:

onPlayPress = () => {
   this.buttonFX.replayAsync();
   ...

Notice how I used replayAsync instead of playAsync – it’s because we may use this sound effect more than one time, and if you use playAsync and run it multiple times, it will only play the sound for the first time. It will come in handy later, and it’s also useful for continuing with the Game screen.

It’s as easy as one, two, three! Now, do the six sound effects on the game screen by yourself:

  • Button tap
    • ../../assets/sfx/button.wav
    • Play it when pressing the Exit button
  • Tile tap – correct
    • ../../assets/sfx/tile_tap.wav
    • Play it in the onTilePress/good tile block
  • Tile tap – wrong
    • ../../assets/sfx/tile_wrong.wav
    • Play it in the onTilePress/wrong tile block
  • Pause – in
    • ../../assets/sfx/pause_in.wav
    • Play it in the onBottomBarPress/case "INGAME" block
  • Pause – out
    • ../../assets/sfx/pause_out.wav
    • Play it in the onBottomBarPress/case "PAUSED" block
  • Lose
    • ../../assets/sfx/lose.wav
    • Play it in the interval’s if (this.state.timeLeft <= 0) block
    • Also stop the background music with this.backgroundMusic.stopAsync();
    • Don’t forget to start playing the background music when starting the game again. You can do this by adding this.backgroundMusic.replayAsync(); to the onBottomBarPress/case "LOST" block.

Our game is already pretty fun, but it still lacks the shaking animation when we’re touching the wrong tile – thus we are not getting any instant noticeable feedback.

A Primer to React-Native Animations (with example)

Animating is a vast topic, thus we can only cover the tip of the iceberg in this article. However, Apple has a really good WWDC video about designing with animations, and the Human Interface Guidelines is a good resource, too.

We could use a ton of animations in our app (e.g. animating the button size when the user taps it), but we’ll only cover one in this tutorial: The shaking of the grid when the player touches the wrong tile.

This React Native animation example will have several benefits: it’s some sort of punishment (it will take some time to finish), and as I mentioned already, it’s instant feedback when pressing the wrong tile, and it also looks cool.

There are several animation frameworks out there for React-Native, like react-native-animatable, but we’ll use the built-in Animated API for now. If you are not familiar with it yet, be sure to check the docs out.

Adding React-Native Animations to our Game

First, let’s initialize an animated value in the state that we can later use in the style of the grid container:

state = {
  ...
  shakeAnimation: new Animated.Value(0)
};

And for the <View> that contains the grid generator (with the shitton of ternary operators in it), just change <View> to <Animated.View>. (Don’t forget to change the closing tag, too!) Then in the inline style, add left: shakeAnimation so that it looks something like this:

<Animated.View
   style={{
     height: height / 2.5,
     width: height / 2.5,
     flexDirection: "row",
     left: shakeAnimation
   }}
>
   {gameState === "INGAME" ?
   ...

Now let’s save and reload the game. While playing, you shouldn’t notice any difference. If you do, you did something wrong – make sure that you followed every step exactly.

Now, go to the onTilePress() handler and at the // wrong tile section you can start animating the grid. In the docs, you’ll see that the basic recommended function to start animating with in React Native is Animated.timing().

You can animate one value to another value by using this method, however, to shake something, you will need multiple, connected animations playing after each other in a sequence. For example modifying it from 0 to 50, then -50, and then back to 0 will create a shake-like effect.

If you look at the docs again, you’ll see that Animated.sequence([]) does exactly this: it plays a sequence of animations after each other. You can pass in an endless number of animations (or Animated.timing()s) in an array, and when you run .start() on this sequence, the animations will start executing.

You can also ease animations with Easing. You can use back, bounce, ease and elastic – to explore them, be sure to check the docs. However, we don’t need them yet as it would really kill the performance now.

Our sequence will look like this:

Animated.sequence([
 Animated.timing(this.state.shakeAnimation, {
   toValue: 50,
   duration: 100
 }),
 Animated.timing(this.state.shakeAnimation, {
   toValue: -50,
   duration: 100
 }),
 Animated.timing(this.state.shakeAnimation, {
   toValue: 50,
   duration: 100
 }),
 Animated.timing(this.state.shakeAnimation, {
   toValue: -50,
   duration: 100
 }),
 Animated.timing(this.state.shakeAnimation, {
   toValue: 0,
   duration: 100
 })
]).start();

This will change the shakeAnimation in the state to 50, -50, 50, -50 and then 0. Therefore, we will shake the grid and then reset to its original position. If you save the file, reload the app and tap on the wrong tile, you’ll hear the sound effect playing and see the grid shaking.

Moving away animations from JavaScript thread to UI thread

Animations are an essential part of every fluid UI, and rendering them with performance efficiency in mind is something that every developer needs to strive for.

By default, the Animation API runs on the JavaScript thread, blocking other renders and code execution. This also means that if it gets blocked, the animation will skip frames. Because of this, we want to move animation drivers from the JS thread to the UI thread – and the good news is that this can be done with just one line of code, with the help of native drivers.

To learn more about how the Animation API works in the background, what exactly are “animation drivers” and why exactly it is more efficient to use them, be sure to check out this blog post, but let’s move forward.

To use native drivers in our app, we only need to add just one property to our animations: useNativeDriver: true.

Before:

Animated.timing(this.state.shakeAnimation, {
   toValue: 0,
   duration: 100
})

After:

Animated.timing(this.state.shakeAnimation, {
   toValue: 0,
   duration: 100,
   useNativeDriver: true
})

And boom, you’re done, great job there!

Now, let’s finish off with saving the high scores.

Persisting Data – Storing the High Scores

In React-Native, you get a simple, unencrypted, asynchronous, and persistent key-value storage system: AsyncStorage.

It’s recommended not to use AsyncStorage while aiming for production, but for a demo project like this, we can use it with ease. If you are aiming for production, be sure to check out other solutions like Realm or SQLite, though.

First off, we should create a new file under utils called storage.js or something like that. We will handle the two operations we need to do – storing and retrieving data – with the AsyncStorage API.

The API has two built-in methods: AsyncStorage.setItem() for storing, and AsyncStorage.getItem() for retrieving data. You can read more about how they work in the docs linked above. For now, the snippet below will be able to fulfill our needs:

import { AsyncStorage } from "react-native";

export const storeData = async (key, value) => {
 try {
   await AsyncStorage.setItem(`@ColorBlinder:${key}`, String(value));
 } catch (error) {
   console.log(error);
 }
};

export const retrieveData = async key => {
 try {
   const value = await AsyncStorage.getItem(`@ColorBlinder:${key}`);
   if (value !== null) {
     return value;
   }
 } catch (error) {
   console.log(error);
 }
};

By adding this, we’ll have two async functions that can be used to store data in and retrieve data from AsyncStorage. Let’s import our new methods and add two keys we’ll persist to the Game screen’s state:

import {
 generateRGB,
 mutateRGB,
 storeData,
 retrieveData
} from "../../utilities";
...
state = {
   points: 0,
   bestPoints: 0, // < new
   timeLeft: 15,
   bestTime: 0, // < new
   ...

And display these values in the bottom bar, next to their corresponding icons:

<View style={styles.bestContainer}>
 <Image
   source={require("../../assets/icons/trophy.png")}
   style={styles.bestIcon}
 />
 <Text style={styles.bestLabel}>{this.state.bestPoints}</Text>
</View>
. . .
<View style={styles.bestContainer}>
 <Image
   source={require("../../assets/icons/clock.png")}
   style={styles.bestIcon}
 />
 <Text style={styles.bestLabel}>{this.state.bestTime}</Text>
</View>

Now, let’s just save the best points first – we can worry about storing the best time later. In the timer, we have an if statement that checks whether we’ve lost already – and that’s the time when we want to update the best points. So let’s just check whether the actual points are better than the best so far, and if they are, update the best:

if (this.state.timeLeft <= 0) {
 this.loseFX.replayAsync();
 this.backgroundMusic.stopAsync();
 if (this.state.points > this.state.bestPoints) {
   this.setState(state => ({ bestPoints: state.points }));
   storeData('highScore', this.state.points);
 }
 this.setState({ gameState: "LOST" });
} else {
...

And when initializing the screen, in the async componentWillMount(), make sure to read in the initial high score and store it in the state so that we can display it later:

retrieveData('highScore').then(val => this.setState({ bestPoints: val || 0 }));

Now, you are storing and retrieving the high score on the game screen – but there’s a high score label on the home screen, too! You can retrieve the data with the same line as now and display it in the label by yourself.

We only need one last thing before we can take a break: storing the highest time that the player has achieved. To do so, you can use the same functions we already use to store the data (but with a different key!). However, we’ll need a slightly different technique to check whether we need to update the stored value:

this.interval = setInterval(async () => {
 if (this.state.gameState === "INGAME") {
   if (this.state.timeLeft > this.state.bestTime) {
     this.setState(state => ({ bestTime: state.timeLeft }));
     storeData('bestTime', this.state.timeLeft);
   }
. . .

This checks if our current timeLeft is bigger than the best that we achieved yet. At the top of the componentWillMount, don’t forget to retrieve and store the best time along with the high score, too:

retrieveData('highScore').then(val => this.setState({ bestPoints: val || 0 }));
retrieveData('bestTime').then(val => this.setState({ bestTime: val || 0 }));

Now everything’s set. The game is starting to look and feel nice, and the core features are already starting to work well – so from now on, we don’t need too much work to finish the project.

Next up in our React-Native Tutorial

In the next episode of this series, we will look into making our game responsive by testing on devices ranging from iPhone SE to Xs and last but not least, testing on Android. We will also look into improving the developer experience with ESLint and add testing with Jest.

Don’t worry if you still feel a bit overwhelmed, mobile development may be a huge challenge, even if you are already familiar with React – so don’t lose yourself right before the end. Give yourself a rest and check back later for the next episode!

If you want to check out the code that’s been finished as of now – check out the project’s GitHub repo.

In case you’re looking for outsourced development services, don’t hesitate to reach out to RisingStack.

Stripe & JS: Payments Integration Tutorial

In this Stripe & JS tutorial, I’ll show how you can create a simple webshop using Stripe Payments integration, React and Express. We’ll get familiar with the Stripe Dashboard and basic Stripe features such as charges, customers, orders, coupons and so on. Also, you will learn about the usage of webhooks and restricted API keys.

If you read this article, you’ll get familiar with Stripe integration in 15 minutes, so you can leapfrog the process of burying yourself in the official documentation (’cause we already did that for you!)

A little bit about my Stripe experience and the reasons for writing this tutorial: At RisingStack, we’ve been working with a client from the US healthcare scene who hired us to create a large-scale webshop they can use to sell their products. During the creation of this Stripe-based platform, we spent a lot of time studying the documentation and figuring out the integration. Not because it is hard, but because there’s a certain amount of Stripe-related knowledge that you’ll need to internalize.

We’ll build an example app in this tutorial together – so you can learn how to create a Stripe Webshop from the ground up! The example app’s frontend can be found at https://github.com/RisingStack/post-stripe, and its backend at https://github.com/RisingStack/post-stripe-api.

I’ll use code samples from these repos in the article below.

The Basics of Stripe Payments Integration

First of all, what is the promise of Stripe? It is basically a payment provider: you set up your account, integrate it into your application and let the money rain. Pretty simple right? Well, let your finance people decide if it is a good provider or not based on the plans they offer.

If you are here, you are probably more interested in the technicalities of the integration, so I’ll delve into that part. To show you how to use Stripe, we’ll build a simple demo application with it together.

Make it rain

Before we start coding, we need to create a Stripe account. Don’t worry, no credit card is required in this stage. You only need to provide a payment method when you attempt to activate your account.

Go straight to the Stripe Dashboard and hit that Sign up button. Email, name, password… the usual. BOOM! You have a dashboard. You can create, manage and keep track of orders, payment flow, customers… so basically everything you want to know regarding your shop is here.

If you want to create a new coupon or product, you only need to click a few buttons or enter a simple curl command in your terminal, as the Stripe API docs describe. Of course, you can also integrate these features into your product so your admins can set them up from your own UI, and then expose the shop to your customers using Stripe.js.

Another important menu on the dashboard is the Developers section, where we will add our first webhook and create our restricted API keys. We will get more familiar with the dashboard and the API while we implement our demo shop below.

Stripe Payments Integration Dashboard

Creating a Webshop in React with Charges

Let’s create a React webshop with two products: a Banana and Cucumber. What else would you want to buy in a webshop anyways, right?

  • We can use Create React App to get started.
  • We’re going to use Axios for HTTP requests
  • and query-string-object to convert objects to query strings for Stripe requests.
  • We will also need React Stripe Elements, which is a React wrapper for Stripe.js and Stripe Elements. It adds secure credit card inputs and sends the card’s data for tokenization to the Stripe API.

Take my advice: You should never send raw credit card details to your own API, but let Stripe handle the credit card security for you.

You will be able to identify the card provided by the user using the token you got from Stripe.

npx create-react-app webshop
cd webshop
npm install --save react-stripe-elements
npm install --save axios
npm install --save query-string-object

After we’re done with the preparations, we have to include Stripe.js in our application. Just add <script src="https://js.stripe.com/v3/"></script> to the head of your index.html.

Now we are ready to start coding.

First, we have to add a <StripeProvider/> from react-stripe-elements to our root React App component.

Stripe Payments Dashboard API Key

This will give us access to the Stripe object. In the props, we should pass a public access key (apiKey) which is found in the dashboard’s Developers section under the API keys menu as Publishable key.

// App.js
import React from 'react'
import {StripeProvider, Elements} from 'react-stripe-elements'
import Shop from './Shop'

const App = () => {
  return (
    <StripeProvider apiKey="pk_test_xxxxxxxxxxxxxxxxxxxxxxxx">
      <Elements>
        <Shop/>
      </Elements>
    </StripeProvider>
  )
}

export default App

The <Shop/> component is the Stripe implementation of our shop form, imported from ./Shop. We'll go into its details later.

As you can see, the <Shop/> is wrapped in <Elements> imported from react-stripe-elements, so that we can use injectStripe in our components. To shed some light on this, let's take a look at our implementation in Shop.js.

// Shop.js
import React, { Component } from 'react'
import { CardElement } from 'react-stripe-elements'
import PropTypes from 'prop-types'
import axios from 'axios'
import qs from 'query-string-object'

const prices = {
  banana: 150,
  cucumber: 100
}

class Shop extends Component {
  constructor(props) {
    super(props)
    this.state = {
      fetching: false,
      cart: {
        banana: 0,
        cucumber: 0
      }
    }
    this.handleCartChange = this.handleCartChange.bind(this)
    this.handleCartReset = this.handleCartReset.bind(this)
    this.handleSubmit = this.handleSubmit.bind(this)
  }

  handleCartChange(evt) {
    evt.preventDefault()
    const cart = this.state.cart
    cart[evt.target.name] += parseInt(evt.target.value, 10)
    this.setState({cart})
  }

  handleCartReset(evt) {
    evt.preventDefault()
    this.setState({cart:{banana: 0, cucumber: 0}})
  }

  handleSubmit(evt) {
    // TODO
  }

  render () {
    const cart = this.state.cart
    const fetching = this.state.fetching
    return (
      <form onSubmit={this.handleSubmit} style={{width: '550px', margin: '20px', padding: '10px', border: '2px solid lightseagreen', borderRadius: '10px'}}>
        <div>
          Banana {(prices.banana / 100).toLocaleString('en-US', {style: 'currency', currency: 'usd'})}:
          <div>
            <button name="banana" value={1} onClick={this.handleCartChange}>+</button>
            <button name="banana" value={-1} onClick={this.handleCartChange} disabled={cart.banana <= 0}>-</button>
            {cart.banana}
          </div>
        </div>
        <div>
          Cucumber {(prices.cucumber / 100).toLocaleString('en-US', {style: 'currency', currency: 'usd'})}:
          <div>
            <button name="cucumber" value={1} onClick={this.handleCartChange}>+</button>
            <button name="cucumber" value={-1} onClick={this.handleCartChange} disabled={cart.cucumber <= 0}>-</button>
            {cart.cucumber}
          </div>
        </div>
        <button onClick={this.handleCartReset}>Reset Cart</button>
        <div style={{width: '450px', margin: '10px', padding: '5px', border: '2px solid green', borderRadius: '10px'}}>
          <CardElement style={{base: {fontSize: '18px'}}}/>
        </div>
        {!fetching
          ? <button type="submit" disabled={cart.banana === 0 && cart.cucumber === 0}>Purchase</button>
          : 'Purchasing...'
        }
        Price:{((cart.banana * prices.banana + cart.cucumber * prices.cucumber) / 100).toLocaleString('en-US', {style: 'currency', currency: 'usd'})}
      </form>
    )
  }
}

Shop.propTypes = {
  stripe: PropTypes.shape({
    createToken: PropTypes.func.isRequired
  }).isRequired
}

If you take a look at it, the Shop is a simple React form with purchasable elements, Banana and Cucumber, each with quantity increase/decrease buttons. Clicking the buttons changes their respective amount in this.state.cart.

There is a submit button below, and the current total price of the cart is printed at the very bottom of the form. Stripe expects the prices in cents, so we store them that way, but of course we want to present them to the user in dollars. We also prefer them shown to the second decimal place, e.g. $2.50 instead of $2.5. To achieve this, we can use the built-in toLocaleString() function to format the prices.
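
As a quick illustration of that formatting (this snippet is not part of the shop code, just a demonstration):

// 250 cents rendered as a dollar string
const cents = 250
console.log((cents / 100).toLocaleString('en-US', { style: 'currency', currency: 'usd' })) // "$2.50"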

Now comes the Stripe-specific part: we need to add a form element so users can enter their card details. To achieve this, we only need to add <CardElement/> from react-stripe-elements, and that's it. I've also added a bit of low-effort inline CSS to make this shop at least somewhat pleasing to the eye.

We also need to use the injectStripe Higher-Order Component to pass the Stripe object as a prop to the <Shop/> component, so we can call Stripe's createToken() function in handleSubmit to tokenize the user's card and charge it.

// Shop.js
import { injectStripe } from 'react-stripe-elements'
export default injectStripe(Shop)

Once we receive the tokenized card from Stripe, we are ready to charge it.

For now, let's just keep it simple and charge the card by sending a POST request to https://api.stripe.com/v1/charges, specifying the payment source (the token id), the amount of the charge, and the currency, as described in the Stripe API.

We need to send the API key in the header for authorization. We can create a restricted API key on the dashboard in the Developers menu. Set the permission for charges to “Read and write” as shown in the screenshot below.

Do not forget: you should never use your swiss-army-knife Secret key on the client!

Stripe Dashboard API-Key Restricted

Let’s take a look at it in action.

// Shop.js
// ...
const stripeAuthHeader = {
  'Content-Type': 'application/x-www-form-urlencoded',
  'Authorization': `Bearer rk_test_xxxxxxxxxxxxxxxxxxxxxxxx`
}

class Shop extends Component {
  // ...
  handleSubmit(evt) {
    evt.preventDefault()
    this.setState({fetching: true})
    const cart = this.state.cart
    
    this.props.stripe.createToken().then(({token}) => {
        const price = cart.banana * prices.banana + cart.cucumber * prices.cucumber
        axios.post(`https://api.stripe.com/v1/charges`, 
        qs.stringify({
          source: token.id,
          amount: price,
          currency: 'usd'
        }),
        { headers: stripeAuthHeader })
        .then((resp) => {
          this.setState({fetching: false})
          alert(`Thank you for your purchase! Your card has been charged with: ${(resp.data.amount / 100).toLocaleString('en-US', {style: 'currency', currency: 'usd'})}`)
        })
        .catch(error => {
          this.setState({fetching: false})
          console.log(error)
        })
    }).catch(error => {
      this.setState({fetching: false})
      console.log(error)
    })
  }
  // ...
}

For testing purposes you can use these international cards provided by Stripe.

Looks good, we can already create tokens from cards and charge them, but how should we know who bought what and where should we send the package?

That's where products and orders come in.

Placing an order with Stripe

Implementing a simple charging method is a good start, but we will need to take it a step further to create orders. To do so, we have to set up a server and expose an API which handles those orders and accepts webhooks from Stripe to process them once they get paid.

We will use Express to handle the routes of our API, along with a couple of other Node packages listed below. Let's create a new root folder and get started.

npm install express stripe body-parser cors helmet 

The skeleton is a simple Express Hello World app. It uses CORS, so that the browser won't panic when we try to reach our API server (which resides on a different port), and Helmet, which sets a bunch of security headers automatically for us.

// index.js
const express = require('express')
const helmet = require('helmet')
const cors = require('cors')
const app = express()
const port = 3001

app.use(helmet())

app.use(cors({
  origin: [/http:\/\/localhost:\d+$/],
  allowedHeaders: ['Content-Type', 'Authorization'],
  credentials: true
}))

app.get('/api/', (req, res) => res.send({ version: '1.0' }))

app.listen(port, () => console.log(`Example app listening on port ${port}!`))

In order to access Stripe, require the stripe Node package and call it straight away with your Secret Key (you can find it in the dashboard under Developers -> API keys). We will use stripe.orders.create() to pass along the data we receive when the client calls our server to place an order.

The orders will not be paid automatically. To charge the customer we can either use a Source directly such as a Card Token ID or we can create a Stripe Customer.

The added benefit of creating a Stripe customer is that we can track multiple charges, or create recurring charges for them and also instruct Stripe to store the shipping data and other necessary information to fulfill the order.

You probably want to create Customers from Card Tokens and shipping data even when your application already handles users. This way you can attach permanent or seasonal discounts to those Customers, allow them to shop any time with a single click, and list their orders on your UI.
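
A minimal sketch of that idea on the backend could look like the one below. The route name is made up for this example, it assumes the stripe instance we set up on the server, and the source, email and shipping fields mirror what our frontend already collects:

// index.js (sketch) - creating a reusable Customer from a Card Token, then paying the order with it
app.post('/api/shop/order-as-customer', async (req, res) => {
  const { order, source, email, shipping } = req.body
  try {
    // Store the card and the shipping data on a Stripe Customer we can charge again later
    const customer = await stripe.customers.create({ source, email, shipping })
    const stripeOrder = await stripe.orders.create(order)
    // Pay the order with the saved Customer instead of a one-off source
    await stripe.orders.pay(stripeOrder.id, { customer: customer.id })
  } catch (err) {
    console.log(`Order error: ${err}`)
    return res.sendStatus(404)
  }
  return res.sendStatus(200)
})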

For now let’s keep it simple anyway and use the Card Token as our Source calling stripe.orders.pay() once the order is successfully created.

In a real-world scenario, you probably want to separate the order creation from payment by exposing them on different endpoints, so if the payment fails the Client can try again later without having to recreate the order. However, we still have a lot to cover, so let’s not overcomplicate things.

// index.js
const stripe = require('stripe')('sk_test_xxxxxxxxxxxxxxxxxxxxxx')

app.post('/api/shop/order', async (req, res) => {
  const order = req.body.order
  const source = req.body.source
  try {
    const stripeOrder = await stripe.orders.create(order)
    console.log(`Order created: ${stripeOrder.id}`)
    await stripe.orders.pay(stripeOrder.id, {source})
  } catch (err) {
    // Handle Stripe errors here: no such coupon, sku, etc.
    console.log(`Order error: ${err}`)
    return res.sendStatus(404)
  }
  return res.sendStatus(200)
})
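
If you do go down that road later, a rough sketch of the split could look like this. The route names are made up for illustration, and you'd return (or persist) the order id so the client can retry the payment:

// index.js (sketch) - separating order creation from payment
app.post('/api/shop/order/create', async (req, res) => {
  try {
    const stripeOrder = await stripe.orders.create(req.body.order)
    // Hand the order id back to the client so it can pay (or retry) later
    return res.send({ orderId: stripeOrder.id })
  } catch (err) {
    console.log(`Order creation error: ${err}`)
    return res.sendStatus(404)
  }
})

app.post('/api/shop/order/:id/pay', async (req, res) => {
  try {
    await stripe.orders.pay(req.params.id, { source: req.body.source })
  } catch (err) {
    console.log(`Payment error: ${err}`)
    return res.sendStatus(402)
  }
  return res.sendStatus(200)
})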

Now we’re able to handle orders on the backend, but we also need to implement this on the UI.

First, let's extend the state of the <Shop/> with the fields the Stripe API expects for an order.

You can find out what an order request should look like here. We'll need an address object with line1, city, state, country and postal_code fields, plus a name, an email and a coupon field, to get our customers ready for coupon hunting.

// Shop.js
class Shop extends Component {
  constructor(props) {
    super(props)
    this.state = {
      fetching: false,
      cart: {
        banana: 0,
        cucumber: 0
      },
      coupon: '',
      email: '',
      name: '',
      address : {
        line1: '',
        city: '',
        state: '',
        country: '',
        postal_code: ''
      }
    }
    this.handleCartChange = this.handleCartChange.bind(this)
    this.handleCartReset = this.handleCartReset.bind(this)
    this.handleAddressChange = this.handleAddressChange.bind(this)
    this.handleChange = this.handleChange.bind(this)
    this.handleSubmit = this.handleSubmit.bind(this)
  }

  handleChange(evt) {
    evt.preventDefault()
    this.setState({[evt.target.name]: evt.target.value})
  }

  handleAddressChange(evt) {
    evt.preventDefault()
    const address = this.state.address
    address[evt.target.name] = evt.target.value
    this.setState({address})
  }
  // ...
}

Now we are ready to create the input fields. We should, of course, disable the submit button when the input fields are empty. Just the usual deal.

// Shop.js
render () {
  const state = this.state
  const fetching = state.fetching
  const cart = state.cart
  const address = state.address
  const submittable = (cart.banana !== 0 || cart.cucumber !== 0) && state.email && state.name && address.line1 && address.city && address.state && address.country && address.postal_code
  return (
// ...
    <div>Name: <input type="text" name="name" onChange={this.handleChange}/></div>
    <div>Email: <input  type="text" name="email" onChange={this.handleChange}/></div>
    <div>Address Line: <input  type="text" name="line1" onChange={this.handleAddressChange}/></div>
    <div>City: <input  type="text" name="city" onChange={this.handleAddressChange}/></div>
    <div>State: <input  type="text" name="state" onChange={this.handleAddressChange}/></div>
    <div>Country: <input  type="text" name="country" onChange={this.handleAddressChange}/></div>
    <div>Postal Code: <input  type="text" name="postal_code" onChange={this.handleAddressChange}/></div>
    <div>Coupon Code: <input  type="text" name="coupon" onChange={this.handleChange}/></div>
    {!fetching
      ? <button type="submit" disabled={!submittable}>Purchase</button>
      : 'Purchasing...'}
// ...

We also have to define the purchasable items.

These items are identified by Stripe using a Stock Keeping Unit (SKU), which can be created on the dashboard as well.

First, we have to create the Products (Banana and Cucumber under dashboard -> Orders -> Products) and then assign an SKU to each of them (click on the created product and Add SKU in the Inventory group). An SKU specifies a concrete variant of a product, including its properties (size, color, quantity, and price), so a product can have multiple SKUs. You can also create them from code, as sketched below the screenshots.

stripe-payments-dashboard-banana-product-creation

Stripe-Payments-Dashboard-Stock-Keeping-Unit
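
If you'd rather script this step than click through the dashboard, the Node SDK exposes the same Products and SKUs API. Here's a rough sketch, using the stripe instance from our backend; the name, price and inventory values are made up for this example:

// sketch - creating a product and an SKU from code instead of the dashboard
async function createBananaSku() {
  const product = await stripe.products.create({
    name: 'Banana',
    type: 'good', // 'good' marks a physical, orderable product (as opposed to 'service')
    shippable: true
  })
  const sku = await stripe.skus.create({
    product: product.id,
    price: 150, // in cents, just like in our prices object
    currency: 'usd',
    inventory: { type: 'finite', quantity: 100 }
  })
  return sku.id // this is the id to put into the skus map on the frontend
}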

After we've created our products and assigned SKUs to them, we add their identifiers to the webshop so we can build up the order.

// Shop.js
const skus = {
  banana: 1,
  cucumber: 2
}

We are ready to send orders to our express API on submit. We do not have to calculate the total price of orders from now on. Stripe can sum it up for us, based on the SKUs, quantities, and coupons.

// Shop.js
handleSubmit(evt) {
  evt.preventDefault()
  this.setState({fetching: true})
  const state = this.state
  const cart = state.cart
  
  this.props.stripe.createToken({name: state.name}).then(({token}) => {
    // Create order
    const order = {
      currency: 'usd',
      items: Object.keys(cart).filter((name) => cart[name] > 0).map(name => {
        return {
          type: 'sku',
          parent: skus[name],
          quantity: cart[name]
        }
      }),
      email: state.email,
      shipping: {
        name: state.name,
        address: state.address
      }
    }
    // Add coupon if given
    if (state.coupon) {
      order.coupon = state.coupon
    }
    // Send order
    axios.post(`http://localhost:3001/api/shop/order`, {order, source: token.id})
    .then(() => {
      this.setState({fetching: false})
      alert(`Thank you for your purchase!`)
    })
    .catch(error => {
      this.setState({fetching: false})
      console.log(error)
    })
  }).catch(error => {
    this.setState({fetching: false})
    console.log(error)
  })
}

Let’s create a coupon for testing purposes. This can be done on the dashboard as well. You can find this option under the Billing menu on the Coupons tab.

There are multiple types of coupons based on their duration, but only coupons with the type Once can be used for orders. The rest of the coupons can be attached to Stripe Customers.

You can also specify a lot of parameters for the coupon you create, such as how many times it can be used, whether it is amount-based or percentage-based, and when the coupon will expire. For now, we need a coupon that can be used only once and reduces the price by a fixed amount.

Stripe-Payments-Dashboard-Coupon-Creation
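
The same coupon can also be created from code with the Node SDK; a quick sketch with made-up values:

// sketch - a one-time, fixed-amount coupon created via the Node SDK
stripe.coupons.create({
  id: 'TEST300',    // the code your customers will type in
  duration: 'once', // only 'once' coupons can be applied to orders
  amount_off: 300,  // 3 USD off, given in cents
  currency: 'usd'
}).then(coupon => console.log(`Coupon created: ${coupon.id}`))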

Great! Now we have our products, we can create orders, and we can also ask Stripe to charge the customer’s card for us. But we are still not ready to ship the products as we have no idea at the moment whether the charge was successful. To get that information, we need to set up webhooks, so Stripe can let us know when the money is on its way.

Stripe-Payments-Shop-Orders

Setting up Stripe Webhooks to Verify Payments

As we discussed earlier, we are not assigning cards but Sources to Customers. The reason behind that is that Stripe is capable of handling several payment methods, some of which may take days to be verified.

We need to set up an endpoint Stripe can call when an event — such as a successful payment — has happened. Webhooks are also useful when an event is not initiated by us via calling the API, but comes straight from Stripe.

Imagine that you have a subscription service where you don't initiate the charge yourself every month: Stripe does it for you. In this case, you can set up a webhook, and you will get notified when the recurring payment was successful or if it failed.

In this example, we only want to be notified when an order gets paid. When that happens, Stripe can notify us by calling an endpoint on our API with an HTTP request containing the payment data in the request body. We don't have a static IP at the moment, but we need a way to expose our local API to the public internet. We can use ngrok for that: just download it and run it with the ./ngrok http 3001 command to get an ngrok URL pointing to our localhost:3001.

We also have to set up our webhook on the Stripe dashboard. Go to Developers -> Webhooks, click on Add endpoint and type in your ngrok url followed by the endpoint to be called e.g. http://92832de0.ngrok.io/api/shop/order/process. Then under Filter event select Select types to send and search for order.payment_succeeded.

Stripe-Dashboard-Webhook-Creation

The data sent in the request body is signed by Stripe: the request carries a signature in its header, which can be verified using the webhook signing secret found on the webhooks dashboard.

This also means that we cannot simply rely on the JSON body parsed by bodyParser for the verification: we need the raw request body, so we add a verify callback to bodyParser that saves it whenever the URL starts with /api/shop/order/process. Then we use the stripe.webhooks.constructEvent() function provided by the Stripe Node SDK to verify the signature and parse the event for us.

// index.js
const bodyParser = require('body-parser')

app.use(bodyParser.json({
  verify: (req, res, buf) => {
    if (req.originalUrl.startsWith('/api/shop/order/process')) {
      req.rawBody = buf.toString()
    }
  }
}))

app.use(bodyParser.urlencoded({
  extended: false
}))

app.post('/api/shop/order/process', async (req, res) => {
  const sig = req.headers['stripe-signature']
  try {
    const event = await stripe.webhooks.constructEvent(req.rawBody, sig, 'whsec_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
    console.log(`Processing Order : ${event.data.object.id}`)
    // Process the paid order here
  } catch (err) {
    return res.sendStatus(500)
  }
  return res.sendStatus(200)
})

After an order is successfully paid, we can parse it and send it to other APIs, like Salesforce or Stamps, to pack things up and get the package ready to ship.

Wrapping up our Stripe JS tutorial

My goal with this guide was to help you through the process of creating a webshop using JavaScript & Stripe. I hope you learned from our experience and will use this guide when you decide to implement a similar system in the future.

In case you need help with Stripe development, you'd like to learn more about how to use the Stripe API, or you're just looking for Node & React development in general, feel free to reach out to us at info@risingstack.com or via our Node.js development website.

How to Deploy a Ceph Storage to Bare Virtual Machines

Ceph is a freely available storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block- and file-level storage. Ceph aims primarily for completely distributed operation without a single point of failure. Ceph storage manages data replication and is generally quite fault-tolerant. As a result of its design, the system is both self-healing and self-managing.

Ceph has loads of benefits and great features, but the main drawback is that you have to host and manage it yourself. In this post, we’ll check two different approaches of virtual machine deployment with Ceph.

Anatomy of a Ceph cluster

Before we dive into the actual deployment process, let’s see what we’ll need to fire up for our own Ceph cluster.

There are three services that form the backbone of the cluster:

  • ceph monitors (ceph-mon) maintain maps of the cluster state and are also responsible for managing authentication between daemons and clients
  • managers (ceph-mgr) are responsible for keeping track of runtime metrics and the current state of the Ceph cluster
  • object storage daemons (ceph-osd) store data, handle data replication, recovery, rebalancing, and provide some ceph monitoring information.

Additionally, we can add further parts to the cluster to support different storage solutions:

  • metadata servers (ceph-mds) store metadata on behalf of the Ceph Filesystem
  • rados gateway (ceph-rgw) is an HTTP server for interacting with a Ceph Storage Cluster that provides interfaces compatible with OpenStack Swift and Amazon S3.

There are multiple ways of deploying these services. We’ll check two of them:

  • first, using the ceph-deploy tool,
  • then a Docker Swarm-based VM deployment.

Let’s kick it off!

Ceph Setup

Okay, a disclaimer first. As this is not a production infrastructure, we’ll cut a couple of corners.

You should not run multiple different Ceph daemons on the same host, but for the sake of simplicity, we'll only use 3 virtual machines for the whole cluster.

In the case of OSDs, you can run multiple of them on the same host, but using the same storage drive for multiple instances is a bad idea as the disk’s I/O speed might limit the OSD daemons’ performance.

For this tutorial, I’ve created 4 EC2 machines in AWS: 3 for Ceph itself and 1 admin node. For ceph-deploy to work, the admin node requires passwordless SSH access to the nodes and that SSH user has to have passwordless sudo privileges.

In my case, as all machines are in the same subnet on AWS, connectivity between them is not an issue. However, in other cases editing the hosts file might be necessary to ensure proper connection.

Depending on where you deploy Ceph, security groups, firewall settings or other resources have to be adjusted to open these ports:

  • 22 for SSH
  • 6789 for monitors
  • 6800:7300 for OSDs, managers and metadata servers
  • 8080 for dashboard
  • 7480 for rados gateway

Without further ado, let’s start deployment.

Ceph Storage Deployment

Install prerequisites on all machines

$ sudo apt update
$ sudo apt -y install ntp python

For Ceph to work seamlessly, we have to make sure the system clocks are not skewed. The suggested solution is to install ntp on all machines and it will take care of the problem. While we’re at it, let’s install python on all hosts as ceph-deploy depends on it being available on the target machines.

Prepare the admin node

$ ssh -i ~/.ssh/id_rsa -A ubuntu@13.53.36.123

As all the machines have my public key added to authorized_keys thanks to AWS, I can use ssh agent forwarding to access the Ceph machines from the admin node. The -i flag makes sure the proper key is used for the connection, and the -A flag takes care of forwarding my agent.

$ wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
$ echo deb https://download.ceph.com/debian-nautilus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
$ sudo apt update
$ sudo apt -y install ceph-deploy

We’ll use the latest nautilus release in this example. If you want to deploy a different version, just change the debian-nautilus part to your desired release (luminous, mimic, etc.).

$ echo "StrictHostKeyChecking no" | sudo tee -a /etc/ssh/ssh_config > /dev/null

OR

$ ssh-keyscan -H 10.0.0.124,10.0.0.216,10.0.0.104 >> ~/.ssh/known_hosts

Ceph-deploy uses SSH connections to manage the nodes we provide. Each time you SSH to a machine that is not in the list of known_hosts (~/.ssh/known_hosts), you’ll get prompted whether you want to continue connecting or not. This interruption does not mesh well with the deployment process, so we either have to use ssh-keyscan to grab the fingerprint of all the target machines or disable the strict host key checking outright.

10.0.0.124 ip-10-0-0-124.eu-north-1.compute.internal ip-10-0-0-124
10.0.0.216 ip-10-0-0-216.eu-north-1.compute.internal ip-10-0-0-216
10.0.0.104 ip-10-0-0-104.eu-north-1.compute.internal ip-10-0-0-104

Even though the target machines are in the same subnet as our admin and they can access each other, we have to add them to the hosts file (/etc/hosts) for ceph-deploy to work properly. Ceph-deploy creates monitors by the provided hostname, so make sure it matches the actual hostname of the machines otherwise the monitors won’t be able to join the quorum and the deployment fails. Don’t forget to reboot the admin node for the changes to take effect.

$ mkdir ceph-deploy
$ cd ceph-deploy

As a final step of the preparation, let’s create a dedicated folder as ceph-deploy will create multiple config and key files during the process.

Deploy resources

$ ceph-deploy new ip-10-0-0-124 ip-10-0-0-216 ip-10-0-0-104

The command ceph-deploy new creates the necessary files for the deployment. Pass it the hostnames of the monitor nodes, and it will create ceph.conf and ceph.mon.keyring along with a log file.

The ceph.conf should look something like this:

[global]
fsid = 0572e283-306a-49df-a134-4409ac3f11da
mon_initial_members = ip-10-0-0-124, ip-10-0-0-216, ip-10-0-0-104
mon_host = 10.0.0.124,10.0.0.216,10.0.0.104
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

It has a unique ID called fsid, the monitor hostnames and addresses and the authentication modes. Ceph provides two authentication modes: none (anyone can access data without authentication) or cephx (key based authentication).

The other file, the monitor keyring, is another important piece of the puzzle, as all monitors must have identical keyrings in a cluster with multiple monitors. Luckily, ceph-deploy takes care of propagating the key file during deployment.

$ ceph-deploy install --release nautilus ip-10-0-0-124 ip-10-0-0-216 ip-10-0-0-104

As you might have noticed so far, we haven’t installed ceph on the target nodes yet. We could do that one-by-one, but a more convenient way is to let ceph-deploy take care of the task. Don’t forget to specify the release of your choice, otherwise you might run into a mismatch between your admin and targets.

$ ceph-deploy mon create-initial

Finally, the first piece of the cluster is up and running! create-initial will deploy the monitors specified in ceph.conf we generated previously and also gather various key files. The command will only complete successfully if all the monitors are up and in the quorum.

$ ceph-deploy admin ip-10-0-0-124 ip-10-0-0-216 ip-10-0-0-104

Executing ceph-deploy admin will push a Ceph configuration file and the ceph.client.admin.keyring to the /etc/ceph directory of the nodes, so we can use the ceph CLI without having to provide the ceph.client.admin.keyring each time to execute a command.

At this point, we can take a peek at our cluster. Let’s SSH into a target machine (we can do it directly from the admin node thanks to agent forwarding) and run sudo ceph status.

$ sudo ceph status
  cluster:
    id:     0572e283-306a-49df-a134-4409ac3f11da
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ip-10-0-0-104,ip-10-0-0-124,ip-10-0-0-216 (age 110m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

Here we get a quick overview of what we have so far. Our cluster seems to be healthy and all three monitors are listed under services. Let’s go back to the admin and continue adding pieces.

$ ceph-deploy mgr create ip-10-0-0-124

For luminous+ builds, a manager daemon is required. It's responsible for monitoring the state of the cluster and also manages modules/plugins.

Okay, now we have all the management in place, let’s add some storage to the cluster to make it actually useful, shall we?

First, we have to find out (on each target machine) the label of the drive we want to use. To fetch the list of available disks on a specific node, run

$ ceph-deploy disk list ip-10-0-0-104

Here’s a sample output:

ceph storage deploy sample output

$ ceph-deploy osd create --data /dev/nvme1n1 ip-10-0-0-124
$ ceph-deploy osd create --data /dev/nvme1n1 ip-10-0-0-216
$ ceph-deploy osd create --data /dev/nvme1n1 ip-10-0-0-104

In my case the label was nvme1n1 on all 3 machines (courtesy of AWS), so to add OSDs to the cluster I just ran these 3 commands.

At this point, our cluster is basically ready. We can run ceph status to see that our monitors, managers and OSDs are up and running. But nobody wants to SSH into a machine every time to check the status of the cluster. Luckily there’s a pretty neat dashboard that comes with Ceph, we just have to enable it.

…Or at least that's what I thought. The dashboard was introduced in the luminous release and was further improved in mimic. However, we're currently deploying nautilus, the latest version of Ceph. After trying the usual way of enabling the dashboard via a manager

$ sudo ceph mgr module enable dashboard

we get an error message saying Error ENOENT: all mgr daemons do not support module 'dashboard', pass --force to force enablement.

Turns out, in nautilus the dashboard package is no longer installed by default. We can check the available modules by running

$ sudo ceph mgr module ls

and, as expected, dashboard is not there: it comes in the form of a separate package. So we have to install it first; luckily, it's pretty easy.

$ sudo apt install -y ceph-mgr-dashboard

Now we can enable it, right? Not so fast. There’s a dependency that has to be installed on all manager hosts, otherwise we get a slightly cryptic error message saying Error EIO: Module 'dashboard' has experienced an error and cannot handle commands: No module named routes.

$ sudo apt install -y python-routes

We’re all set to enable the dashboard module now. As it’s a public-facing page that requires login, we should set up a cert for SSL. For the sake of simplicity, I’ve just disabled the SSL feature. You should never do this in production, check out the official docs to see how to set up a cert properly. Also, we’ll need to create an admin user so we can log in to our dashboard.

$ sudo ceph mgr module enable dashboard
$ sudo ceph config set mgr mgr/dashboard/ssl false
$ sudo ceph dashboard ac-user-create admin secret administrator

By default, the dashboard is available on the host running the manager on port 8080. After logging in, we get an overview of the cluster status, and under the cluster menu, we get really detailed overviews of each running daemon.

ceph storage deployment dashboard
ceph cluster dashboard

If we try to navigate to the Filesystems or Object Gateway tabs, we get a notification that we haven’t configured the required resources to access these features. Our cluster can only be used as a block storage right now. We have to deploy a couple of extra things to extend its usability.

Quick detour: In case you’re looking for a company that can help you with Ceph, or DevOps in general, feel free to reach out to us at RisingStack!

Using the Ceph filesystem

Going back to our admin node, running

$ ceph-deploy mds create ip-10-0-0-124 ip-10-0-0-216 ip-10-0-0-104

will create the metadata servers, which will be inactive for now, as we haven't enabled the feature yet. First, we need to create two RADOS pools, one for the actual data and one for the metadata.

$ sudo ceph osd pool create cephfs_data 8
$ sudo ceph osd pool create cephfs_metadata 8

There are a couple of things to consider when creating pools that we won’t cover here. Please consult the documentation for further details.

After creating the required pools, we’re ready to enable the filesystem feature

$ sudo ceph fs new cephfs cephfs_metadata cephfs_data

The MDS daemons will now be able to enter an active state, and we are ready to mount the filesystem. We have two options to do that, via the kernel driver or as FUSE with ceph-fuse.

Before we continue with the mounting, let’s create a user keyring that we can use in both solutions for authorization and authentication as we have cephx enabled. There are multiple restrictions that can be set up when creating a new key specified in the docs. For example:

$ sudo ceph auth get-or-create client.user mon 'allow r' mds 'allow r, allow rw path=/home/cephfs' osd 'allow rw pool=cephfs_data' -o /etc/ceph/ceph.client.user.keyring

will create a new client key with the name user and output it into ceph.client.user.keyring. It will provide write access for the MDS only to the /home/cephfs directory, and the client will only have write access within the cephfs_data pool.

Mounting with the kernel

Now let’s create a dedicated directory and then use the key from the previously generated keyring to mount the filesystem with the kernel.

$ sudo mkdir /mnt/mycephfs
$ sudo mount -t ceph 13.53.114.94:6789:/ /mnt/mycephfs -o name=user,secret=AQBxnDFdS5atIxAAV0rL9klnSxwy6EFpR/EFbg==

Attaching with FUSE

Mounting the filesystem with FUSE is not much different either. It requires installing the ceph-fuse package.

$ sudo apt install -y ceph-fuse

Before we run the command, we have to retrieve the ceph.conf and ceph.client.user.keyring files from the Ceph host and put them in /etc/ceph. The easiest solution is to use scp.

$ sudo scp ubuntu@13.53.114.94:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
$ sudo scp ubuntu@13.53.114.94:/etc/ceph/ceph.client.user.keyring /etc/ceph/ceph.keyring

Now we are ready to mount the filesystem.

$ sudo mkdir cephfs
$ sudo ceph-fuse -m 13.53.114.94:6789 cephfs

Using the RADOS gateway

To enable the S3 management feature of the cluster, we have to add one final piece, the rados gateway.

$ ceph-deploy rgw create ip-10-0-0-124

For the dashboard, it’s required to create a radosgw-admin user with the system flag to enable the Object Storage management interface. We also have to provide the user’s access_key and secret_key to the dashboard before we can start using it.

$ sudo radosgw-admin user create --uid=rg_wadmin --display-name=rgw_admin --system
$ sudo ceph dashboard set-rgw-api-access-key <access_key>
$ sudo ceph dashboard set-rgw-api-secret-key <secret_key>

Using the Ceph Object Storage is really easy, as RGW provides an interface identical to S3. You can use your existing S3 requests and code without any modifications; you just have to change the connection string and the access and secret keys.
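
For example, with the aws-sdk Node package you only have to point the S3 client at the gateway instead of AWS. The endpoint below is a placeholder for your RGW host, and path-style addressing is the setting RGW setups typically need:

// sketch - talking to the RADOS gateway with the regular aws-sdk S3 client
const AWS = require('aws-sdk')

const s3 = new AWS.S3({
  endpoint: 'http://13.53.114.94:7480', // your RGW host and port, not AWS
  accessKeyId: '<access_key>',          // from the radosgw-admin user we just created
  secretAccessKey: '<secret_key>',
  s3ForcePathStyle: true,               // address buckets by path instead of subdomain
  signatureVersion: 'v4'
})

s3.listBuckets((err, data) => {
  if (err) return console.error(err)
  console.log(data.Buckets)
})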

Ceph Storage Monitoring

The dashboard we’ve deployed shows a lot of useful information about our cluster, but monitoring is not its strongest suit. Luckily Ceph comes with a Prometheus module. After enabling it by running:

$ sudo ceph mgr module enable prometheus

a wide variety of metrics will be available on the given host on port 9283 by default. To make use of this exposed data, we'll have to set up a Prometheus instance.

I strongly suggest running the following containers on a separate machine from your Ceph cluster. In case you are just experimenting (like me) and don’t want to use a lot of VMs, make sure you have enough memory and CPU left on your virtual machine before firing up docker, as it can lead to strange behaviour and crashes if it runs out of resources.

There are multiple ways of firing up Prometheus, probably the most convenient is with docker. After installing docker on your machine, create a prometheus.yml file to provide the endpoint where it can access our Ceph metrics.

# /etc/prometheus.yml

scrape_configs:
  - job_name: 'ceph'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['13.53.114.94:9283']

Then launch the container itself by running:

$ sudo docker run -p 9090:9090 -v /etc/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus

Prometheus will start scraping our data, and it will show up on its dashboard, which we can access on port 9090 on the host machine. Prometheus is great at collecting data but does not provide a very eye-pleasing dashboard. That's the main reason why it's usually paired with Grafana, which provides awesome visualizations for the data collected by Prometheus. It can be launched with docker as well.

$ sudo docker run -d -p 3000:3000 grafana/grafana

Grafana is fantastic when it comes to visualizations, but setting up dashboards can be a daunting task. To make our lives easier, we can load one of the pre-prepared dashboards, for example this one.

ceph storage grafana monitoring

Ceph Deployment: Lessons Learned & Next Up

CEPH can be a great alternative to AWS S3 or other object storages when running in the public cloud is simply not an option and you have to operate your service in a private cloud. The fact that it provides an S3-compatible interface makes it a lot easier to port other tools that were written with a “cloud first” mentality. It also plays nicely with Prometheus, so you don't need to worry about setting up proper monitoring for it, or you can swap it for a simpler, more battle-hardened solution such as Nagios.

In this article, we deployed CEPH to bare virtual machines, but you might need to integrate it into your Kubernetes or Docker Swarm cluster. While it is perfectly fine to install it on VMs next to your container orchestration tool, you might want to leverage the services they provide when you deploy your CEPH cluster. If that is your use case, stay tuned for our next post covering CEPH where we’ll take a look at the black magic required to use CEPH on Docker Swarm and Kubernetes.

In the next CEPH tutorial which we’ll release next week, we’re going to take a look at valid ceph storage alternatives with Docker or with Kubernetes.

PS: Feel free to reach out to us at RisingStack in case you need help with Ceph or Ops in general!