Applications have quickly become complex webs of interconnected microservices. Failures in the API calls between microservices grow more common and far more dastardly, wreaking havoc throughout applications in unforeseen ways. Accidents and errors can happen even with the most brilliant engineers and the most controlled environments in the world. Unfortunately, this means that outright elimination of API call failures is not an option. Instead, we have to prepare our applications for failure, and this is where event-driven architecture comes into play.
Get started with Kafka and Docker in 20 minutes
Apache Kafka is a high-throughput, highly available, and scalable solution chosen by the world's top companies for event streaming, stream processing, log aggregation, and more. Kafka runs on the platform of your choice, such as Kubernetes or ECS, as a cluster of one or more Kafka nodes. A Kafka cluster is initialized with zero or more topics, which you can think of as message channels or queues. Clients connect to Kafka to publish messages to topics or to consume messages from the topics they subscribe to.
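To make that publish/consume flow concrete, here's a minimal sketch using the kafka-python client. The broker address, topic name, and consumer group below are placeholder values for illustration, not details from the full tutorial:

```python
from kafka import KafkaProducer, KafkaConsumer

# Publish a message to the "events" topic on a single-node local cluster.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("events", b'{"order_id": 42, "status": "created"}')
producer.flush()

# Consume from the same topic as part of the "billing" consumer group.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="billing",
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.value)
```

Because consumers in different groups each receive their own copy of every message, new services can subscribe to a topic without the publisher changing at all.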
Implement RabbitMQ on Docker in 20 minutes
Here at Architect, it's no secret that we love portable microservices. And what better way to make your services portable than by decoupling their interactions?
Today we'll talk about decoupling your services using a classic communication pattern: the message queue. In this tutorial, we'll show you how to get our favorite open-source message broker, RabbitMQ, up and running in just 20 minutes. Then we'll use Architect to deploy the stack to both your local and remote environments.
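As a preview of the pattern, here's a minimal publisher sketch using the pika client, assuming a RabbitMQ broker on localhost and a queue named "tasks" (both placeholders for this example):

```python
import pika

# Connect to a local RabbitMQ broker on the default port (5672).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare a durable queue so messages survive a broker restart.
channel.queue_declare(queue="tasks", durable=True)

# Publishing through the default exchange routes the message directly
# to the queue named by routing_key.
channel.basic_publish(
    exchange="",
    routing_key="tasks",
    body=b'{"job": "resize-image", "id": 123}',
    properties=pika.BasicProperties(delivery_mode=2),  # mark message persistent
)
connection.close()
```

A consumer on the other side pulls jobs from the same queue at its own pace, which is exactly the decoupling the full tutorial walks through.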
Deploy your Django app with Docker
Django is an excellent Python web framework, but it can be tricky to deploy to the cloud. If you're building in Python, you want the confidence that what you develop and run locally will translate to production. This quick-start guide demonstrates how to set up a simple Django/PostgreSQL app that runs locally for development and is production-ready in the cloud.
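One piece of that setup, sketched here as an assumption rather than an excerpt from the guide, is driving the database configuration from environment variables so the same code runs locally and in the cloud:

```python
# settings.py (sketch): the variable names are placeholders chosen for
# this example; use whatever your environment actually provides.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "app"),
        "USER": os.environ.get("POSTGRES_USER", "app"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
        "HOST": os.environ.get("POSTGRES_HOST", "localhost"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}
```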
A developer's guide to GitOps
One of a modern DevOps team's driving objectives is to help developers deploy features as quickly and safely as possible. This means creating tools and processes that do everything from provisioning private developer environments to deploying and securing production workloads. This effort is a constant balance between enabling developers to move quickly and ensuring that their haste doesn't lead to critical outages. Fortunately, both speed and stability improve tremendously whenever automation, like GitOps, is introduced.
Why distributed apps need dependency management
Distributed cloud applications (aka microservices) have introduced an enormous amount of complexity into the design and operation of cloud software. What used to manifest itself as complexity hidden within a single process or runtime now finds itself spread across tens or hundreds of loosely coupled services. While all of these services can use different languages and can scale independently from one another, the distributed nature can often make the app as a whole hard to navigate, hard to deploy, and hard to secure.
Creating microservices in Nest.js
Microservices can seem intimidating at first, but at the end of the day they're just regular applications. They can execute tasks, listen for requests, connect to databases, and everything else a regular API or process would do. We only call them microservices colloquially because of the way we use them, not because they are inherently small.
In this tutorial we'll demystify the creation and operation of microservices for Node.js developers by creating a microservice using a popular Node.js framework, NestJS. We won't go into detail about the design or architecture of NestJS applications specifically, so if you're unfamiliar with the framework I'd recommend you check out its docs first, or simply skip to another one of our Node.js samples that uses Express directly.
Introducing the world's first DevOps-as-a-Service platform
We are pleased to announce the open beta release of Architect's DevOps-as-a-Service platform – a groundbreaking continuous delivery toolset that helps teams achieve deployment, networking, and security automation on a distributed architecture, all at once. Through our unique incorporation of dependency management into the deployment process, even the most complex stacks can be deployed to your favorite cloud provider with the push of a button!
Cycling credentials without cycling containers
In my prior posts, we've talked about how to instrument credential cycling and why it's important for enabling application portability. In this post, we'll take the notion of credential cycling even further and show how secrets can be injected into volumes mounted to your applications. Injecting secrets into mounted volumes is a great way to securely provide credentials to your applications without forcing all your containers to cycle on every update.
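To illustrate the idea, here's a small sketch of an application reading a credential from a mounted secret file. The mount path is hypothetical; the key point is that the file is re-read so a cycled secret takes effect without restarting the container:

```python
from pathlib import Path

# Hypothetical mount point: the platform writes the current database
# password here and rewrites the file whenever the credential is cycled.
SECRET_PATH = Path("/var/run/secrets/db-password")

def current_db_password() -> str:
    # Read the file on every call so a rotated value is picked up
    # without cycling the container.
    return SECRET_PATH.read_text().strip()
```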
How dynamic credentialing makes apps portable
In my last post, we talked about how to leverage secret managers to safely store and cycle application credentials in production. In this post, we're going to take the concept of credential cycling a step further and use dynamic credentialing to make it easier to deploy an app or service to parallel environments.
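As a rough sketch of what dynamic credentialing can look like, here's an example using HashiCorp Vault's database secrets engine via the hvac client. The role naming convention and environment variable names are assumptions for illustration, not part of the original post:

```python
import os
import hvac

# Authenticate to Vault using values injected by the environment.
client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

# Hypothetical convention: one database role per environment, so each
# parallel environment receives its own short-lived credentials.
environment = os.environ.get("ENVIRONMENT", "staging")
creds = client.secrets.database.generate_credentials(name=f"app-{environment}")

db_user = creds["data"]["username"]
db_password = creds["data"]["password"]
# The app connects with credentials generated at deploy time, so no
# static secret has to be shared across environments.
```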