Continuous Integration and Continuous Delivery/Continuous Deployment (CI/CD) pipelines are a must for modern software development. Continuous integration means developers continuously merge changes into a shared main branch, so each pull request contains a small, isolated change. Small changes are easier to diagnose and surface problems faster than waiting to merge several disparate changes all at once.
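A minimal CI workflow of this kind can be sketched as a GitHub Actions config that runs the test suite on every pull request (a hypothetical fragment; the job names and npm commands are illustrative assumptions, not taken from the original post):

```yaml
# .github/workflows/ci.yml -- hypothetical sketch
name: ci
on:
  pull_request:
    branches: [main]   # run checks before changes merge to main
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci     # install dependencies from the lockfile
      - run: npm test   # run the test suite on each small, isolated change
```

Because every pull request is small, a red build points directly at the change that broke it.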
DevOps? DevEverything! (Or why I joined Architect)
There was a time when developers wrote code, QA engineers tested the code, Ops handled deployment and monitoring, and the concept of observability did not even exist. The DevOps movement intended to break down the silos between development and operations, but for many software companies, this was merely a rename of the Ops team to the DevOps team. Today, things are finally changing for real.
Microservice orchestration or choreography?
Microservice architecture came to prominence thanks to advancements in technology (such as GPUs) and the exponential rise in data generation. At present, microservices are a well-established architectural framework that allows organizations to deliver large-scale applications rapidly. This blog post will cover the difference between microservice orchestration and microservice choreography, the two patterns that enable microservices to communicate and coordinate.
What is a staging environment?
Within any software development process, “production” is the final environment in the sequence of environments to which code is deployed. Before code is pushed to production, it must be extensively tested so that it is ready for public availability.
This extensive testing is easier said than done. Developers write tests to ensure their features can handle unexpected user behavior, but what about odd infrastructure behavior? Running a production environment is an exhausting and expensive effort. As developers, we don't want to wait for code to land in production only to find out that it doesn't behave as intended. This is where the staging environment comes in.
The basics of secret management
The need for secret management exists in just about every application that is developed: at a minimum, every application needs to run both locally and in production. In most cases, an application will also need a staging server and a way to dynamically spin up test environments to try out some new idea your team cooked up. As a result, every application ends up with some sort of configuration system, a way to change how the system operates at runtime.
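One common shape for such a configuration system is to read secrets and settings from environment variables, so the same code runs unchanged in every environment. A minimal sketch (the variable names and defaults here are hypothetical, not from the original post):

```python
import os

def load_config(env=os.environ):
    """Build runtime configuration from the environment.

    Secrets and endpoints come from the environment, never from code,
    so local, staging, and production differ only in what is injected.
    """
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost:5432/dev"),
        "api_key": env.get("API_KEY", ""),  # empty locally; injected in production
        "debug": env.get("APP_ENV", "local") != "production",
    }

config = load_config()
```

In production, a secret manager or the deployment platform injects `DATABASE_URL` and `API_KEY`; locally, the defaults keep the app runnable without any secrets on disk.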
What is a production environment?
A lot of work goes into designing and developing cloud applications and services – from the applications or services themselves to the infrastructure that supports them. Despite the rigor involved in the software development lifecycle (SDLC), the final step is always the same: production.
However, developers often conflate production-ready code with the production environment itself. A production environment is traditionally seen as where new software, features, or other updates are made available to users. This is where the end-user experiences the application.
Exciting news! Architect.io has closed a $5M seed round
I'm thrilled to be able to share that our team here at Architect.io has closed $5M in seed round funding, led by Next Coast Ventures and including Abstraction Capital, Spike Ventures, angel investors Jean Sini, JJ Fliegelman, Chris Nguyen, and Marc Chenn, and returning investors NextGen Venture Partners and Comcast Ventures. This round of investment brings Architect's total funding to $6.5M.
Feature Spotlight: Validating component schemas in your IDE
We're excited today to announce improvements in our tooling for validating Component files, including JSON Schema Store support as well as a new validate command for the CLI!
Create and manage an AWS ECS cluster with Terraform
AWS ECS with Fargate is a serverless computing platform that makes running containerized services on AWS easier than ever before. Before Fargate, users who wanted to deploy services to an AWS ECS cluster had to manage one or more EC2 instances of similar or varying sizes and figure out how to scale them as necessary. With Fargate, a user simply defines the compute resources, such as CPU and memory, that a service needs to run, and Fargate manages where to run the container behind the scenes. At no point is setting up an EC2 instance required.
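In Terraform, that difference is visible in the resources themselves: a Fargate task declares only CPU and memory, and no EC2 instances appear anywhere. A hypothetical sketch (resource names, the `nginx` image, and the `subnet_ids` variable are illustrative assumptions):

```hcl
# Hypothetical sketch: with Fargate, only CPU and memory are declared.
resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256 # 0.25 vCPU
  memory                   = 512 # MiB

  container_definitions = jsonencode([{
    name         = "app"
    image        = "nginx:latest"
    essential    = true
    portMappings = [{ containerPort = 80 }]
  }])
}

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.this.id # assumes a cluster resource exists
  task_definition = aws_ecs_task_definition.app.arn
  launch_type     = "FARGATE"
  desired_count   = 1

  network_configuration {
    subnets = var.subnet_ids # assumed input variable
  }
}
```

Fargate decides which underlying hosts run the container; the configuration never mentions instance types or autoscaling groups.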
Get started with the Terraform Kubernetes provider
Kubernetes is a powerful yet complicated container orchestration system. It can be used to run resilient workloads on virtually any cloud platform, including AWS, GCP, Azure, DigitalOcean, and more. In this tutorial, you'll explore some of the most commonly used building blocks of a Kubernetes application – Pods, Deployments, and Services. These resources could be created with standard Kubernetes manifests if desired, but that approach has one major drawback: there's no state preservation.
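Managing the same resources through the Terraform Kubernetes provider records them in Terraform's state file, so drift can be detected and changes planned. A hypothetical sketch of a Deployment (the resource names, replica count, and `nginx` image are illustrative assumptions):

```hcl
# Hypothetical sketch: a Deployment managed by the Terraform
# Kubernetes provider, tracked in Terraform state.
provider "kubernetes" {
  config_path = "~/.kube/config" # reuse local kubectl credentials
}

resource "kubernetes_deployment" "app" {
  metadata {
    name = "app"
  }

  spec {
    replicas = 2

    selector {
      match_labels = { app = "app" }
    }

    template {
      metadata {
        labels = { app = "app" }
      }
      spec {
        container {
          name  = "app"
          image = "nginx:1.25"
        }
      }
    }
  }
}
```

After `terraform apply`, subsequent `terraform plan` runs compare the live cluster against this declared state, which plain `kubectl apply` manifests can't do on their own.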