Open-source projects can foster innovation, improve code quality, and attract a vibrant community of contributors. The open-source model is a powerful tool for enhancing products and driving long-term success. But there are caveats, and not all open-source projects succeed. Prashant Ghildiyal's piece on the Why and How of Making Your Product Open Source provides practical insights into open sourcing, including choosing the right license, building a strong community, and nurturing a healthy ecosystem around your project.
About the Speaker:
Viktor Farcic is a Google Developer Expert, Docker Captain, and published author.
He is passionate about DevOps, Containers, Kubernetes, Microservices, Continuous Integration, Delivery and Deployment (CI/CD), and Test-Driven Development (TDD). He delivers talks at community gatherings and conferences. While he loves to share his experiences with the community, he also co-hosts a podcast, DevOps Paradox.
In this podcast, Viktor talks about the stark contrast between an organization's highly experimental culture of trying new tools and platforms and the challenges of sticking to one solution like Kubernetes.
He believes that open-source software can significantly impact the highly compliance-driven SDLC culture found in every organization and bring back the experimental culture. He also discusses the impact that low-code and no-code platforms are making. Finally, Viktor shares what organizations should do when things go wrong in complex systems.
The DevOps Toolkit Series, DevOps Paradox (https://amzn.to/2myrYYA), and Test-Driven Java Development (http://www.amazon.com/Test-Driven-Java-Development-Viktor-Farcic-ebook/dp/B00YSIM3SC) are three of his print publications.
Key takeaways from the podcast
Even though Kubernetes is regarded as one of the greatest open-source projects, its adoption can be one of the most painful experiences. Before organizations can enjoy the advantages of portability, flexibility, and greater developer productivity, they need to overcome multiple challenges in culture and processes.
The main challenges to Kubernetes adoption are a steep learning curve and insufficient allocation of IT resources. Nearly all enterprises (94 percent) encounter these difficulties while adopting Kubernetes. Businesses must overcome or work around these obstacles to leverage Kubernetes benefits successfully. While challenges remain, the number of production projects employing Kubernetes keeps growing many-fold. Viktor insists on studying the "12-factor apps" guide (you can refer to the documentation) because it captures the best practices one needs for deploying into Kubernetes.
Let's learn the insights from the podcast through the 12-factor application.
List of 12 factors from https://12factor.net/
I. Codebase: One codebase tracked in revision control, many deploys
According to the codebase principle, all software assets related to an application, including source code, manifests, the provisioning script, and configuration settings, are stored in a source code repository accessible to the development, testing, and DevOps engineering teams. The source code repository is also accessible to all automation scripts that are part of the Continuous Integration and Continuous Delivery (CI and CD) processes that are part of the enterprise's Software Development Lifecycle (SDLC).
II. Dependencies: Explicitly declare and isolate dependencies
The principle of dependencies asserts that only code that is unique and relevant to the purpose of the application is stored in source control. External artifacts, such as built Node.js packages or .NET DLLs, should be referenced in a dependencies manifest loaded into memory during development, testing, and production runtime. You want to avoid storing artifacts alongside source code in the source code repository.
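As a rough sketch of explicit dependency declaration, here is a minimal parser for a pinned, requirements-style manifest (the format and names are illustrative; real projects would use a tool like pip, npm, or NuGet rather than hand-rolling this):

```python
def parse_requirements(text):
    """Parse a minimal requirements-style dependency manifest:
    one 'name==version' pin per line; comments and blanks are ignored."""
    deps = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        name, _, version = line.partition("==")
        deps[name.strip()] = version.strip() or None
    return deps

# Runtime dependencies live in a manifest, not vendored into the repo:
manifest = """
# web stack
requests==2.31.0
flask==3.0.0
"""
pins = parse_requirements(manifest)
```

The point is that every dependency is named and pinned in one declarative place, so any environment can reconstruct the exact same set.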
III. Config: Store config in the environment
According to the Configuration Principle, configuration data should be passed into the runtime environment as environment variables or as settings defined in a separate configuration file. While it is permissible to store default settings that can be overridden directly in code in some cases, settings like port numbers, dependency URLs, and state settings like DEBUG should exist independently and be applied upon deployment. Examples of external configuration files include a Java properties file, a Kubernetes manifest file, or a docker-compose.yml file.
The advantage of keeping configuration settings separate from application logic is that you can apply configuration settings based on deployment strategies.
For example, you can have one set of configuration settings for a deployment intended for testing and another set for a deployment intended for production.
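A minimal sketch of this idea in Python (the variable names and defaults are illustrative, not from the podcast): configuration is read from the environment, with overridable defaults, so the same code serves every deployment:

```python
import os

def load_config(env=None):
    """Build runtime configuration from environment variables,
    falling back to defaults that each deployment can override."""
    env = env if env is not None else os.environ
    return {
        "port": int(env.get("PORT", "8080")),
        "db_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "debug": env.get("DEBUG", "false").lower() == "true",
    }

# A production deployment simply supplies different variables;
# the application code does not change:
prod = load_config({"PORT": "80", "DATABASE_URL": "postgres://db/prod"})
dev = load_config({"DEBUG": "true"})  # everything else falls back to defaults
```

In Kubernetes, the same effect is achieved by injecting environment variables from a ConfigMap or Secret into the pod spec.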
IV. Backing service: Treat backing services as attached resources
The backing services principle encourages architects to treat external components as attached resources, such as databases, email servers, message brokers, and independent services that system personnel can provision and maintain. Treating resources as auxiliary services promotes flexibility and efficiency throughout the software development life cycle (SDLC).
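One common way to realize this (a sketch; the helper name is hypothetical) is to identify each backing service purely by a URL, so swapping a local database for a managed one is a configuration change, not a code change:

```python
from urllib.parse import urlparse

def attach_resource(url):
    """Treat a backing service as an attached resource identified
    only by its URL; changing providers means changing the URL."""
    parts = urlparse(url)
    return {
        "scheme": parts.scheme,
        "host": parts.hostname,
        "port": parts.port,
        "name": parts.path.lstrip("/"),
    }

# Local development vs. a managed offering: same code, different URL.
local = attach_resource("postgres://localhost:5432/app")
managed = attach_resource("postgres://db.example.com:5432/app")
```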
V. Build, release, run: Strictly separate build and run stages
It is critical to separate different stages in a DevOps environment. The Build, Release, and Run principle divides the deployment process into three stages that can be instantiated at any time.
During the build stage, the code is retrieved from the source code management system and built or compiled into artifacts stored in an artifact repository such as Docker Hub or a Maven repository. After the code has been built, configuration settings are applied in the release stage. The run stage creates a runtime environment with scripts and a tool like Ansible. The application and its dependencies are installed in the newly created runtime environment.
The building, releasing, and running processes are completely ephemeral. If any artifacts or environments in the pipeline are destroyed, they can be rebuilt from the ground up using assets from the source code repository.
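The three stages can be sketched as pure, composable steps (this is a toy model for illustration; real pipelines use tools like Docker, Helm, and Ansible):

```python
def build(source):
    """Build stage: turn source into an immutable artifact."""
    return {"artifact": f"{source}-compiled"}

def release(artifact, config):
    """Release stage: combine a built artifact with deploy-time config.
    The artifact itself is never modified."""
    return {**artifact, "config": config}

def run(rel):
    """Run stage: start the app from a specific release."""
    return f"running {rel['artifact']} with config {rel['config']}"

art = build("myapp-v1")
rel = release(art, {"env": "production"})
status = run(rel)
```

Because each stage's output is derived only from its inputs, any destroyed artifact or environment can be rebuilt from the repository, exactly as the text describes.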
VI. Processes: Execute the app as one or more stateless processes
The Processes principle, also known as stateless processes, states that a 12-Factor App application will run as a collection of stateless processes. This means that no single process is aware of another's state, and no process is aware of information like session or workflow status. A stateless process facilitates scaling. When a process is stateless, instances can be added and removed to address a specific load at a given point in time. Statelessness prevents unintended consequences because each process runs independently.
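A small sketch of why statelessness enables scaling (the dict stands in for an external store such as Redis; the names are illustrative): because session state lives outside the process, any instance can serve any request:

```python
# A stand-in for an external store (in production: Redis, a database, etc.)
shared_store = {}

def handle_request(process_id, session_id, store):
    """A stateless handler: session data lives in an external store,
    so instances can be added or removed freely."""
    count = store.get(session_id, 0) + 1
    store[session_id] = count
    return {"served_by": process_id, "visit": count}

# Two different process instances serve the same session interchangeably:
r1 = handle_request("proc-1", "sess-42", shared_store)
r2 = handle_request("proc-2", "sess-42", shared_store)
```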
VII. Port binding: Export services via port binding
According to the port binding principle, a service or application is identified to a network by a port number rather than a domain name. The reasoning is that both manual and automated service discovery mechanisms can dynamically assign domain names and associated IP addresses. As a result, using them as a reference point is risky. Exposing a service or application to the network by port number, on the other hand, is more reliable and manageable. At the very least, port forwarding can avoid potential issues caused by a collision between a private network port number assignment and the public use of that same port number by another process.
The port binding principle is based on the idea that using a port number consistently is the best way to expose a process to the network. Port 80, for example, is commonly used for HTTP web servers, while port 443 is the default port number for HTTPS, port 22 is used for SSH, port 3306 is the default port for MySQL, and port 27017 is the default port for MongoDB.
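As a minimal illustration using only the standard library, a self-contained app exports itself via a bound port (here port 0 asks the OS for a free port; a real deployment would read the port number from configuration):

```python
from http.server import HTTPServer, BaseHTTPRequestHandler

class Hello(BaseHTTPRequestHandler):
    """A trivial HTTP service exported purely via port binding."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

# Binding to port 0 lets the OS assign a free port; in production the
# port would come from config (e.g. PORT=8080) and a router/load
# balancer would forward traffic to it.
server = HTTPServer(("127.0.0.1", 0), Hello)
bound_port = server.server_address[1]
server.server_close()
```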
VIII. Concurrency: Scale out via the process model
Concurrency recommends organizing processes by purpose and then separating them so that they can be scaled up and down as needed. Web servers that operate behind a load balancer expose an application to the network. In turn, the load balancer's group of web servers employs business logic in business service processes that run behind their load balancer. If the load on the web servers grows, that group can be scaled up independently to meet the demand. If there is a bottleneck caused by a burden placed on the business service, that layer can be scaled up independently.
Concurrency means that different parts of an application can be scaled up independently to meet the needs of the situation. When concurrency is not supported, architects are forced to scale up the entire application instead.
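The process model can be sketched with independently sized worker pools per process type (a toy illustration; in Kubernetes this corresponds to setting replica counts per Deployment):

```python
from concurrent.futures import ThreadPoolExecutor

# Each process type gets its own independently sized pool, so a
# bottleneck in one tier is addressed without scaling the whole app.
pools = {
    "web": ThreadPoolExecutor(max_workers=4),
    "business": ThreadPoolExecutor(max_workers=2),
}

def handle_web(i):
    return f"web:{i}"

# Only the web tier does this work; the business tier is untouched.
results = list(pools["web"].map(handle_web, range(3)))
for pool in pools.values():
    pool.shutdown()
```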
IX. Disposability: Maximize robustness with fast startup and graceful shutdown
According to the disposability principle, applications should start and stop gracefully. This entails doing all of the necessary "housekeeping" before making an application available to customers. For example, a graceful startup will ensure that all database connections and access to other network resources are operational. Any remaining configuration work has also been completed.
We must practice disposability to ensure that all database connections and other network resources are properly terminated and that all shutdown activity is logged.
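A minimal sketch of a graceful shutdown hook (the resource names are illustrative): the handler is registered for SIGTERM, which is the first signal Kubernetes sends when it disposes of a pod:

```python
import signal

cleanup_log = []

def open_resources():
    """Startup 'housekeeping': make sure connections are operational."""
    cleanup_log.append("db connection opened")

def graceful_shutdown(signum=None, frame=None):
    """Release backing-service connections and log the shutdown so
    the process can be disposed of at any moment."""
    cleanup_log.append("db connection closed")
    cleanup_log.append("shutdown logged")

# Kubernetes sends SIGTERM before SIGKILL; handling it keeps the
# shutdown graceful.
signal.signal(signal.SIGTERM, graceful_shutdown)

open_resources()
graceful_shutdown()  # simulate receiving SIGTERM
```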
X. Dev/prod parity: Keep development, staging, and production as similar as possible
Maintain as much consistency between development, staging, and production as possible.
Rather than deploying a new version to production through a separate, manual path, the same CI/CD process is simply retargeted at Production and proceeds through the familiar Build, Release, and Run pattern.
As you can see, Dev/Prod Parity is very similar to Build, Release, and Run. The key distinction is that Dev/Prod Parity ensures that the deployment process for Production and Development is identical.
XI. Logs: Treat logs as event streams
The Logs principle advocates sending log data as a stream accessible to a wide range of interested consumers. The routing of log data must be kept distinct from the processing of log data. For example, one consumer may be interested only in error data, whereas another may be interested in request or response data. Yet another consumer may wish to retain all log data for event archiving. A further benefit is that the log data is preserved even if the app is terminated.
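A small sketch of the idea (the event fields are illustrative): the app writes structured events to stdout and never routes or stores them itself; downstream consumers filter the stream for what they care about:

```python
import json
import sys

# The app's only job: emit each event to stdout as one line.
# Routing, filtering, and archiving belong to downstream consumers.
events = [
    {"level": "error", "msg": "db timeout"},
    {"level": "info", "msg": "request served"},
]
for event in events:
    sys.stdout.write(json.dumps(event) + "\n")

# One downstream consumer cares only about errors:
errors = [e for e in events if e["level"] == "error"]
# Another might archive everything:
archive = list(events)
```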
XII. Admin processes: Run admin/management tasks as one-off processes
The admin processes principle states that administrative and maintenance tasks, such as database migrations or one-time scripts, should run as one-off processes in the same environment as the app's regular long-running processes, using the same codebase and configuration.
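A sketch of a one-off admin task (the function names are hypothetical): the migration reuses the app's own config loader instead of maintaining a separate, drift-prone script:

```python
def load_config(env):
    """The same config loader the app itself uses."""
    return {"db_url": env.get("DATABASE_URL", "sqlite:///dev.db")}

def migrate(config):
    """A one-off admin task run against the same environment and
    configuration as the app's regular processes."""
    return f"migrating schema on {config['db_url']}"

# The one-off process is launched with the same environment as the app:
result = migrate(load_config({"DATABASE_URL": "postgres://db/prod"}))
```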
In this podcast, Viktor also discusses how the cloud adds its own complexity: the anonymity of servers makes a vast system running Kubernetes harder to reason about. A lack of expertise, low emphasis on training professionals, and several other challenges can lead to inefficient CI/CD pipelines in Kubernetes. Teams end up automating the wrong processes, writing flawed manifests, and configuring CI/CD incorrectly.
Making one's way across the Kubernetes landscape can be confusing for new users, and the transition to Kubernetes can become complicated and challenging to manage. Kubernetes has a steep learning curve, and its challenges generally arise in security, networking, deployment, scaling, and vendor support. With different usage and functional patterns, the challenges can differ for particular users.
The lesson from this podcast: before moving to Kubernetes, do a quick evaluation of "Do you really need Kubernetes, and do you need it now?" Don't adopt it just because it's good and the cloud service provider offers it.
Consider the hidden cost of spending extra hours building functionality on your own rather than using a managed offering. When you have production issues and need to solve them as soon as possible, the complexity of the services and the anonymity of the servers are disadvantages. These issues can be resolved by relying on platforms that provide Kubernetes deployments without building them from scratch. One such platform is Devtron, where developers do not have to learn any additional commands to deploy to Kubernetes in production from day 0.
Enjoy this Podcast
Visit Devtron’s GitHub page to begin the installation. The well-maintained documentation makes installation and troubleshooting a breeze. You can reach out to the team on their dedicated Discord Community Server for any queries.