
GCP IAM Binding using Temporal and GoLang (Gin Framework)

Gin is a web framework written in Go (GoLang). It is a high-performance micro-framework for building web applications, and it lets you write middleware that can be plugged into one or more request handlers or groups of request handlers.

By the end of this tutorial, you will have a small Gin API that triggers a Temporal Workflow to add an IAM binding (project, user, and role) on Google Cloud.

Prerequisites

For this tutorial you will need GoLang, Temporal, Docker, and Postman installed on your machine. Note: if you don't have Postman, any other tool for testing API endpoints will do. The packages we are going to use are Gin, the Temporal Go SDK, and the Google Cloud Go SDK (for the actual IAM binding).

Goroutine

A Goroutine is a lightweight thread in Golang. All programs executed by Golang run on Goroutines: the main function itself runs on one, so every Go program has at least one Goroutine. You start a Goroutine by prefixing a function call with the go keyword.

Temporal

A Temporal Application is a set of Temporal Workflow Executions. Each Workflow Execution has exclusive access to its local state, executes concurrently with all other Workflow Executions, and communicates with other Workflow Executions and the environment via message passing. A Temporal Application can consist of millions to billions of Workflow Executions. Workflow Executions are lightweight components: a Workflow Execution consumes few compute resources, and if it is suspended, such as when it is in a waiting state, it consumes no compute resources at all.

main.go

We run the Temporal worker on its own goroutine to initialize the worker, and start our Gin server in parallel.

Temporal Worker

In day-to-day conversation, the term Worker can mean a Worker Program, a Worker Process, or a Worker Entity; the Temporal documentation is explicit and differentiates between them.

worker/worker.go

The IamBindingGoogle Workflow and the AddIAMBinding Activity are registered in the Worker. A Workflow Definition is the source for an instance of a Workflow Execution, while a Workflow Function is the source for an instance of a Workflow Function Execution. The purpose of an Activity is to execute a single, well-defined action (either short or long running), such as calling another service, transcoding a media file, or sending an email.

worker/iam_model.go

This defines the schema of the IAM inputs.

worker/base.go

The LoadData function unmarshals the data received in the API request.

worker/workflowsvc.go

This is the service layer of the Workflow: an interface plus a type that implements the methods defined on that interface.

worker/workflow.go

A Workflow Execution effectively executes once to completion, while a Workflow Function Execution occurs many times during the life of a Workflow Execution. The IamBindingGoogle Workflow takes the workflow context and the iamDetails value, which carries the google_project_id, the user_name, and the role to be granted in GCP. Those details are sent to an Activity function that performs the IAM binding. The ExecuteActivity call is configured with Activity options such as StartToCloseTimeout, ScheduleToCloseTimeout, a retry policy, and the TaskQueue. Each Activity function can return the output defined for that Activity.

worker/activity.go

The Google Cloud Go SDK is used here for the actual IAM binding.

Finally, we need the Temporal setup using Docker (.local/quickstart.yml). Export the environment variables in the terminal, then run the docker-compose file to start Temporal. Perfect!
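The post's original code listings did not survive extraction, so the following is a minimal sketch rather than the author's exact code. It assumes a task queue name (IAM_TASK_QUEUE), an IamDetails struct matching the fields described above, the workflow and activity names taken from the prose (IamBindingGoogle, AddIAMBinding), and a placeholder route path; the Temporal calls use the go.temporal.io/sdk packages.

```go
package main

import (
	"context"
	"log"
	"net/http"
	"time"

	"github.com/gin-gonic/gin"
	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/temporal"
	"go.temporal.io/sdk/worker"
	"go.temporal.io/sdk/workflow"
)

// IamDetails mirrors the fields described in worker/iam_model.go.
type IamDetails struct {
	GoogleProjectID string `json:"google_project_id"`
	UserName        string `json:"user_name"`
	Role            string `json:"role"`
}

// IamBindingGoogle is the Workflow: it sets the activity options and calls the activity.
func IamBindingGoogle(ctx workflow.Context, details IamDetails) (string, error) {
	ao := workflow.ActivityOptions{
		TaskQueue:              "IAM_TASK_QUEUE",
		StartToCloseTimeout:    time.Minute,
		ScheduleToCloseTimeout: 5 * time.Minute,
		RetryPolicy:            &temporal.RetryPolicy{MaximumAttempts: 3},
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	var result string
	err := workflow.ExecuteActivity(ctx, AddIAMBinding, details).Get(ctx, &result)
	return result, err
}

// AddIAMBinding is the Activity; the real version calls the Google Cloud Go SDK
// to add details.UserName to details.Role on details.GoogleProjectID.
func AddIAMBinding(ctx context.Context, details IamDetails) (string, error) {
	return "iam binding added", nil
}

func main() {
	c, err := client.Dial(client.Options{}) // defaults to localhost:7233
	if err != nil {
		log.Fatalln("unable to create Temporal client", err)
	}
	defer c.Close()

	// Run the Temporal worker on a goroutine so the Gin server can start in parallel.
	go func() {
		w := worker.New(c, "IAM_TASK_QUEUE", worker.Options{})
		w.RegisterWorkflow(IamBindingGoogle)
		w.RegisterActivity(AddIAMBinding)
		if err := w.Run(worker.InterruptCh()); err != nil {
			log.Fatalln("worker stopped", err)
		}
	}()

	r := gin.Default()
	r.POST("/iambinding", func(gc *gin.Context) {
		var details IamDetails
		if err := gc.ShouldBindJSON(&details); err != nil {
			gc.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			return
		}
		we, err := c.ExecuteWorkflow(gc.Request.Context(), client.StartWorkflowOptions{
			TaskQueue: "IAM_TASK_QUEUE",
		}, IamBindingGoogle, details)
		if err != nil {
			gc.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			return
		}
		gc.JSON(http.StatusOK, gin.H{"workflow_id": we.GetID(), "run_id": we.GetRunID()})
	})
	r.Run(":8080")
}
```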
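The compose file path comes from the article; the environment variables it mentions are omitted here because their names are not preserved in the excerpt. Once the stack and the Gin server (next step) are up, the endpoint can be exercised with a request like the one below; the port, path, and JSON keys follow the assumptions in the sketch above.

```sh
# Start the Temporal stack defined in the article's compose file
docker-compose -f .local/quickstart.yml up -d

# Trigger the workflow through the Gin API (illustrative payload)
curl -X POST http://localhost:8080/iambinding \
  -H "Content-Type: application/json" \
  -d '{"google_project_id":"my-project","user_name":"user@example.com","role":"roles/viewer"}'
```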
We are all set now. Let's run this project. You can see that a Gin Engine instance has been created, the APIs are running, and the Temporal worker has been started on its own goroutine. The Temporal UI is also available on localhost:8088. Let's hit our POST API: the Workflow completes and the IAM binding is applied in GCP as well.

If you are looking for an easy way to manage and automate your cloud infrastructure, Sailor Cloud is a good option to consider. To learn more about Sailor Cloud, please visit the Sailor Cloud website: https://www.sailorcloud.io/

The source is available here: git clone github.com/venkateshsuresh/temporal-iamBind..

I hope this article helped you. Thanks for reading and stay tuned!


Velero: The Ultimate Guide to Kubernetes Backup and Disaster Recovery

In the ever-evolving landscape of Kubernetes deployments, safeguarding data integrity and ensuring resilience against unforeseen disruptions is of paramount importance. Velero, a robust open-source tool, addresses this crucial need by providing a comprehensive solution for backup, restore, and disaster recovery of Kubernetes clusters. This user-friendly tool streamlines the process of protecting valuable data and maintaining business continuity in the face of unexpected events.

Understanding Velero: The Guardian of Kubernetes Data

Velero integrates seamlessly with Kubernetes, empowering users to create, manage, and restore backups of their cluster data. It operates by securely storing backups in an object storage service, such as Amazon S3 or Google Cloud Storage, ensuring data durability and accessibility. Velero's core functionalities encompass:

- Backup creation: Velero creates comprehensive backups of Kubernetes resources, including pods, deployments, services, and persistent volumes.
- Backup storage: Velero securely stores backups in object storage, ensuring data persistence and availability.
- Backup management: Velero provides a user-friendly interface for managing backups, including viewing, deleting, and restoring them.
- Restore capabilities: Velero enables seamless restoration of backups, allowing users to recover from data loss or cluster failures.

Velero Installation: Embracing Data Protection

Velero can be installed with Helm, a package manager for Kubernetes; the process boils down to adding the Velero Helm chart and deploying it into the desired namespace. Once installed, Velero is ready to safeguard your Kubernetes data.

Establishing Connections: Bridging Cluster and Storage

Velero must be configured to connect the Kubernetes cluster to the object storage service where backups will be stored. This configuration involves specifying the bucket name, the credential file, and the region of the object storage service.

Automated Backups: Scheduling Data Protection

Velero lets users automate backup creation through a scheduling mechanism. Schedules define regular backups so that data is consistently protected against loss.

Disaster Recovery: A Lifeline for Critical Data

Velero proves invaluable in disaster recovery scenarios. In the event of a cluster failure or data loss, Velero's restore capabilities enable users to quickly return their cluster to a previous state, minimizing downtime and business disruption.

Velero's Benefits: Safeguarding Data and Business Continuity

Velero offers a number of benefits that make it a compelling choice for Kubernetes data protection:

- Data security: backups are stored in object storage with encryption and controlled access.
- Simplified backup management: a user-friendly interface streamlines the backup process.
- Efficient restore operations: restore capabilities enable rapid recovery from data loss or cluster failures.
- Automated backup scheduling: the scheduling feature automates backup creation, ensuring consistent data protection.
- Disaster recovery readiness: seamless restoration of backups underpins recovery plans.

Velero in Action: Practical Examples

To illustrate Velero's practical applications, consider the following scenarios (a consolidated command sketch appears at the end of this article):

- Backing up an entire cluster: velero backup create t2
- Backing up a specific namespace: velero backup create t2 --include-namespaces <namespace_name>
- Restoring from a backup: velero restore create <restore_name> --from-backup <backup_name>

1. Velero: A Must-Have for Kubernetes Data Protection

Velero has emerged as an indispensable tool for Kubernetes data protection, empowering users to safeguard their valuable data and ensure business continuity in the face of unexpected challenges. Its intuitive interface, powerful backup and restore capabilities, and automated scheduling make it an ideal solution for organizations of all sizes. By embracing Velero, organizations can confidently navigate the dynamic world of Kubernetes, knowing that their data is secure and readily recoverable.

2. Velero: Beyond the Basics

Velero offers a range of advanced features that extend its capabilities beyond basic backup and restore:

- Plugin support: a growing ecosystem of plugins extends Velero's functionality, for example for backing up specific types of data or integrating with cloud-based backup services.
- Custom Resource Definitions (CRDs): Velero uses CRDs to define and manage backup and restore resources, providing a structured and consistent approach to data protection.
- Webhooks: Velero supports webhooks, enabling integration with external systems that trigger actions based on backup and restore events.

3. Integrating Velero into CI/CD Pipelines

Velero can be integrated into CI/CD pipelines for automated backup creation and restoration. This lets organizations build data protection into their development and deployment processes, ensuring that data stays protected even during frequent code changes and deployments.

4. Velero for Multi-Cluster Environments

Velero can manage backups and restores across multiple Kubernetes clusters. This capability is particularly beneficial for organizations that operate clusters in different environments, such as development, staging, and production; centralized management simplifies backup and restore operations across these disparate environments.

5. Velero's Role in Data Governance

Velero plays a crucial role in data governance by providing a framework for defining and enforcing data protection policies. Organizations can use Velero to establish retention policies for backups, ensuring that data is stored for a specified period and then automatically deleted to comply with regulatory requirements or organizational policies.

6. Velero in the Cloud-Native Landscape

Velero has emerged as a leading solution for data protection in the cloud-native landscape, gaining widespread adoption among organizations that embrace Kubernetes and containerized applications. Its open-source nature, flexibility, and integration with cloud-based object storage services make it a compelling choice for organizations of all sizes seeking to safeguard critical data.

7. Safeguarding the Future of Kubernetes Data

Velero has revolutionized Kubernetes data protection, providing a comprehensive and user-friendly solution for backing up, restoring, and recovering valuable data. Its integration with Kubernetes, cloud-based object storage services, and CI/CD pipelines makes it an invaluable tool for organizations operating in the dynamic realm of containerized applications.
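For reference, here is the consolidated command sketch mentioned above. It is hedged rather than copied from the original post: the provider, bucket name, credentials file, region, plugin version, schedule, and TTL are illustrative placeholders, while the velero subcommands and flags are standard CLI usage.

```sh
# Install Velero and point it at an object storage bucket (values are placeholders)
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.8.0 \
  --bucket my-velero-backups \
  --secret-file ./credentials-velero \
  --backup-location-config region=us-east-1

# One-off backups
velero backup create t2
velero backup create t2 --include-namespaces my-namespace

# Scheduled backups (daily at 02:00) retained for 30 days
velero schedule create daily-backup --schedule "0 2 * * *" --ttl 720h

# Restore after data loss or a cluster failure
velero restore create my-restore --from-backup t2
```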
As Kubernetes continues to evolve, Velero is poised to remain at the forefront of data protection strategies, ensuring that organizations can confidently navigate the ever-changing cloud-native landscape. If you are looking for an easy way to manage and automate your cloud infrastructure, Sailor Cloud is a good option to consider. To learn more about Sailor Cloud, please visit the Sailor Cloud website: https://www.sailorcloud.io/


Commanding Superiority: Craft Your Own Dynamic CLI using GoLang and Cobra Mastery

In the dynamic realm of programming languages, leveraging CLI development with GoLang and Cobra can significantly boost productivity. In this comprehensive tutorial, we will delve into the intricacies of building a sophisticated Command-Line Interface (CLI) in Go using Cobra, a powerful library used by prominent projects such as Kubernetes, Hugo, and the GitHub CLI. Our goal is to create a CLI command for Git that talks to GitHub's RESTful APIs and lists all repositories associated with a specific account.

The Essence of CLI using GoLang and Cobra

Go's unique characteristics: Go is more than just a programming language; it's a paradigm shift. Expressive, concise, clean, and efficient, Go strikes a delicate balance between readability and performance. Its concurrency mechanisms are tailored for multicore and networked machines, while its type system facilitates flexible and modular program construction. Go compiles quickly to machine code, retains the convenience of garbage collection, and offers the power of runtime reflection, all within a fast, statically typed, compiled language that feels remarkably dynamic.

Cobra unleashed: At the heart of many robust Go projects, Cobra is a library for crafting powerful, modern CLI applications. Widely employed in projects like Kubernetes, Hugo, and the GitHub CLI, Cobra provides a simple yet potent interface, akin to popular tools like Git and the Go tools. Its feature set includes easy subcommand-based CLIs, fully POSIX-compliant flags, nested subcommands, intelligent suggestions, automatic help generation, shell autocompletion, and much more. Cobra also supports custom help and usage text and integrates with Viper for 12-factor apps. As we build our Git-centric CLI, Cobra will be our trusty companion.

Prerequisites and Package Overview of CLI using GoLang and Cobra

1. Setting the stage: Before diving into the nitty-gritty of our CLI, ensure that GoLang is installed on your machine. A working knowledge of the Go programming language is assumed.

2. Key packages: Our toolkit revolves around one crucial package, github.com/spf13/cobra, which forms the backbone of our CLI and provides the scaffolding for a robust, feature-rich interface.

3. Building blocks of the CLI:

Initializing the CLI: Our journey commences with the main.go file, the entry point of our CLI:

```go
package main

import (
	"go-cli-for-git/cmd"
)

func main() {
	cmd.Execute()
}
```

This sets the stage for the CLI's execution, with the main function triggering cmd.Execute().

Command execution: The cmd/execute.go file is pivotal, serving as the orchestrator of our CLI's operations:

```go
package cmd

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

var rootCmd = &cobra.Command{
	Use:   "cli",
	Short: "git cli execution using cobra to get all the repositories and their clone URL",
}

func Execute() {
	if err := rootCmd.Execute(); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}
```

Here, rootCmd is initialized with essential metadata, and the Execute function runs the command tree while handling any errors that may arise.
Core functionality with Cobra: The essence of our CLI is captured in cmd/base.go, where we define the core functionality of our command using the capabilities provided by Cobra. The original post elides the HTTP plumbing; the version below fills that gap with plain net/http calls so the listing is runnable, and reads the flags from the executing command:

```go
package cmd

import (
	b64 "encoding/base64"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"

	"github.com/spf13/cobra"
)

var addCmd = &cobra.Command{
	Use:   "get",
	Short: "get repo details",
	Long:  `Get Repo information using the Cobra Command`,
	Run: func(cmd *cobra.Command, args []string) {
		// Persistent flags defined on rootCmd are inherited by this command.
		username, _ := cmd.Flags().GetString("username")
		password, _ := cmd.Flags().GetString("password")
		auth := fmt.Sprintf("%s:%s", username, password)
		authEncode := b64.StdEncoding.EncodeToString([]byte(auth))
		url := "https://api.github.com/user/repos"
		method := "GET"

		// HTTP request setup and response handling (reconstructed; elided in the original post)
		req, err := http.NewRequest(method, url, nil)
		if err != nil {
			fmt.Println(err)
			return
		}
		req.Header.Add("Authorization", "Basic "+authEncode)

		res, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer res.Body.Close()

		body, _ := ioutil.ReadAll(res.Body)
		var response []interface{}
		json.Unmarshal(body, &response)

		for _, repoDetails := range response {
			repo := repoDetails.(map[string]interface{})
			fmt.Println(" name: ", repo["name"], " private: ", repo["private"], "clone_url: ", repo["clone_url"])
		}
	},
}

func init() {
	rootCmd.AddCommand(addCmd)
	rootCmd.PersistentFlags().StringP("username", "u", "", "the username of git")
	rootCmd.PersistentFlags().StringP("password", "p", "", "the access token of the git")
}
```

This file encapsulates the orchestration of GitHub API requests, decoding the response, and presenting a well-structured output of repository details.

Unveiling GitHub RESTful APIs with Go

Authentication and API interaction: The core of our CLI's functionality lies in communicating with GitHub's RESTful APIs. In cmd/base.go we extract the user credentials, Base64-encode them, and construct an HTTP request to fetch repository details from the GitHub API endpoint https://api.github.com/user/repos. The response is then parsed and formatted for presentation.

Crafting a Robust CLI Experience

Command initialization and flags: In cmd/base.go, the init function shown above sets up the CLI commands and their associated flags. It registers the addCmd command, responsible for fetching repository details, and defines two persistent flags, -u and -p, for the GitHub username and access token. This structure ensures a seamless and intuitive user experience while interacting with our CLI.

Building and Executing the CLI

Building the binary: To transform our Go code into a usable binary, execute:

```sh
go build -o git-cli
```

Running the CLI: Now, let's put our CLI to the test:

```sh
./git-cli get -u <username> -p <access-token>
# or
./git-cli get --username <username> --password <access-token>
```

Watch the Cobra command execute and your GitHub repositories unfold in the terminal.

In this tutorial, we've embarked on a journey to master Go and build a robust Command-Line Interface using GoLang and Cobra. From the foundational aspects of Go's uniqueness to the details of crafting a feature-rich CLI with Cobra, the GitHub RESTful API integration showcases the real-world applicability of the tool: a careful orchestration of commands, flags, and API requests that results in a comprehensive way to manage GitHub repositories from the command line.
As you reflect on this tutorial, you've not only built a CLI but also gained insights into Go's capabilities and the artistry of crafting elegant and efficient tools. The combination of GoLang and Cobra has opened the door to building more of your own command-line tooling.


Multi-Cloud Secrets Management: Streamlined Password Rotation with Terraform

Securing Secrets Management for Hybrid and Multi-Cloud Infrastructure

As infrastructure and application environments grow more complex, spanning multiple clouds and on-prem data centers, managing access credentials and secrets poses an escalating security challenge. Administrators need to track hundreds of API keys, database passwords, SSH keys, and certificates across heterogeneous platforms while ensuring encryption, access controls, and routine rotation. Native cloud provider secrets tools like AWS Secrets Manager and Azure Key Vault simplify management within individual cloud platforms, but adopting multi-cloud or hybrid infrastructure requires consistent abstractions. This is where Infrastructure-as-Code approaches provide compelling value.

The Multi-Cloud Secret Management Dilemma

Early approaches to securing infrastructure credentials involved embedding passwords directly in scripts or widely reusing identical shared secrets across teams to simplify administration. These practices pose unacceptable risks, especially for externally facing infrastructure components. As cloud platforms gained dominance, dedicated secrets management services emerged from AWS, Azure, and GCP: AWS Secrets Manager, Azure Key Vault, and GCP Secret Manager. While they solve immediate problems, broader cloud adoption has also exacerbated longer-term complexity:

- No central visibility or control: with no unified pane of glass into secrets across hybrid or multi-cloud environments, governance becomes fragmented across disparate point tools. This leads to credential sprawl, with keys duplicated across platforms and security teams lacking insight into which assets need rotation.
- Policy inconsistencies: individual administrators end up defining localized conventions per platform rather than enforcing global enterprise standards. One team may rotate IAM keys every 2 days while another resets VM admin passwords annually. Partial visibility furthers policy drift.
- Challenging auditability: producing reports that show all certificates nearing expiry or accounts with overdue rotations involves heavy lifting. Disjointed management interfaces make unified views of compliance health difficult without custom engineering.
- Reinforcing vendor lock-in: tight coupling of secrets to specific cloud vendor capabilities through proprietary interfaces hinders workload portability. Organizations lose leverage to negotiate pricing or adopt best-of-breed infrastructure services across clouds, and migrating applications becomes exponentially harder.

This dilemma arises from securing infrastructure secrets in isolation from the resources they connect, while the workloads targeted for deployment may span environments. Cloud vendor secrets managers focus narrowly on their individual platforms rather than on business application requirements. A fundamental paradigm shift is needed in multi-cloud secrets orchestration, one rooted in abstraction.

The Path Forward: Unified Secrets Abstraction

The Infrastructure-as-Code paradigm provides a compelling way forward. Expanding the cloud-agnostic infrastructure automation approach pioneered by Terraform to also orchestrate secrets management offers an enterprise-class solution.
Some key ways this addresses existing gaps:

- Unified identity and access policies, not fragmented across cloud-native interfaces
- Global secret rotation rules tied to central corporate security standards
- Holistic compliance validation against frameworks like SOC 2
- Reduced coupling to any one platform through compatibility across all major cloud providers

Let's analyze how Terraform addresses existing secrets management dilemmas in multi-cloud environments.

Orchestrating Secrets with Infrastructure-as-Code

Infrastructure-as-Code (IaC) brings codification, reusable components, and policy-driven management to provisioning and configuration. Expanding this approach to also orchestrate secrets provides similar advantages:

- Unified identity and access: federate administrators from central auth providers rather than per-platform IAM inconsistencies.
- Simplified secret rotations: whole-stack refreshes based on central policy rather than reconfiguring each platform individually.
- Compliance reporting: continually assess posture against frameworks like SOC 2 and ISO 27001.
- Abstraction to prevent lock-in: reduce coupling to any one platform's proprietary interfaces.

Here is sample Terraform code to demonstrate IaC secrets orchestration (the snippet is simplified; required azurerm_key_vault arguments such as location, resource group, tenant ID, and SKU are omitted for brevity):

```hcl
# Rotation trigger: forces a new password every 45 days
resource "time_rotating" "rotate_45d" {
  rotation_days = 45
}

# Azure Redis Cache rotated password
resource "random_password" "redis_pass" {
  length  = 16
  special = false
  keepers = {
    rotate = time_rotating.rotate_45d.id
  }
}

# Azure Key Vault
resource "azurerm_key_vault" "vault" {
  name = "RedisVault"
  # location, resource_group_name, tenant_id and sku_name omitted for brevity
}

resource "azurerm_key_vault_secret" "redis_secret" {
  name         = "RedisPassword"
  value        = random_password.redis_pass.result
  key_vault_id = azurerm_key_vault.vault.id
}
```

This allows centralized credential management across Azure Cache instances deployed in multiple regions and cloud platforms, rather than eventual consistency across fragmented tool sets.

Enterprise-Grade Secrets Management

Expanding on these patterns with reusable libraries allows organizations to industrialize secrets management, fulfilling complex compliance, security, and audit requirements while retaining flexibility across diverse infrastructure:

- Broad platform support: orchestrate secrets consistently across major public clouds, private data centers, VM, container, and serverless platforms.
- Automated rotations: ensure credentials like keys and passwords are refreshed globally on schedules rather than through risky manual processes.
- Compliance validation: continually assess secret configurations against frameworks like PCI DSS, SOC 2, and ISO 27001.
- Change tracking: provide full audit trails for secret access, rotation, and modification.

In essence, applying fundamentals pioneered in policy-as-code, GitOps, and compliance-as-code for application security to infrastructure management drives the next evolution in multi-cloud secrets orchestration, one based on unified abstractions rather than fragmented per-platform tool sets.
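To underline the cross-cloud abstraction argument, here is a hedged sketch (not from the original post) of the same rotation pattern pointed at AWS Secrets Manager. The resource names and the secret name are illustrative, and it reuses the time_rotating trigger defined above.

```hcl
# The same random_password + time_rotating pattern, stored in AWS Secrets Manager
resource "random_password" "db_pass" {
  length  = 16
  special = false
  keepers = {
    rotate = time_rotating.rotate_45d.id
  }
}

resource "aws_secretsmanager_secret" "db_secret" {
  name = "DatabasePassword"
}

resource "aws_secretsmanager_secret_version" "db_secret_value" {
  secret_id     = aws_secretsmanager_secret.db_secret.id
  secret_string = random_password.db_pass.result
}
```

Because the rotation trigger and the generated password live in provider-agnostic resources, only the final "store the secret" resource changes per cloud, which is exactly the unified abstraction this approach argues for.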
If you are looking for an easy way to manage and automate your cloud infrastructure, Sailor Cloud is a good option to consider. To learn more about Sailor Cloud, please visit the Sailor Cloud website: https://www.sailorcloud.io/

External resources:

1. Terraform Blog: Managing Secrets with Terraform (HashiCorp): https://www.hashicorp.com/blog/managing-secrets-with-terraform
2. Tutorial: Manage Azure Key Vault Secrets with Terraform: https://learn.hashicorp.com/tutorials/terraform/azure-key-vault-secret?in=terraform/secrets-management
3. Security at Scale: Secrets Management on AWS using Terraform: https://www.anchore.com/blog/aws-secrets-management-at-scale-with-terraform/
4. Terraform Rotation Policies for Secrets Management: https://www.terraform.io/cli/commands/providers/template#example-rotation-secret


AKS Security Practices | Access Control using RBAC with Terraform Code | Part 1

As organizations adopt Azure Kubernetes Service (AKS) for running containerized applications, securing access to clusters becomes paramount. AKS provides security controls such as role-based access control (RBAC) to restrict unauthorized access. When an AKS cluster is shared by developers from multiple product teams, access to the Kubernetes API server has to be carefully managed; at the same time, access should not be overly restrictive, especially with respect to day-to-day Kubernetes work.

In this AKS series, we'll look at different operational solutions for AKS. This first part helps you define a workflow for user access control to the API server. Example Terraform code is available for all configurations; you can find the source in this repository: github.com/aravindarc/aks-access-control

1. AKS Security Practices: Cluster Creation

Caution: this code creates a public cluster with default network configurations. When you create a cluster for real use, always create a private cluster with proper network configurations.

The azure_active_directory_role_based_access_control block manages the cluster's RBAC; its admin_group_object_ids key configures the ops group with admin access.

Info: whether it is admin access or restricted access, all principals have to be granted the Azure Kubernetes Service Cluster User Role. Only then will users be able to list and get credentials for the cluster.

2. AKS Security Practices: Groups Creation

We'll create one AD group per Kubernetes namespace; users of the group will be given access to one particular namespace in the AKS cluster. Once the group is created, we create a Role and RoleBinding with the AD group as the subject. It is a good convention to use the same name for the AD group and the Kubernetes namespace.

3. AKS Security Practices: K8S Manifests

We have to create a Role and RoleBinding in the namespace. This manifest cannot be added to the application-specific Helm chart; it has to be applied with admin rights, and I have used Helm to install the manifests.

Tip: you can use Terraform outputs to expose the group names and their object IDs, and pass them to the helm command with the --set flag for a seamless integration. Here I am simply hard-coding the namespaces in values.yaml.

With this in place, members of a team's AD group can work inside their own namespace, but when they try to access something in the default namespace they are blocked. A hedged Terraform sketch of the pieces described above follows at the end of this section.

If you are looking for an easy way to manage and automate your cloud infrastructure, Sailor Cloud is a good option to consider. To learn more about Sailor Cloud, please visit the Sailor Cloud website: https://www.sailorcloud.io/
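For concreteness, here is the minimal Terraform sketch mentioned above. It is not the code from the linked repository: group names and the namespace are illustrative, several required azurerm_kubernetes_cluster arguments are omitted, and the argument names follow the azurerm 3.x and azuread providers.

```hcl
# AD group for the ops (cluster admin) team and one group per team namespace
resource "azuread_group" "ops" {
  display_name     = "aks-ops"
  security_enabled = true
}

resource "azuread_group" "team_a" {
  display_name     = "team-a" # same name as the K8S namespace, by convention
  security_enabled = true
}

# AKS cluster with Azure AD-backed RBAC; most required arguments are omitted for brevity
resource "azurerm_kubernetes_cluster" "aks" {
  name = "demo-aks"
  # location, resource_group_name, dns_prefix, default_node_pool, identity ... omitted

  azure_active_directory_role_based_access_control {
    managed                = true
    admin_group_object_ids = [azuread_group.ops.object_id]
  }
}

# Every principal needs the Cluster User Role to list and fetch cluster credentials
resource "azurerm_role_assignment" "team_a_cluster_user" {
  scope                = azurerm_kubernetes_cluster.aks.id
  role_definition_name = "Azure Kubernetes Service Cluster User Role"
  principal_id         = azuread_group.team_a.object_id
}
```

The namespace-scoped Role and RoleBinding that tie azuread_group.team_a to its namespace are then applied as plain Kubernetes manifests (via Helm, as described above), with the AD group's object ID as the subject name.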
