# SOME DESCRIPTIVE TITLE. # Copyright (C) 2016-2023, OpenStack Foundation # This file is distributed under the same license as the openstack-helm package. # FIRST AUTHOR , YEAR. # #, fuzzy msgid "" msgstr "" "Project-Id-Version: openstack-helm 0.1.1.dev4021\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2023-10-27 22:03+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" #: ../../source/install/before_deployment.rst:2 msgid "Before deployment" msgstr "" #: ../../source/install/before_deployment.rst:4 msgid "" "Before proceeding with the steps outlined in the following sections and " "executing the actions detailed therein, it is imperative that you clone the " "essential Git repositories containing all the required Helm charts, " "deployment scripts, and Ansible roles. This preliminary step will ensure " "that you have access to the necessary assets for a seamless deployment " "process." msgstr "" #: ../../source/install/before_deployment.rst:20 msgid "" "All further steps assume these two repositories are cloned into the `~/osh` " "directory." msgstr "" #: ../../source/install/before_deployment.rst:23 msgid "" "Also, before deploying the OpenStack cluster, you have to specify the " "OpenStack and operating system versions that you would like to use for " "deployment. To do this, export the following environment variables" msgstr "" #: ../../source/install/before_deployment.rst:34 msgid "The list of supported versions can be found :doc:`here `." msgstr "" #: ../../source/install/deploy_ceph.rst:2 msgid "Deploy Ceph" msgstr "" #: ../../source/install/deploy_ceph.rst:4 msgid "" "Ceph is a highly scalable and fault-tolerant distributed storage system " "designed to store vast amounts of data across a cluster of commodity " "hardware. 
It offers object storage, block storage, and file storage " "capabilities, making it a versatile solution for various storage needs. " "Ceph's architecture is based on a distributed object store, where data is " "divided into objects, each with its unique identifier, and distributed " "across multiple storage nodes. It uses the CRUSH algorithm to ensure data " "resilience and efficient data placement, even as the cluster scales. Ceph is " "widely used in cloud computing environments and provides a cost-effective " "and flexible storage solution for organizations managing large volumes of " "data." msgstr "" #: ../../source/install/deploy_ceph.rst:16 msgid "" "Kubernetes introduced the CSI standard to allow storage providers like Ceph " "to implement their drivers as plugins. Kubernetes can use the CSI driver for " "Ceph to provision and manage volumes directly. By means of CSI, stateful " "applications deployed on top of Kubernetes can use Ceph to store their data." msgstr "" #: ../../source/install/deploy_ceph.rst:22 msgid "" "At the same time, Ceph provides the RBD API, which applications can utilize " "to create and mount block devices distributed across the Ceph cluster. The " "OpenStack Cinder service utilizes this Ceph capability to offer persistent " "block devices to virtual machines managed by OpenStack Nova." msgstr "" #: ../../source/install/deploy_ceph.rst:28 msgid "" "The recommended way to deploy Ceph on top of Kubernetes is by means of the " "`Rook`_ operator. Rook provides Helm charts to deploy the operator itself, " "which extends the Kubernetes API by adding CRDs that enable managing Ceph " "clusters via Kubernetes custom objects. For details, please refer to the " "`Rook`_ documentation." msgstr "" #: ../../source/install/deploy_ceph.rst:34 msgid "" "To deploy the Rook Ceph operator and a Ceph cluster, you can use the script " "`ceph.sh`_. 
Then, to generate the client secrets to interface with the Ceph " "RBD API, use the script `ceph_secrets.sh`" msgstr "" #: ../../source/install/deploy_ceph.rst:45 msgid "" "Please keep in mind that these are the deployment scripts that we use for " "testing. For example, we place Ceph OSD data objects on loop devices, which " "are slow and not recommended for production use." msgstr "" #: ../../source/install/deploy_ingress_controller.rst:2 msgid "Deploy ingress controller" msgstr "" #: ../../source/install/deploy_ingress_controller.rst:4 msgid "" "Deploying an ingress controller when deploying OpenStack on Kubernetes is " "essential to ensure proper external access and SSL termination for your " "OpenStack services." msgstr "" #: ../../source/install/deploy_ingress_controller.rst:8 msgid "" "In the OpenStack-Helm project, we utilize multiple ingress controllers to " "optimize traffic routing. Specifically, we deploy three independent " "instances of the Nginx ingress controller for distinct purposes:" msgstr "" #: ../../source/install/deploy_ingress_controller.rst:13 msgid "External Traffic Routing" msgstr "" #: ../../source/install/deploy_ingress_controller.rst:15 msgid "``Namespace``: kube-system" msgstr "" #: ../../source/install/deploy_ingress_controller.rst:16 msgid "" "``Functionality``: This instance monitors ingress objects across all " "namespaces, primarily focusing on routing external traffic into the " "OpenStack environment." msgstr "" #: ../../source/install/deploy_ingress_controller.rst:21 msgid "Internal Traffic Routing within OpenStack" msgstr "" #: ../../source/install/deploy_ingress_controller.rst:23 msgid "``Namespace``: openstack" msgstr "" #: ../../source/install/deploy_ingress_controller.rst:24 msgid "" "``Functionality``: Designed to handle traffic exclusively within the " "OpenStack namespace, this instance plays a crucial role in SSL termination " "for enhanced security among OpenStack services." 
msgstr "" #: ../../source/install/deploy_ingress_controller.rst:29 msgid "Traffic Routing to Ceph Rados Gateway Service" msgstr "" #: ../../source/install/deploy_ingress_controller.rst:31 msgid "``Namespace``: ceph" msgstr "" #: ../../source/install/deploy_ingress_controller.rst:32 msgid "" "``Functionality``: Dedicated to routing traffic specifically to the Ceph " "Rados Gateway service, ensuring efficient communication with Ceph storage " "resources." msgstr "" #: ../../source/install/deploy_ingress_controller.rst:36 msgid "" "By deploying these three distinct ingress controller instances in their " "respective namespaces, we optimize traffic management and security within " "the OpenStack-Helm environment." msgstr "" #: ../../source/install/deploy_ingress_controller.rst:40 msgid "" "To deploy these three ingress controller instances, use the script `ingress." "sh`_" msgstr "" #: ../../source/install/deploy_ingress_controller.rst:48 msgid "" "This script uses a Helm chart from the `openstack-helm-infra`_ repository. We " "assume this repo is cloned to the `~/osh` directory. See this :doc:`section " "`." msgstr "" #: ../../source/install/deploy_kubernetes.rst:2 #: ../../source/install/deploy_kubernetes.rst:105 msgid "Deploy Kubernetes" msgstr "" #: ../../source/install/deploy_kubernetes.rst:4 msgid "" "OpenStack-Helm provides charts that can be deployed on any Kubernetes " "cluster if it meets the supported version requirements. However, deploying " "the Kubernetes cluster itself is beyond the scope of OpenStack-Helm." msgstr "" #: ../../source/install/deploy_kubernetes.rst:8 msgid "" "You can use any Kubernetes deployment tool for this purpose. In this guide, " "we detail how to set up a Kubernetes cluster using Kubeadm and Ansible. " "While not production-ready, this cluster is ideal as a starting point for " "lab or proof-of-concept environments." 
msgstr "" #: ../../source/install/deploy_kubernetes.rst:12 msgid "" "All OpenStack projects test their code through an infrastructure managed by " "the CI tool, Zuul, which executes Ansible playbooks on one or more test " "nodes. Therefore, we employ Ansible roles/playbooks to install required " "packages, deploy Kubernetes, and then execute tests on it." msgstr "" #: ../../source/install/deploy_kubernetes.rst:16 msgid "" "To establish a test environment, the Ansible role deploy-env_ is employed. " "This role establishes a basic single/multi-node Kubernetes cluster, ensuring " "the functionality of commonly used deployment configurations. The role is " "compatible with Ubuntu Focal and Ubuntu Jammy distributions." msgstr "" #: ../../source/install/deploy_kubernetes.rst:21 msgid "Install Ansible" msgstr "" #: ../../source/install/deploy_kubernetes.rst:28 msgid "Prepare Ansible roles" msgstr "" #: ../../source/install/deploy_kubernetes.rst:30 msgid "" "Here is the Ansible `playbook`_ that is used to deploy Kubernetes. The roles " "used in this playbook are defined in different repositories. So, in addition " "to the OpenStack-Helm repositories, which we assume have already been cloned " "to the `~/osh` directory, you have to clone one more" msgstr "" #: ../../source/install/deploy_kubernetes.rst:40 msgid "" "Now let's set the environment variable ``ANSIBLE_ROLES_PATH``, which " "specifies where Ansible will look up roles" msgstr "" #: ../../source/install/deploy_kubernetes.rst:47 msgid "" "To avoid setting it every time you start a new terminal instance, you " "can define this in the Ansible configuration file. Please see the Ansible " "documentation." msgstr "" #: ../../source/install/deploy_kubernetes.rst:51 msgid "Prepare Ansible inventory" msgstr "" #: ../../source/install/deploy_kubernetes.rst:53 msgid "" "We assume you have three nodes, usually VMs. 
Those nodes must be available " "via SSH using public key authentication, and an SSH user (let's say " "`ubuntu`) must have passwordless sudo on the nodes." msgstr "" #: ../../source/install/deploy_kubernetes.rst:57 msgid "Create the Ansible inventory file using the following command" msgstr "" #: ../../source/install/deploy_kubernetes.rst:97 msgid "If you have just one node, then it must be `primary` in the file above." msgstr "" #: ../../source/install/deploy_kubernetes.rst:100 msgid "" "If you would like to set up a Kubernetes cluster on the local host, " "configure the Ansible inventory to designate the `primary` node as the local " "host. For further guidance, please refer to the Ansible documentation." msgstr "" #: ../../source/install/deploy_kubernetes.rst:112 msgid "" "The playbook only changes the state of the nodes listed in the Ansible " "inventory." msgstr "" #: ../../source/install/deploy_kubernetes.rst:114 msgid "" "It installs the necessary packages and deploys and configures Containerd and " "Kubernetes. For details, please refer to the role `deploy-env`_ and other " "roles (`ensure-python`_, `ensure-pip`_, `clear-firewall`_) used in the " "playbook." msgstr "" #: ../../source/install/deploy_kubernetes.rst:119 msgid "" "The role `deploy-env`_ by default will use Google DNS servers, 8.8.8.8 or " "8.8.4.4, and update `/etc/resolv.conf` on the nodes. These DNS nameserver " "entries can be changed by updating the file ``~/osh/openstack-helm-infra/" "roles/deploy-env/files/resolv.conf``." msgstr "" #: ../../source/install/deploy_kubernetes.rst:123 msgid "" "It also configures the internal Kubernetes DNS server (CoreDNS) to work as a " "recursive DNS server and adds its IP address (10.96.0.10 by default) to the " "`/etc/resolv.conf` file." msgstr "" #: ../../source/install/deploy_kubernetes.rst:126 msgid "" "Programs running on those nodes will be able to resolve names in the default " "Kubernetes domain `.svc.cluster.local`. For example, 
if you run the OpenStack command " "line client on one of those nodes, it will be able to access OpenStack API " "services via these names." msgstr "" #: ../../source/install/deploy_kubernetes.rst:132 msgid "" "The role `deploy-env`_ installs and configures Kubectl and Helm on the " "`primary` node. You can log in to it via SSH, clone the `openstack-helm`_ and " "`openstack-helm-infra`_ repositories, and then run the OpenStack-Helm " "deployment scripts, which employ Kubectl and Helm to deploy OpenStack." msgstr "" #: ../../source/install/deploy_openstack.rst:2 msgid "Deploy OpenStack" msgstr "" #: ../../source/install/deploy_openstack.rst:4 msgid "" "Now we are ready for the deployment of OpenStack components. Some of them " "are mandatory, while others are optional." msgstr "" #: ../../source/install/deploy_openstack.rst:8 msgid "Keystone" msgstr "" #: ../../source/install/deploy_openstack.rst:10 msgid "" "OpenStack Keystone is the identity and authentication service for the " "OpenStack cloud computing platform. It serves as the central point of " "authentication and authorization, managing user identities, roles, and " "access to OpenStack resources. Keystone ensures secure and controlled access " "to various OpenStack services, making it an integral component for user " "management and security in OpenStack deployments." msgstr "" #: ../../source/install/deploy_openstack.rst:18 msgid "This is a ``mandatory`` component of any OpenStack cluster." msgstr "" #: ../../source/install/deploy_openstack.rst:20 msgid "To deploy the Keystone service, run the script `keystone.sh`_" msgstr "" #: ../../source/install/deploy_openstack.rst:29 msgid "Heat" msgstr "" #: ../../source/install/deploy_openstack.rst:31 msgid "" "OpenStack Heat is an orchestration service that provides templates and " "automation for deploying and managing cloud resources. 
It enables users to " "define infrastructure as code, making it easier to create and manage complex " "environments in OpenStack through templates and automation scripts." msgstr "" #: ../../source/install/deploy_openstack.rst:37 msgid "Here is the script `heat.sh`_ for the deployment of the Heat service." msgstr "" #: ../../source/install/deploy_openstack.rst:45 msgid "Glance" msgstr "" #: ../../source/install/deploy_openstack.rst:47 msgid "" "OpenStack Glance is the image service component of OpenStack. It manages and " "catalogs virtual machine images, such as operating system images and " "snapshots, making them available for use in OpenStack compute instances." msgstr "" #: ../../source/install/deploy_openstack.rst:52 msgid "This is a ``mandatory`` component." msgstr "" #: ../../source/install/deploy_openstack.rst:54 msgid "The Glance deployment script is here: `glance.sh`_." msgstr "" #: ../../source/install/deploy_openstack.rst:62 msgid "Placement, Nova, Neutron" msgstr "" #: ../../source/install/deploy_openstack.rst:64 msgid "" "OpenStack Placement is a service that helps manage and allocate resources in " "an OpenStack cloud environment. It helps Nova (compute) find and allocate " "the right resources (CPU, memory, etc.) for virtual machine instances." msgstr "" #: ../../source/install/deploy_openstack.rst:69 msgid "" "OpenStack Nova is the compute service responsible for managing and " "orchestrating virtual machines in an OpenStack cloud. It provisions and " "schedules instances, handles their lifecycle, and interacts with underlying " "hypervisors." msgstr "" #: ../../source/install/deploy_openstack.rst:74 msgid "" "OpenStack Neutron is the networking service that provides network " "connectivity and enables users to create and manage network resources for " "their virtual machines and other services." 
msgstr "" #: ../../source/install/deploy_openstack.rst:78 msgid "" "These three services are ``mandatory`` and together constitute the so-called " "``compute kit``." msgstr "" #: ../../source/install/deploy_openstack.rst:81 msgid "" "To set up the compute service, the first step involves deploying the " "hypervisor backend using the `libvirt.sh`_ script. By default, the " "networking service is deployed with OpenvSwitch as the networking backend, " "and the deployment script for OpenvSwitch can be found here: `openvswitch." "sh`_. Finally, the deployment script for Placement, Nova, and Neutron is " "here: `compute-kit.sh`_." msgstr "" #: ../../source/install/deploy_openstack.rst:96 msgid "Cinder" msgstr "" #: ../../source/install/deploy_openstack.rst:98 msgid "" "OpenStack Cinder is the block storage service component of the OpenStack " "cloud computing platform. It manages and provides persistent block storage " "to virtual machines, enabling users to attach and detach persistent storage " "volumes to their VMs as needed." msgstr "" #: ../../source/install/deploy_openstack.rst:103 msgid "To deploy the OpenStack Cinder service, use the script `cinder.sh`_" msgstr "" #: ../../source/install/deploy_openstack_backend.rst:2 msgid "Deploy OpenStack backend" msgstr "" #: ../../source/install/deploy_openstack_backend.rst:4 msgid "" "OpenStack is a cloud computing platform that consists of a variety of " "services, and many of these services rely on backend services like RabbitMQ, " "MariaDB, and Memcached for their proper functioning. These backend services " "play crucial roles in OpenStack's architecture." msgstr "" #: ../../source/install/deploy_openstack_backend.rst:10 msgid "RabbitMQ" msgstr "" #: ../../source/install/deploy_openstack_backend.rst:11 msgid "" "RabbitMQ is a message broker that is often used in OpenStack to handle " "messaging between different components and services. 
It helps in managing " "communication and coordination between various parts of the OpenStack " "infrastructure. Services like Nova (compute), Neutron (networking), and " "Cinder (block storage) use RabbitMQ to exchange messages and ensure proper " "orchestration." msgstr "" #: ../../source/install/deploy_openstack_backend.rst:19 msgid "MariaDB" msgstr "" #: ../../source/install/deploy_openstack_backend.rst:20 msgid "" "Database services like MariaDB are used as the backend database for several " "OpenStack services. These databases store critical information such as user " "credentials, service configurations, and data related to instances, " "networks, and volumes. Services like Keystone (identity), Nova, Glance " "(image), and Cinder rely on MariaDB for data storage." msgstr "" #: ../../source/install/deploy_openstack_backend.rst:27 msgid "Memcached" msgstr "" #: ../../source/install/deploy_openstack_backend.rst:28 msgid "" "Memcached is a distributed memory object caching system that is often used " "in OpenStack to improve performance and reduce database load. OpenStack " "services cache frequently accessed data in Memcached, which helps in faster " "data retrieval and reduces the load on the database backend. Services like " "Keystone and Nova can benefit from Memcached for caching." msgstr "" #: ../../source/install/deploy_openstack_backend.rst:35 msgid "Deployment" msgstr "" #: ../../source/install/deploy_openstack_backend.rst:37 msgid "" "The following scripts `rabbitmq.sh`_, `mariadb.sh`_, `memcached.sh`_ can be " "used to deploy the backend services." msgstr "" #: ../../source/install/deploy_openstack_backend.rst:48 msgid "" "These scripts use Helm charts from the `openstack-helm-infra`_ repository. " "We assume this repo is cloned to the `~/osh` directory. See this :doc:" "`section `." 
msgstr "" #: ../../source/install/index.rst:2 msgid "Installation" msgstr "" #: ../../source/install/index.rst:4 msgid "Contents:" msgstr "" #: ../../source/install/prepare_kubernetes.rst:2 msgid "Prepare Kubernetes" msgstr "" #: ../../source/install/prepare_kubernetes.rst:4 msgid "" "In this section, we assume you have a working Kubernetes cluster and Kubectl " "and Helm properly configured to interact with the cluster." msgstr "" #: ../../source/install/prepare_kubernetes.rst:7 msgid "" "Before deploying OpenStack components using OpenStack-Helm, you have to set " "labels on the Kubernetes worker nodes, which are used as node selectors." msgstr "" #: ../../source/install/prepare_kubernetes.rst:10 msgid "Also, the necessary namespaces must be created." msgstr "" #: ../../source/install/prepare_kubernetes.rst:12 msgid "" "You can use the `prepare-k8s.sh`_ script as an example of how to prepare the " "Kubernetes cluster for OpenStack deployment. The script is assumed to be run " "from the openstack-helm repository" msgstr "" #: ../../source/install/prepare_kubernetes.rst:23 msgid "" "Note that in the above script we set labels on all Kubernetes nodes, " "including the control plane nodes, which are usually not meant to run " "workload pods (OpenStack pods in our case). So you have to either untaint " "the control plane nodes or modify the `prepare-k8s.sh`_ script so that it " "sets labels only on the worker nodes." msgstr "" #: ../../source/install/setup_openstack_client.rst:2 msgid "Setup OpenStack client" msgstr "" #: ../../source/install/setup_openstack_client.rst:4 msgid "" "The OpenStack client software is a crucial tool for interacting with " "OpenStack services. In certain OpenStack-Helm deployment scripts, the " "OpenStack client software is utilized to conduct essential checks during " "deployment. Therefore, installing the OpenStack client on the developer's " "machine is a vital step." 
msgstr "" #: ../../source/install/setup_openstack_client.rst:10 msgid "" "The script `setup-client.sh`_ can be used to set up the OpenStack client." msgstr "" #: ../../source/install/setup_openstack_client.rst:18 msgid "" "At this point, keep in mind that the above script configures the " "OpenStack client so that it uses internal Kubernetes FQDNs like `keystone." "openstack.svc.cluster.local`. In order to be able to resolve these internal " "names, you have to configure the Kubernetes authoritative DNS server " "(CoreDNS) to work as a recursive resolver and then add its IP (`10.96.0.10` " "by default) to `/etc/resolv.conf`. This is only going to work when you try " "to access OpenStack services from one of the Kubernetes nodes, because IPs " "from the Kubernetes service network are routed only between Kubernetes nodes." msgstr "" #: ../../source/install/setup_openstack_client.rst:27 msgid "" "If you wish to access OpenStack services from outside the Kubernetes " "cluster, you need to expose the OpenStack Ingress controller using an IP " "address accessible from outside the Kubernetes cluster, typically achieved " "through solutions like `MetalLB`_ or similar tools. In this scenario, you " "should also ensure that you have set up proper FQDN resolution to map to the " "external IP address and create the necessary Ingress objects for the " "associated FQDN." msgstr ""