# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2015-2016, OpenStack contributors
# This file is distributed under the same license as the Administrator Guide package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: Administrator Guide 0.9\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2016-05-04 06:21+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"

#: ../baremetal.rst:5
msgid "Bare Metal"
msgstr ""

#: ../baremetal.rst:7
msgid "The Bare Metal service provides physical hardware management features."
msgstr ""

# #-#-#-#-# baremetal.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# database.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# orchestration-introduction.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# shared_file_systems_intro.pot (Administrator Guide 0.9) #-#-#-#-#
#: ../baremetal.rst:10 ../database.rst:10 ../orchestration-introduction.rst:3
#: ../shared_file_systems_intro.rst:5
msgid "Introduction"
msgstr ""

#: ../baremetal.rst:12
msgid ""
"The Bare Metal service provides physical hardware as opposed to virtual "
"machines. It also provides several reference drivers, which leverage common "
"technologies like PXE and IPMI, to cover a wide range of hardware. The "
"pluggable driver architecture also allows vendor-specific drivers to be "
"added for improved performance or functionality not provided by reference "
"drivers. The Bare Metal service makes physical servers as easy to provision "
"as virtual machines in a cloud, which in turn will open up new avenues for "
"enterprises and service providers."
msgstr ""

# #-#-#-#-# baremetal.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute_arch.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-system-architecture.pot (Administrator Guide 0.9) #-#-#-#-#
#: ../baremetal.rst:23 ../compute_arch.rst:3
#: ../telemetry-system-architecture.rst:5
msgid "System architecture"
msgstr ""

#: ../baremetal.rst:25
msgid "The Bare Metal service is composed of the following components:"
msgstr ""

#: ../baremetal.rst:27
msgid ""
"An admin-only RESTful API service, by which privileged users, such as "
"operators and other services within the cloud control plane, may interact "
"with the managed bare-metal servers."
msgstr ""

#: ../baremetal.rst:31
msgid ""
"A conductor service, which conducts all activity related to bare-metal "
"deployments. Functionality is exposed via the API service. The Bare Metal "
"service conductor and API service communicate via RPC."
msgstr ""

#: ../baremetal.rst:36
msgid ""
"Various drivers that support heterogeneous hardware, which enable features "
"specific to unique hardware platforms and leverage divergent capabilities "
"via a common API."
msgstr ""

#: ../baremetal.rst:40
msgid ""
"A message queue, which is a central hub for passing messages, such as "
"RabbitMQ. It should use the same implementation as that of the Compute "
"service."
msgstr ""

#: ../baremetal.rst:44
msgid ""
"A database for storing information about the resources. Among other things, "
"this includes the state of the conductors, nodes (physical servers), and "
"drivers."
msgstr ""

#: ../baremetal.rst:48
msgid ""
"When a user requests to boot an instance, the request is passed to the "
"Compute service via the Compute service API and scheduler. The Compute "
"service hands over this request to the Bare Metal service, where the "
"request passes from the Bare Metal service API to the conductor, which "
"invokes a driver to provision a physical server for the user."
msgstr ""

#: ../baremetal.rst:56
msgid "Bare Metal deployment"
msgstr ""

#: ../baremetal.rst:58
msgid "PXE deploy process"
msgstr ""

#: ../baremetal.rst:60
msgid "Agent deploy process"
msgstr ""

#: ../baremetal.rst:65
msgid "Use Bare Metal"
msgstr ""

#: ../baremetal.rst:67
msgid "Install the Bare Metal service."
msgstr ""

#: ../baremetal.rst:69
msgid "Set up the Bare Metal driver in the compute node's ``nova.conf`` file."
msgstr ""

#: ../baremetal.rst:71
msgid "Set up the TFTP folder and prepare the PXE boot loader file."
msgstr ""

#: ../baremetal.rst:73
msgid "Prepare the bare metal flavor."
msgstr ""

#: ../baremetal.rst:75
msgid "Register the nodes with the correct drivers."
msgstr ""

#: ../baremetal.rst:77
msgid "Configure the driver information."
msgstr ""

#: ../baremetal.rst:79
msgid "Register the port information."
msgstr ""

#: ../baremetal.rst:81
msgid "Use nova boot to kick off the bare metal provisioning."
msgstr ""

#: ../baremetal.rst:83
msgid "Check the nodes' provision state and power state."
msgstr ""

# #-#-#-#-# baremetal.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# cross_project_cors.pot (Administrator Guide 0.9) #-#-#-#-#
#: ../baremetal.rst:88 ../cross_project_cors.rst:114
msgid "Troubleshooting"
msgstr ""

#: ../baremetal.rst:91
msgid "No valid host found error"
msgstr ""

# #-#-#-#-# baremetal.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-networking-nova.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# cross_project_cors.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# identity_troubleshoot.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# objectstorage-troubleshoot.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# shared_file_systems_troubleshoot.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# support-compute.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_cinder_config.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-duplicate-3par-host.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-eql-volume-size.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-failed-attach-vol-after-detach.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-failed-attach-vol-no-sysfsutils.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-failed-connect-vol-FC-SAN.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-HTTP-bad-req-in-cinder-vol-log.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_multipath_warn.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_no_emulator_x86_64.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_non_existent_host.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_non_existent_vlun.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_vol_attach_miss_sg_scan.pot (Administrator Guide 0.9) #-#-#-#-#
#: ../baremetal.rst:94 ../compute-networking-nova.rst:849
#: ../compute-networking-nova.rst:920 ../compute-networking-nova.rst:944
#: ../compute-networking-nova.rst:1015 ../cross_project_cors.rst:128
#: ../cross_project_cors.rst:142 ../cross_project_cors.rst:154
#: ../identity_troubleshoot.rst:23 ../identity_troubleshoot.rst:147
#: ../identity_troubleshoot.rst:179 ../objectstorage-troubleshoot.rst:14
#: ../objectstorage-troubleshoot.rst:40 ../objectstorage-troubleshoot.rst:69
#: ../objectstorage-troubleshoot.rst:125
#: ../shared_file_systems_troubleshoot.rst:11
#: ../shared_file_systems_troubleshoot.rst:33
#: ../shared_file_systems_troubleshoot.rst:54
#: ../shared_file_systems_troubleshoot.rst:69
#: ../shared_file_systems_troubleshoot.rst:96 ../support-compute.rst:98
#: ../support-compute.rst:128 ../support-compute.rst:175
#: ../support-compute.rst:207 ../support-compute.rst:237
#: ../support-compute.rst:267 ../ts-HTTP-bad-req-in-cinder-vol-log.rst:6
#: ../ts-duplicate-3par-host.rst:6 ../ts-eql-volume-size.rst:6
#: ../ts-failed-attach-vol-after-detach.rst:6
#: ../ts-failed-attach-vol-no-sysfsutils.rst:6
#: ../ts-failed-connect-vol-FC-SAN.rst:6 ../ts_cinder_config.rst:106
#: ../ts_cinder_config.rst:136 ../ts_cinder_config.rst:159
#: ../ts_cinder_config.rst:177 ../ts_multipath_warn.rst:6
#: ../ts_no_emulator_x86_64.rst:6 ../ts_non_existent_host.rst:6
#: ../ts_non_existent_vlun.rst:6 ../ts_vol_attach_miss_sg_scan.rst:6
msgid "Problem"
msgstr ""

#: ../baremetal.rst:96
msgid ""
"Sometimes ``/var/log/nova/nova-conductor.log`` contains the following error:"
msgstr ""

#: ../baremetal.rst:102
msgid ""
"The message ``No valid host was found`` means that the Compute service "
"scheduler could not find a bare metal node suitable for booting the new "
"instance."
msgstr ""

#: ../baremetal.rst:106
msgid ""
"This means there is a mismatch between the resources that the Compute "
"service expects to find and the resources that the Bare Metal service "
"advertised to the Compute service."
msgstr ""

# #-#-#-#-# baremetal.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-networking-nova.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# cross_project_cors.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# identity_troubleshoot.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# objectstorage-troubleshoot.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# shared_file_systems_troubleshoot.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# support-compute.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_cinder_config.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-duplicate-3par-host.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-eql-volume-size.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-failed-attach-vol-after-detach.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-failed-attach-vol-no-sysfsutils.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-failed-connect-vol-FC-SAN.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-HTTP-bad-req-in-cinder-vol-log.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_multipath_warn.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_no_emulator_x86_64.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_non_existent_host.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_non_existent_vlun.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_vol_attach_miss_sg_scan.pot (Administrator Guide 0.9) #-#-#-#-#
#: ../baremetal.rst:111 ../compute-networking-nova.rst:854
#: ../compute-networking-nova.rst:926 ../compute-networking-nova.rst:958
#: ../compute-networking-nova.rst:1021 ../cross_project_cors.rst:133
#: ../cross_project_cors.rst:147 ../cross_project_cors.rst:159
#: ../identity_troubleshoot.rst:35 ../identity_troubleshoot.rst:154
#: ../identity_troubleshoot.rst:185 ../objectstorage-troubleshoot.rst:19
#: ../objectstorage-troubleshoot.rst:46 ../objectstorage-troubleshoot.rst:75
#: ../objectstorage-troubleshoot.rst:131
#: ../shared_file_systems_troubleshoot.rst:16
#: ../shared_file_systems_troubleshoot.rst:39
#: ../shared_file_systems_troubleshoot.rst:59
#: ../shared_file_systems_troubleshoot.rst:76
#: ../shared_file_systems_troubleshoot.rst:103 ../support-compute.rst:103
#: ../support-compute.rst:137 ../support-compute.rst:188
#: ../support-compute.rst:212 ../support-compute.rst:243
#: ../support-compute.rst:274 ../ts-HTTP-bad-req-in-cinder-vol-log.rst:43
#: ../ts-duplicate-3par-host.rst:20 ../ts-eql-volume-size.rst:134
#: ../ts-failed-attach-vol-after-detach.rst:11
#: ../ts-failed-attach-vol-no-sysfsutils.rst:23
#: ../ts-failed-connect-vol-FC-SAN.rst:26 ../ts_cinder_config.rst:120
#: ../ts_cinder_config.rst:146 ../ts_cinder_config.rst:164
#: ../ts_cinder_config.rst:190 ../ts_multipath_warn.rst:23
#: ../ts_no_emulator_x86_64.rst:12 ../ts_non_existent_host.rst:19
#: ../ts_non_existent_vlun.rst:18 ../ts_vol_attach_miss_sg_scan.rst:22
msgid "Solution"
msgstr ""

#: ../baremetal.rst:113
msgid "If you get this message, check the following:"
msgstr ""

#: ../baremetal.rst:115
msgid ""
"Introspection should have succeeded earlier, or you should have entered the "
"required bare-metal node properties manually. For each node in "
":command:`ironic node-list` use:"
msgstr ""

#: ../baremetal.rst:123
msgid ""
"and make sure that the ``properties`` JSON field has valid values for the "
"keys ``cpus``, ``cpu_arch``, ``memory_mb``, and ``local_gb``."
msgstr ""

#: ../baremetal.rst:126
msgid ""
"Make sure that the flavor in the Compute service that you are using does "
"not exceed the bare-metal node properties above for the required number of "
"nodes. Use:"
msgstr ""

#: ../baremetal.rst:133
msgid ""
"Make sure that enough nodes are in the ``available`` state according to "
":command:`ironic node-list`. Nodes in the ``manageable`` state usually "
"indicate failed introspection."
msgstr ""

#: ../baremetal.rst:137
msgid ""
"Make sure the nodes you are going to deploy to are not in maintenance mode. "
"Use :command:`ironic node-list` to check. A node automatically entering "
"maintenance mode usually indicates incorrect credentials for this node. "
"Check them and then remove maintenance mode:"
msgstr ""

#: ../baremetal.rst:146
msgid ""
"It takes some time for node information to propagate from the Bare Metal "
"service to the Compute service after introspection. Our tooling usually "
"accounts for it, but if you did some steps manually there may be a period "
"of time when nodes are not yet available to the Compute service. Check "
"that :command:`nova hypervisor-stats` correctly shows the total amount of "
"resources in your system."
msgstr ""

#: ../blockstorage-api-throughput.rst:3
msgid "Increase Block Storage API service throughput"
msgstr ""

#: ../blockstorage-api-throughput.rst:5
msgid ""
"By default, the Block Storage API service runs in one process. This limits "
"the number of API requests that the Block Storage service can process at "
"any given time. In a production environment, you should increase the Block "
"Storage API throughput by allowing the Block Storage API service to run in "
"as many processes as the machine capacity allows."
msgstr ""

#: ../blockstorage-api-throughput.rst:13
msgid ""
"The Block Storage API service is named ``openstack-cinder-api`` on the "
"following distributions: CentOS, Fedora, openSUSE, Red Hat Enterprise "
"Linux, and SUSE Linux Enterprise. In Ubuntu and Debian distributions, the "
"Block Storage API service is named ``cinder-api``."
msgstr ""

#: ../blockstorage-api-throughput.rst:18
msgid ""
"To do so, use the Block Storage API service option "
"``osapi_volume_workers``. This option allows you to specify the number of "
"API service workers (or OS processes) to launch for the Block Storage API "
"service."
msgstr ""

#: ../blockstorage-api-throughput.rst:22
msgid ""
"To configure this option, open the ``/etc/cinder/cinder.conf`` "
"configuration file and set the ``osapi_volume_workers`` configuration key "
"to the number of CPU cores/threads on a machine."
msgstr ""

# #-#-#-#-# blockstorage-api-throughput.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_glusterfs_backend.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_nfs_backend.pot (Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage-api-throughput.rst:26
#: ../blockstorage_glusterfs_backend.rst:172
#: ../blockstorage_nfs_backend.rst:72 ../blockstorage_nfs_backend.rst:99
#: ../blockstorage_nfs_backend.rst:117 ../blockstorage_nfs_backend.rst:141
msgid ""
"On distributions that include ``openstack-config``, you can configure this "
"by running the following command instead:"
msgstr ""

#: ../blockstorage-api-throughput.rst:34
msgid "Replace ``CORES`` with the number of CPU cores/threads on a machine."
msgstr ""

#: ../blockstorage-boot-from-volume.rst:3
msgid "Boot from volume"
msgstr ""

#: ../blockstorage-boot-from-volume.rst:5
msgid ""
"In some cases, you can store and run instances from inside volumes. For "
"information, see the `Launch an instance from a volume`_ section in the "
"`OpenStack End User Guide`_."
msgstr ""

# #-#-#-#-# blockstorage-consistency-groups.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# shared_file_systems_cgroups.pot (Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage-consistency-groups.rst:3
#: ../shared_file_systems_cgroups.rst:5 ../shared_file_systems_cgroups.rst:28
msgid "Consistency groups"
msgstr ""

#: ../blockstorage-consistency-groups.rst:5
msgid ""
"Consistency group support is available in OpenStack Block Storage. Support "
"has been added for creating snapshots of consistency groups. This feature "
"leverages storage-level consistency technology. It allows snapshots of "
"multiple volumes in the same consistency group to be taken at the same "
"point in time to ensure data consistency. The consistency group operations "
"can be performed using the Block Storage command line."
msgstr ""

#: ../blockstorage-consistency-groups.rst:14
msgid ""
"Only the Block Storage V2 API supports consistency groups. You can specify "
":option:`--os-volume-api-version 2` when using the Block Storage command "
"line for consistency group operations."
msgstr ""

#: ../blockstorage-consistency-groups.rst:18
msgid ""
"Before using consistency groups, make sure the Block Storage driver that "
"you are running has consistency group support by reading the Block Storage "
"manual or consulting the driver maintainer. Only a small number of drivers "
"have implemented this feature. The default LVM driver does not support "
"consistency groups yet because the consistency technology is not available "
"at the storage level."
msgstr ""

#: ../blockstorage-consistency-groups.rst:25
msgid ""
"Before using consistency groups, you must change the policies for the "
"consistency group APIs in the ``/etc/cinder/policy.json`` file. By default, "
"the consistency group APIs are disabled. Enable them before running "
"consistency group operations."
msgstr ""

#: ../blockstorage-consistency-groups.rst:30
msgid "Here are the existing policy entries for consistency groups:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:44
msgid "Remove ``group:nobody`` to enable these APIs:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:58
msgid "Restart the Block Storage API service after changing the policies."
msgstr ""

#: ../blockstorage-consistency-groups.rst:60
msgid "The following consistency group operations are supported:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:62
msgid "Create a consistency group, given volume types."
msgstr ""

#: ../blockstorage-consistency-groups.rst:66
msgid ""
"A consistency group can support more than one volume type. The scheduler "
"is responsible for finding a back end that can support all given volume "
"types."
msgstr ""

#: ../blockstorage-consistency-groups.rst:70
msgid ""
"A consistency group can only contain volumes hosted by the same back end."
msgstr ""

#: ../blockstorage-consistency-groups.rst:73
msgid ""
"A consistency group is empty upon its creation. Volumes need to be created "
"and added to it later."
msgstr ""

#: ../blockstorage-consistency-groups.rst:76
msgid "Show a consistency group."
msgstr ""

#: ../blockstorage-consistency-groups.rst:78
msgid "List consistency groups."
msgstr ""

#: ../blockstorage-consistency-groups.rst:80
msgid ""
"Create a volume and add it to a consistency group, given a volume type and "
"consistency group ID."
msgstr ""

#: ../blockstorage-consistency-groups.rst:83
msgid "Create a snapshot for a consistency group."
msgstr ""

#: ../blockstorage-consistency-groups.rst:85
msgid "Show a snapshot of a consistency group."
msgstr ""

#: ../blockstorage-consistency-groups.rst:87
msgid "List consistency group snapshots."
msgstr ""

#: ../blockstorage-consistency-groups.rst:89
msgid "Delete a snapshot of a consistency group."
msgstr ""

#: ../blockstorage-consistency-groups.rst:91
msgid "Delete a consistency group."
msgstr ""

#: ../blockstorage-consistency-groups.rst:93
msgid "Modify a consistency group."
msgstr ""

#: ../blockstorage-consistency-groups.rst:95
msgid ""
"Create a consistency group from the snapshot of another consistency group."
msgstr ""

#: ../blockstorage-consistency-groups.rst:98
msgid "Create a consistency group from a source consistency group."
msgstr ""

#: ../blockstorage-consistency-groups.rst:100
msgid ""
"The following operations are not allowed if a volume is in a consistency "
"group:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:103
msgid "Volume migration."
msgstr ""

#: ../blockstorage-consistency-groups.rst:105
msgid "Volume retype."
msgstr ""

#: ../blockstorage-consistency-groups.rst:107
msgid "Volume deletion."
msgstr ""

#: ../blockstorage-consistency-groups.rst:111
msgid "A consistency group has to be deleted as a whole with all the volumes."
msgstr ""

#: ../blockstorage-consistency-groups.rst:114
msgid ""
"The following operations are not allowed if a volume snapshot is in a "
"consistency group snapshot:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:117
msgid "Volume snapshot deletion."
msgstr ""

#: ../blockstorage-consistency-groups.rst:121
msgid ""
"A consistency group snapshot has to be deleted as a whole with all the "
"volume snapshots."
msgstr ""

#: ../blockstorage-consistency-groups.rst:124
msgid ""
"The details of consistency group operations are shown in the following "
"examples."
msgstr ""

#: ../blockstorage-consistency-groups.rst:126
msgid "**Create a consistency group**:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:138
msgid ""
"The parameter ``volume-types`` is required. It can be a list of names or "
"UUIDs of volume types separated by commas without spaces in between. For "
"example, ``volumetype1,volumetype2,volumetype3``."
msgstr ""

#: ../blockstorage-consistency-groups.rst:157
msgid "**Show a consistency group**:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:175
msgid "**List consistency groups**:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:188
msgid "**Create a volume and add it to a consistency group**:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:192
msgid ""
"When creating a volume and adding it to a consistency group, a volume type "
"and a consistency group ID must be provided. This is because a consistency "
"group can support more than one volume type."
msgstr ""

#: ../blockstorage-consistency-groups.rst:229
msgid "**Create a snapshot for a consistency group**:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:246
msgid "**Show a snapshot of a consistency group**:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:252
msgid "**List consistency group snapshots**:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:267
msgid "**Delete a snapshot of a consistency group**:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:273
msgid "**Delete a consistency group**:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:277
msgid ""
"The force flag is needed when there are volumes in the consistency group:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:284
msgid "**Modify a consistency group**:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:295
msgid ""
"The parameter ``CG`` is required. It can be a name or UUID of a consistency "
"group. UUID1,UUID2,...... are UUIDs of one or more volumes to be added to "
"the consistency group, separated by commas. Default is None. "
"UUID3,UUID4,...... are UUIDs of one or more volumes to be removed from the "
"consistency group, separated by commas. Default is None."
msgstr ""

#: ../blockstorage-consistency-groups.rst:308
msgid ""
"**Create a consistency group from the snapshot of another consistency "
"group**:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:318
msgid ""
"The parameter ``CGSNAPSHOT`` is a name or UUID of a snapshot of a "
"consistency group:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:326
msgid "**Create a consistency group from a source consistency group**:"
msgstr ""

#: ../blockstorage-consistency-groups.rst:335
msgid ""
"The parameter ``SOURCECG`` is a name or UUID of a source consistency group:"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:5
msgid "Configure and use driver filter and weighing for scheduler"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:7
msgid ""
"OpenStack Block Storage enables you to choose a volume back end based on "
"back-end specific properties by using the DriverFilter and GoodnessWeigher "
"for the scheduler. The driver filter and weigher scheduling can help ensure "
"that the scheduler chooses the best back end based on requested volume "
"properties as well as various back-end specific properties."
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:15
msgid "What is driver filter and weigher and when to use it"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:17
msgid ""
"The driver filter and weigher give you the ability to more finely control "
"how the OpenStack Block Storage scheduler chooses the best back end to use "
"when handling a volume request. One example scenario where the driver "
"filter and weigher are useful is a back end that uses thin provisioning. "
"The default filters use the ``free capacity`` property to determine the "
"best back end, but that is not always perfect. If a back end has the "
"ability to provide a more accurate back-end specific value, you can use "
"that as part of the weighing. Another example where the driver filter and "
"weigher can prove useful is a back end with a hard limit of 1000 volumes "
"and a maximum volume size of 500 GB, whose performance degrades once 75% "
"of the total space is occupied. The driver filter and weigher provide a "
"way to check these limits."
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:32
msgid "Enable driver filter and weighing"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:34
msgid ""
"To enable the driver filter, set the ``scheduler_default_filters`` option "
"in the ``cinder.conf`` file to ``DriverFilter`` or add it to the list if "
"other filters are already present."
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:38
msgid ""
"To enable the goodness filter as a weigher, set the "
"``scheduler_default_weighers`` option in the ``cinder.conf`` file to "
"``GoodnessWeigher`` or add it to the list if other weighers are already "
"present."
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:43
msgid ""
"You can choose to use the ``DriverFilter`` without the ``GoodnessWeigher`` "
"or vice versa. The filter and weigher working together, however, provide "
"the most benefit in helping the scheduler choose an ideal back end."
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:50
msgid ""
"The support for the ``DriverFilter`` and ``GoodnessWeigher`` is optional "
"for back ends. If you are using a back end that does not support the filter "
"and weigher functionality, you may not get the full benefit."
msgstr ""

# #-#-#-#-# blockstorage-driver-filter-weighing.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_image_volume_cache.pot (Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage-driver-filter-weighing.rst:55
#: ../blockstorage_image_volume_cache.rst:38
msgid "Example ``cinder.conf`` configuration file:"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:64
msgid ""
"It is useful to use the other filters and weighers available in OpenStack "
"in combination with these custom ones. For example, the ``CapacityFilter`` "
"and ``CapacityWeigher`` can be combined with these."
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:70
msgid "Defining your own filter and goodness functions"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:72
msgid ""
"You can define your own filter and goodness functions through the use of "
"various properties that OpenStack Block Storage has exposed. Properties "
"exposed include information about the volume request being made, "
"``volume_type`` settings, and back-end specific information about drivers. "
"Together, these give you extensive control over how the ideal back end for "
"a volume request is decided."
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:79
msgid ""
"The ``filter_function`` option is a string defining an equation that "
"determines whether a back end should be considered as a potential "
"candidate by the scheduler."
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:83
msgid ""
"The ``goodness_function`` option is a string defining an equation that "
"rates the quality of the potential host (0 to 100, 0 lowest, 100 highest)."
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:89
msgid ""
"Default values for the filter and goodness functions will be used for each "
"back end if you do not define them yourself. If complete control is "
"desired, a filter and goodness function should be defined for each of the "
"back ends in the ``cinder.conf`` file."
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:96
msgid "Supported operations in filter and goodness functions"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:98
msgid ""
"Below is a table of all the operations currently usable in custom filter "
"and goodness functions created by you:"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:102
msgid "Operations"
msgstr ""

# #-#-#-#-# blockstorage-driver-filter-weighing.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-manage-logs.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_adv-features.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage-driver-filter-weighing.rst:102
#: ../compute-manage-logs.rst:202 ../networking_adv-features.rst:130
#: ../telemetry-measurements.rst:31 ../telemetry-measurements.rst:100
#: ../telemetry-measurements.rst:434 ../telemetry-measurements.rst:489
#: ../telemetry-measurements.rst:533 ../telemetry-measurements.rst:595
#: ../telemetry-measurements.rst:671 ../telemetry-measurements.rst:705
#: ../telemetry-measurements.rst:771 ../telemetry-measurements.rst:821
#: ../telemetry-measurements.rst:859 ../telemetry-measurements.rst:931
#: ../telemetry-measurements.rst:995 ../telemetry-measurements.rst:1082
#: ../telemetry-measurements.rst:1162 ../telemetry-measurements.rst:1257
#: ../telemetry-measurements.rst:1323 ../telemetry-measurements.rst:1374
#: ../telemetry-measurements.rst:1401 ../telemetry-measurements.rst:1425
#: ../telemetry-measurements.rst:1446
msgid "Type"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:104
msgid "+, -, \\*, /, ^"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:104
msgid "standard math"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:106
msgid "logic"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:106
msgid "not, and, or, &, \\|, !"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:108
msgid ">, >=, <, <=, ==, <>, !="
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:108
msgid "equality"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:110
msgid "+, -"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:110
msgid "sign"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:112
msgid "ternary"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:112
msgid "x ? a : b"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:114
msgid "abs(x), max(x, y), min(x, y)"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:114
msgid "math helper functions"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:119
msgid ""
"Syntax errors in the filter or goodness strings you define will cause "
"errors to be thrown at volume request time."
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:123
msgid "Available properties when creating custom functions"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:125
msgid ""
"There are various properties that can be used in either the "
"``filter_function`` or the ``goodness_function`` strings. The properties "
"allow access to volume info, qos settings, extra specs, and so on."
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:129
msgid ""
"The following properties and their sub-properties are currently available "
"for use:"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:133
msgid "Host stats for a back end"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:135
msgid "The host's name"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:135
msgid "host"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:138
msgid "The volume back end name"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:138
msgid "volume\\_backend\\_name"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:141
msgid "The vendor name"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:141
msgid "vendor\\_name"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:144
msgid "The driver version"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:144
msgid "driver\\_version"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:147
msgid "The storage protocol"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:147
msgid "storage\\_protocol"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:150
msgid "Boolean signifying whether QoS is supported"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:150
msgid "QoS\\_support"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:153
msgid "The total capacity in GB"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:153
msgid "total\\_capacity\\_gb"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:156
msgid "The allocated capacity in GB"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:156
msgid "allocated\\_capacity\\_gb"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:159
msgid "The reserved storage percentage"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:159
msgid "reserved\\_percentage"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:162
msgid "Capabilities specific to a back end"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:164
msgid ""
"These properties are determined by the specific back end you are creating "
"filter and goodness functions for. Some back ends may not have any "
"properties available here."
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:169
msgid "Requested volume properties"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:172
msgid "Status for the requested volume"
msgstr ""

# #-#-#-#-# blockstorage-driver-filter-weighing.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-live-migration-usage.pot (Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage-driver-filter-weighing.rst:172
#: ../compute-live-migration-usage.rst:75
msgid "status"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:175
msgid "The volume type ID"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:175
msgid "volume\\_type\\_id"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:178
msgid "The display name of the volume"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:178
msgid "display\\_name"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:181
msgid "Any metadata the volume has"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:181
msgid "volume\\_metadata"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:184
msgid "Any reservations the volume has"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:184
msgid "reservations"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:187
msgid "The volume's user ID"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:187
msgid "user\\_id"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:190
msgid "The attach status for the volume"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:190
msgid "attach\\_status"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:193
msgid "The volume's display description"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:193
msgid "display\\_description"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:196
msgid "The volume's ID"
msgstr ""

# #-#-#-#-# blockstorage-driver-filter-weighing.pot (Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-live-migration-usage.pot (Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage-driver-filter-weighing.rst:196
#: ../compute-live-migration-usage.rst:68
msgid "id"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:199
msgid "The volume's replication status"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:199
msgid "replication\\_status"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:202
msgid "The volume's snapshot ID"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:202
msgid "snapshot\\_id"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:205
msgid "The volume's encryption key ID"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:205
msgid "encryption\\_key\\_id"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:208
msgid "The source volume ID"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:208
msgid "source\\_volid"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:211
msgid "Any admin metadata for this volume"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:211
msgid "volume\\_admin\\_metadata"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:214
msgid "The source replication ID"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:214
msgid "source\\_replicaid"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:217
msgid "The consistency group ID"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:217
msgid "consistencygroup\\_id"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:220
msgid "The size of the volume in GB"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:220
msgid "size"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:223
msgid "General metadata"
msgstr ""

#: ../blockstorage-driver-filter-weighing.rst:223
msgid "metadata"
msgstr ""

#:
../blockstorage-driver-filter-weighing.rst:225 msgid "" "The most commonly used property here is the ``size`` sub-" "property." msgstr "" #: ../blockstorage-driver-filter-weighing.rst:228 msgid "Extra specs for the requested volume type" msgstr "" #: ../blockstorage-driver-filter-weighing.rst:230 #: ../blockstorage-driver-filter-weighing.rst:239 msgid "View the available properties for volume types by running:" msgstr "" #: ../blockstorage-driver-filter-weighing.rst:237 msgid "Current QoS specs for the requested volume type" msgstr "" #: ../blockstorage-driver-filter-weighing.rst:245 msgid "" "To access these properties in a custom string, use the following " "format:" msgstr "" #: ../blockstorage-driver-filter-weighing.rst:248 msgid "``<property>.<sub_property>``" msgstr "" #: ../blockstorage-driver-filter-weighing.rst:251 msgid "Driver filter and weigher usage examples" msgstr "" #: ../blockstorage-driver-filter-weighing.rst:253 msgid "" "Below are examples for using the filter and weigher separately, together, " "and using driver-specific properties." msgstr "" #: ../blockstorage-driver-filter-weighing.rst:256 msgid "" "Example ``cinder.conf`` file configuration for customizing the filter " "function:" msgstr "" #: ../blockstorage-driver-filter-weighing.rst:275 msgid "" "The above example will filter volumes to different back ends depending on " "the size of the requested volume. Default OpenStack Block Storage scheduler " "weighing is done. Volumes with a size less than 10 GB are sent to lvm-1 and " "volumes with a size greater than or equal to 10 GB are sent to lvm-2." msgstr "" #: ../blockstorage-driver-filter-weighing.rst:281 msgid "" "Example ``cinder.conf`` file configuration for customizing the goodness " "function:" msgstr "" #: ../blockstorage-driver-filter-weighing.rst:300 msgid "" "The above example will determine the goodness rating of a back end based on " "the requested volume's size. Default OpenStack Block Storage scheduler " "filtering is done. 
The example shows how the ternary if statement can be " "used in a filter or goodness function. If a requested volume is of size " "10 GB, then lvm-1 is rated as 50 and lvm-2 is rated as 100. In this case, " "lvm-2 wins. If a requested volume is of size 3 GB, then lvm-1 is rated 100 " "and lvm-2 is rated 25. In this case, lvm-1 would win." msgstr "" #: ../blockstorage-driver-filter-weighing.rst:308 msgid "" "Example ``cinder.conf`` file configuration for customizing both the filter " "and goodness functions:" msgstr "" #: ../blockstorage-driver-filter-weighing.rst:330 msgid "" "The above example combines the techniques from the first two examples. The " "best back end is now decided based on the total capacity of the back end " "and the requested volume's size." msgstr "" #: ../blockstorage-driver-filter-weighing.rst:334 msgid "" "Example ``cinder.conf`` file configuration for accessing driver-specific " "properties:" msgstr "" #: ../blockstorage-driver-filter-weighing.rst:364 msgid "" "The above is an example of how back-end-specific properties can be used in " "the filter and goodness functions. In this example, the LVM driver's " "``total_volumes`` capability is being used to determine which host gets used " "during a volume request. In the above example, lvm-1 and lvm-2 will handle " "volume requests for all volumes with a size less than 5 GB. The lvm-1 host " "will have priority until it contains three or more volumes. After that, lvm-2 " "will have priority until it contains eight or more volumes. The lvm-3 host will " "collect all volumes greater than or equal to 5 GB as well as all volumes once " "lvm-1 and lvm-2 lose priority." msgstr "" #: ../blockstorage-lio-iscsi-support.rst:3 msgid "Use LIO iSCSI support" msgstr "" #: ../blockstorage-lio-iscsi-support.rst:5 msgid "" "The default mode for the ``iscsi_helper`` tool is ``tgtadm``. To use LIO " "iSCSI, install the ``python-rtslib`` package, and set " "``iscsi_helper=lioadm`` in the ``cinder.conf`` file." 
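The combined filter-and-goodness configuration that these examples describe can be sketched as a ``cinder.conf`` fragment like the one below; the back-end names, ``sample_LVM`` label, and function strings are illustrative, not taken from a live deployment:

```ini
[DEFAULT]
enabled_backends = lvm-1,lvm-2
scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
# Only accept volumes smaller than 10 GB, preferring this host for tiny ones.
filter_function = "volume.size < 10"
goodness_function = "(volume.size < 5) ? 100 : 50"

[lvm-2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
# Accept everything 10 GB and larger, preferring it for medium volumes.
filter_function = "volume.size >= 10"
goodness_function = "(volume.size >= 5) ? 100 : 25"
```

The ``DriverFilter`` and ``GoodnessWeigher`` must be enabled for the per-back-end function strings to take effect.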
msgstr "" #: ../blockstorage-lio-iscsi-support.rst:9 msgid "" "Once configured, you can use the :command:`cinder-rtstool` command to manage " "the volumes. This command enables you to create, delete, and verify volumes, " "determine targets, and add iSCSI initiators to the system." msgstr "" # #-#-#-#-# blockstorage-manage-volumes.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-manage-volumes.pot (Administrator Guide 0.9) #-#-#-#-# #: ../blockstorage-manage-volumes.rst:3 ../compute-manage-volumes.rst:3 msgid "Manage volumes" msgstr "" #: ../blockstorage-manage-volumes.rst:5 msgid "" "The default OpenStack Block Storage service implementation is an iSCSI " "solution that uses :term:`Logical Volume Manager (LVM)` for Linux." msgstr "" #: ../blockstorage-manage-volumes.rst:10 msgid "" "The OpenStack Block Storage service is not a shared storage solution like a " "Network Attached Storage (NAS) or NFS share, where you can attach a volume " "to multiple servers. With the OpenStack Block Storage service, you can " "attach a volume to only one instance at a time." msgstr "" #: ../blockstorage-manage-volumes.rst:16 msgid "" "The OpenStack Block Storage service also provides drivers that enable you to " "use several vendors' back-end storage devices in addition to the base LVM " "implementation. These storage devices can also be used instead of the base " "LVM installation." msgstr "" #: ../blockstorage-manage-volumes.rst:21 msgid "" "This high-level procedure shows you how to create and attach a volume to a " "server instance." msgstr "" #: ../blockstorage-manage-volumes.rst:24 msgid "**To create and attach a volume to an instance**" msgstr "" #: ../blockstorage-manage-volumes.rst:26 msgid "" "Configure the OpenStack Compute and the OpenStack Block Storage services " "through the ``cinder.conf`` file." msgstr "" #: ../blockstorage-manage-volumes.rst:28 msgid "" "Use the :command:`cinder create` command to create a volume. 
This command " "creates a logical volume (LV) in the volume group (VG) ``cinder-volumes``." msgstr "" #: ../blockstorage-manage-volumes.rst:30 msgid "" "Use the :command:`nova volume-attach` command to attach the volume to an " "instance. This command creates a unique :term:`IQN` that is exposed to the " "compute node." msgstr "" #: ../blockstorage-manage-volumes.rst:34 msgid "" "The compute node, which runs the instance, now has an active iSCSI session " "and new local storage (usually a ``/dev/sdX`` disk)." msgstr "" #: ../blockstorage-manage-volumes.rst:37 msgid "" "Libvirt uses that local storage as storage for the instance. The instance " "gets a new disk (usually a ``/dev/vdX`` disk)." msgstr "" #: ../blockstorage-manage-volumes.rst:40 msgid "" "For this particular walkthrough, one cloud controller runs ``nova-api``, " "``nova-scheduler``, ``nova-objectstore``, ``nova-network``, and ``cinder-*`` " "services. Two additional compute nodes run ``nova-compute``. The walkthrough " "uses a custom partitioning scheme that carves out 60 GB of space and labels " "it as LVM. The network uses the ``FlatManager`` and ``NetworkManager`` " "settings for OpenStack Compute." msgstr "" #: ../blockstorage-manage-volumes.rst:48 msgid "" "The network mode does not interfere with OpenStack Block Storage operations, " "but you must set up networking for Block Storage to work. For details, see :" "ref:`networking`." msgstr "" #: ../blockstorage-manage-volumes.rst:52 msgid "" "To set up Compute to use volumes, ensure that Block Storage is installed " "along with ``lvm2``. This guide describes how to troubleshoot your " "installation and back up your Compute volumes." msgstr "" #: ../blockstorage-troubleshoot.rst:3 msgid "Troubleshoot your installation" msgstr "" #: ../blockstorage-troubleshoot.rst:5 msgid "" "This section provides useful tips to help you troubleshoot your Block " "Storage installation." 
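The create-and-attach steps above look roughly like the following session; the volume size, display name, and the instance and volume IDs are placeholders, not output from a real cloud:

```console
$ cinder create --display-name my-volume 10
$ nova volume-attach INSTANCE_ID VOLUME_ID auto
```

Once the attach call succeeds, the compute node gains the iSCSI session and the instance sees the new ``/dev/vdX`` disk.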
msgstr "" # #-#-#-#-# blockstorage.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_set_quotas.pot (Administrator Guide 0.9) #-#-#-#-# #: ../blockstorage.rst:5 ../dashboard_set_quotas.rst:40 #: ../dashboard_set_quotas.rst:66 ../dashboard_set_quotas.rst:72 msgid "Block Storage" msgstr "" #: ../blockstorage.rst:7 msgid "" "The OpenStack Block Storage service works through the interaction of a " "series of daemon processes named ``cinder-*`` that reside persistently on " "the host machine or machines. You can run all the binaries from a single " "node, or spread across multiple nodes. You can also run them on the same " "node as other OpenStack services." msgstr "" #: ../blockstorage.rst:13 msgid "" "To administer the OpenStack Block Storage service, it is helpful to " "understand a number of concepts. You must make certain choices when you " "configure the Block Storage service in OpenStack. The bulk of the options " "comes down to two choices: a single-node or multi-node install. You can read a " "longer discussion about `Storage Decisions`_ in the `OpenStack Operations " "Guide`_." msgstr "" #: ../blockstorage.rst:20 msgid "" "OpenStack Block Storage enables you to add extra block-level storage to your " "OpenStack Compute instances. This service is similar to the Amazon EC2 " "Elastic Block Store (EBS) offering." msgstr "" #: ../blockstorage_backup_disks.rst:3 msgid "Back up Block Storage service disks" msgstr "" #: ../blockstorage_backup_disks.rst:5 msgid "" "While you can use LVM snapshots to create snapshots, you can also use them " "to back up your volumes. By using LVM snapshots, you reduce the size of the " "backup; only existing data is backed up instead of the entire volume." msgstr "" #: ../blockstorage_backup_disks.rst:10 msgid "" "To back up a volume, you must create a snapshot of it. An LVM snapshot is " "an exact copy of a logical volume, which contains data in a frozen state. 
" "This prevents data corruption because data cannot be manipulated during the " "snapshot creation process. Remember that the volumes created through a :" "command:`nova volume-create` command exist in an LVM logical volume." msgstr "" #: ../blockstorage_backup_disks.rst:17 msgid "" "You must also make sure that the operating system is not using the volume " "and that all data has been flushed on the guest file systems. This usually " "means that those file systems have to be unmounted during the snapshot " "creation. They can be mounted again as soon as the logical volume snapshot " "has been created." msgstr "" #: ../blockstorage_backup_disks.rst:23 msgid "" "Before you create the snapshot, you must have enough space to save it. As a " "precaution, you should have at least twice as much space as the potential " "snapshot size. If insufficient space is available, the snapshot might become " "corrupted." msgstr "" #: ../blockstorage_backup_disks.rst:28 msgid "" "For this example, assume that a 100 GB volume named ``volume-00000001`` was " "created for an instance while only 4 GB are used. This example uses these " "commands to back up only those 4 GB:" msgstr "" #: ../blockstorage_backup_disks.rst:32 msgid ":command:`lvm2` command. Directly manipulates the volumes." msgstr "" #: ../blockstorage_backup_disks.rst:34 msgid "" ":command:`kpartx` command. Discovers the partition table created inside the " "instance." msgstr "" #: ../blockstorage_backup_disks.rst:37 msgid ":command:`tar` command. Creates a minimum-sized backup." msgstr "" #: ../blockstorage_backup_disks.rst:39 msgid "" ":command:`sha1sum` command. Calculates the backup checksum to check its " "consistency." msgstr "" #: ../blockstorage_backup_disks.rst:42 msgid "You can apply this process to volumes of any size." 
msgstr "" #: ../blockstorage_backup_disks.rst:44 msgid "**To back up Block Storage service disks**" msgstr "" #: ../blockstorage_backup_disks.rst:46 msgid "Create a snapshot of a used volume" msgstr "" #: ../blockstorage_backup_disks.rst:48 msgid "Use this command to list all volumes:" msgstr "" #: ../blockstorage_backup_disks.rst:54 msgid "" "Create the snapshot; you can do this while the volume is attached to an " "instance:" msgstr "" #: ../blockstorage_backup_disks.rst:62 msgid "" "Use the :option:`--snapshot` configuration option to tell LVM that you want " "a snapshot of an already existing volume. The command includes the size of " "the space reserved for the snapshot volume, the name of the snapshot, and " "the path of an already existing volume. Generally, this path is ``/dev/" "cinder-volumes/VOLUME_NAME``." msgstr "" #: ../blockstorage_backup_disks.rst:68 msgid "" "The snapshot size does not have to be the same as that of the original volume. The :" "option:`--size` parameter defines the space that LVM reserves for the " "snapshot volume. As a precaution, the size should be the same as that of the " "original volume, even if the whole space is not currently used by the " "snapshot." msgstr "" #: ../blockstorage_backup_disks.rst:74 msgid "Run the :command:`lvdisplay` command again to verify the snapshot:" msgstr "" #: ../blockstorage_backup_disks.rst:115 msgid "Partition table discovery" msgstr "" #: ../blockstorage_backup_disks.rst:117 msgid "" "To exploit the snapshot with the :command:`tar` command, mount your " "partition on the Block Storage service server." msgstr "" #: ../blockstorage_backup_disks.rst:120 msgid "" "The :command:`kpartx` utility discovers and maps partition tables. You can " "use it to view partitions that are created inside the instance. Without " "using the partitions created inside instances, you cannot see their contents " "or create efficient backups." 
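The snapshot and partition-discovery steps can be sketched as the following root session; the volume and snapshot names follow the ``volume-00000001`` example, and the 10 GB size is illustrative:

```console
# lvcreate --size 10G --snapshot --name volume-00000001-snapshot \
    /dev/cinder-volumes/volume-00000001
# kpartx -av /dev/mapper/cinder--volumes-volume--00000001--snapshot
```

The ``kpartx -av`` step creates the ``/dev/mapper`` entries for each partition found inside the snapshot, which can then be mounted for backup.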
msgstr "" #: ../blockstorage_backup_disks.rst:131 msgid "" "On a Debian-based distribution, you can use the :command:`apt-get install " "kpartx` command to install :command:`kpartx`." msgstr "" #: ../blockstorage_backup_disks.rst:135 msgid "" "If the tools successfully find and map the partition table, no errors are " "returned." msgstr "" #: ../blockstorage_backup_disks.rst:138 msgid "To check the partition table map, run this command:" msgstr "" #: ../blockstorage_backup_disks.rst:144 msgid "" "You can see the ``cinder--volumes-volume--00000001--snapshot1`` partition." msgstr "" #: ../blockstorage_backup_disks.rst:147 msgid "" "If you created more than one partition on that volume, you see several " "partitions; for example: ``cinder--volumes-volume--00000001--snapshot2``, " "``cinder--volumes-volume--00000001--snapshot3``, and so on." msgstr "" #: ../blockstorage_backup_disks.rst:152 msgid "Mount your partition" msgstr "" #: ../blockstorage_backup_disks.rst:158 msgid "If the partition mounts successfully, no errors are returned." msgstr "" #: ../blockstorage_backup_disks.rst:160 msgid "" "You can directly access the data inside the instance. If a message prompts " "you for a partition or you cannot mount it, determine whether enough space " "was allocated for the snapshot or the :command:`kpartx` command failed to " "discover the partition table." msgstr "" #: ../blockstorage_backup_disks.rst:165 msgid "Allocate more space to the snapshot and try the process again." msgstr "" #: ../blockstorage_backup_disks.rst:167 msgid "Use the :command:`tar` command to create archives" msgstr "" #: ../blockstorage_backup_disks.rst:169 msgid "Create a backup of the volume:" msgstr "" #: ../blockstorage_backup_disks.rst:176 msgid "" "This command creates a ``tar.gz`` file that contains the data, *and data " "only*. This ensures that you do not waste space by backing up empty sectors." 
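The archive and checksum steps can be exercised end to end with ordinary files; this sketch uses a scratch directory in place of the mounted snapshot, so every path here is illustrative:

```shell
# Stand-ins for the mounted snapshot partition and the backup destination.
SRC_DIR=$(mktemp -d)
BACKUP_DIR=$(mktemp -d)
echo "instance data" > "$SRC_DIR/data.txt"

# Archive only the data that exists, as the guide recommends.
tar -czf "$BACKUP_DIR/volume-00000001.tar.gz" -C "$SRC_DIR" .

# Record the checksum so the archive can be verified after transfer.
sha1sum "$BACKUP_DIR/volume-00000001.tar.gz" > "$BACKUP_DIR/volume-00000001.checksum"

# Verification fails loudly if the archive was corrupted in transit.
sha1sum -c "$BACKUP_DIR/volume-00000001.checksum"
```

Re-running the final ``sha1sum -c`` on the receiving host (with the transferred files) is the consistency check the guide describes.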
msgstr "" #: ../blockstorage_backup_disks.rst:180 msgid "Checksum calculation I" msgstr "" #: ../blockstorage_backup_disks.rst:182 msgid "" "You should always have the checksum for your backup files. When you transfer " "the same file over the network, you can run a checksum calculation to ensure " "that your file was not corrupted during its transfer. The checksum is a " "unique ID for a file. If the checksums are different, the file is corrupted." msgstr "" #: ../blockstorage_backup_disks.rst:188 msgid "" "Run this command to calculate the checksum for your file and save the result to a " "file:" msgstr "" #: ../blockstorage_backup_disks.rst:197 msgid "" "Use the :command:`sha1sum` command carefully because the time it takes to " "complete the calculation is directly proportional to the size of the file." msgstr "" #: ../blockstorage_backup_disks.rst:201 msgid "" "Depending on your CPU, the process might take a long time for files larger " "than around 4 to 6 GB." msgstr "" #: ../blockstorage_backup_disks.rst:204 msgid "Post-backup cleanup" msgstr "" #: ../blockstorage_backup_disks.rst:206 msgid "" "Now that you have an efficient and consistent backup, use these commands to " "clean up the file system:" msgstr "" #: ../blockstorage_backup_disks.rst:209 msgid "Unmount the volume." msgstr "" #: ../blockstorage_backup_disks.rst:215 msgid "Delete the partition table." msgstr "" #: ../blockstorage_backup_disks.rst:221 msgid "Remove the snapshot." msgstr "" #: ../blockstorage_backup_disks.rst:227 msgid "Repeat these steps for all your volumes." msgstr "" #: ../blockstorage_backup_disks.rst:229 msgid "Automate your backups" msgstr "" #: ../blockstorage_backup_disks.rst:231 msgid "" "Because more and more volumes might be allocated to your Block Storage " "service, you might want to automate your backups. The `SCR_5005_V01_NUAC-" "OPENSTACK-EBS-volumes-backup.sh`_ script assists you with this task. 
The " "script performs the operations from the previous example, but also provides " "a mail report and runs the backup based on the ``backups_retention_days`` " "setting." msgstr "" #: ../blockstorage_backup_disks.rst:238 msgid "Launch this script from the server that runs the Block Storage service." msgstr "" #: ../blockstorage_backup_disks.rst:240 msgid "This example shows a mail report:" msgstr "" #: ../blockstorage_backup_disks.rst:258 msgid "" "The script also enables you to SSH to your instances and run a :command:" "`mysqldump` command in them. To make this work, enable the connection to " "the Compute project keys. If you do not want to run the :command:`mysqldump` " "command, you can add ``enable_mysql_dump=0`` to the script to turn off this " "functionality." msgstr "" #: ../blockstorage_get_capabilities.rst:6 msgid "Get capabilities" msgstr "" #: ../blockstorage_get_capabilities.rst:8 msgid "" "When an administrator configures ``volume type`` and ``extra specs`` of " "storage on the back end, the administrator has to read the right " "documentation that corresponds to the version of the storage back end. Deep " "knowledge of storage is also required." msgstr "" #: ../blockstorage_get_capabilities.rst:13 msgid "" "OpenStack Block Storage enables administrators to configure ``volume type`` " "and ``extra specs`` without specific knowledge of the storage back end." msgstr "" #: ../blockstorage_get_capabilities.rst:18 msgid "``Volume Type``: A group of volume policies." msgstr "" #: ../blockstorage_get_capabilities.rst:19 msgid "" "``Extra Specs``: The definition of a volume type. This is a group of " "policies. For example, provisioning type and QoS, which are used to define a " "volume at creation time." msgstr "" #: ../blockstorage_get_capabilities.rst:22 msgid "" "``Capabilities``: What the currently deployed back end in Cinder is able to " "do. These correspond to extra specs." 
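A deployed back end's ``capabilities`` can be inspected with the cinder client, as sketched below; the host name ``block1@lvmdriver-1`` is a placeholder for one of the hosts reported by ``service-list``:

```console
$ cinder service-list
$ cinder get-capabilities block1@lvmdriver-1
```

The second command returns both the volume stats and the back-end ``capabilities`` that can then be mapped to extra specs.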
msgstr "" #: ../blockstorage_get_capabilities.rst:26 msgid "Usage of cinder client" msgstr "" #: ../blockstorage_get_capabilities.rst:28 msgid "" "When an administrator wants to define new volume types for their OpenStack " "cloud, the administrator can fetch a list of ``capabilities`` for a " "particular back end using the cinder client." msgstr "" #: ../blockstorage_get_capabilities.rst:32 msgid "First, get a list of the services:" msgstr "" #: ../blockstorage_get_capabilities.rst:44 msgid "" "Pass one of the listed hosts to ``get-capabilities``; the " "administrator can then obtain the volume stats and back-end ``capabilities``, as " "listed below." msgstr "" #: ../blockstorage_get_capabilities.rst:78 msgid "Usage of REST API" msgstr "" #: ../blockstorage_get_capabilities.rst:80 msgid "" "A new endpoint to get the ``capabilities`` list for a specific storage back end is " "also available. For more details, refer to the Block Storage API reference." msgstr "" #: ../blockstorage_get_capabilities.rst:83 msgid "API request:" msgstr "" #: ../blockstorage_get_capabilities.rst:89 msgid "Example of return value:" msgstr "" #: ../blockstorage_get_capabilities.rst:150 msgid "Usage of volume type access extension" msgstr "" #: ../blockstorage_get_capabilities.rst:151 msgid "" "Some volume types should be restricted. For example, test volume types " "for evaluating a new technology, or ultra-high-performance volumes " "(for special cases) that you do not want most users to be able to select. " "An administrator/operator can then define private volume " "types using the cinder client. The volume type access extension adds the ability to " "manage volume type access. Volume types are public by default. Private " "volume types can be created by setting the ``is_public`` Boolean field to " "``False`` at creation time. Access to a private volume type can be " "controlled by adding or removing a project from it. 
Private volume types " "without projects are visible only to users with the admin role/context." msgstr "" #: ../blockstorage_get_capabilities.rst:163 msgid "Create a public volume type by setting the ``is_public`` field to ``True``:" msgstr "" #: ../blockstorage_get_capabilities.rst:174 msgid "" "Create a private volume type by setting the ``is_public`` field to ``False``:" msgstr "" #: ../blockstorage_get_capabilities.rst:185 msgid "Get a list of the volume types:" msgstr "" #: ../blockstorage_get_capabilities.rst:198 msgid "Get a list of the projects:" msgstr "" #: ../blockstorage_get_capabilities.rst:213 msgid "" "Add volume type access for the given demo project, using its project-id:" msgstr "" #: ../blockstorage_get_capabilities.rst:219 msgid "List the access information about the given volume type:" msgstr "" #: ../blockstorage_get_capabilities.rst:230 msgid "Remove volume type access for the given project:" msgstr "" #: ../blockstorage_glusterfs_backend.rst:3 msgid "Configure a GlusterFS back end" msgstr "" #: ../blockstorage_glusterfs_backend.rst:5 msgid "" "This section explains how to configure OpenStack Block Storage to use " "GlusterFS as a back end. You must be able to access the GlusterFS shares " "from the server that hosts the ``cinder`` volume service." 
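The private volume type workflow just described can be sketched with the cinder client as follows; the type name ``test_type`` and the project ID are placeholders:

```console
$ cinder type-create --is-public False test_type
$ cinder type-access-add --volume-type test_type --project-id PROJECT_ID
$ cinder type-access-list --volume-type test_type
$ cinder type-access-remove --volume-type test_type --project-id PROJECT_ID
```

After ``type-access-remove``, users in the project can no longer select ``test_type`` when creating volumes.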
msgstr "" #: ../blockstorage_glusterfs_backend.rst:11 msgid "" "The cinder volume service is named ``openstack-cinder-volume`` on the " "following distributions:" msgstr "" # #-#-#-#-# blockstorage_glusterfs_backend.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# blockstorage_nfs_backend.pot (Administrator Guide 0.9) #-#-#-#-# #: ../blockstorage_glusterfs_backend.rst:14 #: ../blockstorage_glusterfs_backend.rst:154 #: ../blockstorage_nfs_backend.rst:14 ../blockstorage_nfs_backend.rst:82 msgid "CentOS" msgstr "" # #-#-#-#-# blockstorage_glusterfs_backend.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# blockstorage_nfs_backend.pot (Administrator Guide 0.9) #-#-#-#-# #: ../blockstorage_glusterfs_backend.rst:16 #: ../blockstorage_glusterfs_backend.rst:156 #: ../blockstorage_nfs_backend.rst:16 ../blockstorage_nfs_backend.rst:84 msgid "Fedora" msgstr "" # #-#-#-#-# blockstorage_glusterfs_backend.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# blockstorage_nfs_backend.pot (Administrator Guide 0.9) #-#-#-#-# #: ../blockstorage_glusterfs_backend.rst:18 #: ../blockstorage_glusterfs_backend.rst:158 #: ../blockstorage_nfs_backend.rst:18 ../blockstorage_nfs_backend.rst:86 msgid "openSUSE" msgstr "" # #-#-#-#-# blockstorage_glusterfs_backend.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# blockstorage_nfs_backend.pot (Administrator Guide 0.9) #-#-#-#-# #: ../blockstorage_glusterfs_backend.rst:20 #: ../blockstorage_glusterfs_backend.rst:160 #: ../blockstorage_nfs_backend.rst:20 ../blockstorage_nfs_backend.rst:88 msgid "Red Hat Enterprise Linux" msgstr "" # #-#-#-#-# blockstorage_glusterfs_backend.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# blockstorage_nfs_backend.pot (Administrator Guide 0.9) #-#-#-#-# #: ../blockstorage_glusterfs_backend.rst:22 #: ../blockstorage_glusterfs_backend.rst:162 #: ../blockstorage_nfs_backend.rst:22 ../blockstorage_nfs_backend.rst:90 msgid "SUSE Linux Enterprise" msgstr "" # #-#-#-#-# blockstorage_glusterfs_backend.pot 
(Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# blockstorage_nfs_backend.pot (Administrator Guide 0.9) #-#-#-#-# #: ../blockstorage_glusterfs_backend.rst:24 ../blockstorage_nfs_backend.rst:24 msgid "" "In Ubuntu and Debian distributions, the ``cinder`` volume service is named " "``cinder-volume``." msgstr "" #: ../blockstorage_glusterfs_backend.rst:27 msgid "" "Mounting GlusterFS volumes requires utilities and libraries from the " "``glusterfs-fuse`` package. This package must be installed on all systems " "that will access volumes backed by GlusterFS." msgstr "" #: ../blockstorage_glusterfs_backend.rst:33 msgid "" "The utilities and libraries required for mounting GlusterFS volumes on " "Ubuntu and Debian distributions are available from the ``glusterfs-client`` " "package instead." msgstr "" #: ../blockstorage_glusterfs_backend.rst:37 msgid "" "For information on how to install and configure GlusterFS, refer to the " "`GlusterDocumentation`_ page." msgstr "" #: ../blockstorage_glusterfs_backend.rst:40 msgid "**Configure GlusterFS for OpenStack Block Storage**" msgstr "" #: ../blockstorage_glusterfs_backend.rst:42 msgid "" "The GlusterFS server must also be configured accordingly in order to allow " "OpenStack Block Storage to use GlusterFS shares:" msgstr "" #: ../blockstorage_glusterfs_backend.rst:45 msgid "Log in as ``root`` to the GlusterFS server." 
msgstr "" #: ../blockstorage_glusterfs_backend.rst:47 msgid "" "Set each Gluster volume to use the same UID and GID as the ``cinder`` user:" msgstr "" # #-#-#-#-# blockstorage_glusterfs_backend.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# blockstorage_nfs_backend.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# identity_service_api_protection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# keystone_secure_identity_to_ldap_backend.pot (Administrator Guide 0.9) #-#-#-#-# #: ../blockstorage_glusterfs_backend.rst:55 #: ../blockstorage_glusterfs_backend.rst:107 #: ../blockstorage_nfs_backend.rst:44 ../compute-flavors.rst:334 #: ../compute-flavors.rst:355 ../identity_service_api_protection.rst:19 #: ../keystone_secure_identity_to_ldap_backend.rst:64 msgid "Where:" msgstr "" #: ../blockstorage_glusterfs_backend.rst:57 msgid "VOL_NAME is the Gluster volume name." msgstr "" #: ../blockstorage_glusterfs_backend.rst:59 msgid "CINDER_UID is the UID of the ``cinder`` user." msgstr "" #: ../blockstorage_glusterfs_backend.rst:61 msgid "CINDER_GID is the GID of the ``cinder`` user." msgstr "" #: ../blockstorage_glusterfs_backend.rst:65 msgid "" "The default UID and GID of the ``cinder`` user is 165 on most distributions." msgstr "" #: ../blockstorage_glusterfs_backend.rst:68 msgid "" "Configure each Gluster volume to accept ``libgfapi`` connections. To do " "this, set each Gluster volume to allow insecure ports:" msgstr "" #: ../blockstorage_glusterfs_backend.rst:75 msgid "" "Enable client connections from unprivileged ports. 
To do this, add the " "following line to ``/etc/glusterfs/glusterd.vol``:" msgstr "" #: ../blockstorage_glusterfs_backend.rst:82 msgid "Restart the ``glusterd`` service:" msgstr "" #: ../blockstorage_glusterfs_backend.rst:89 msgid "**Configure Block Storage to use a GlusterFS back end**" msgstr "" #: ../blockstorage_glusterfs_backend.rst:91 msgid "After you configure the GlusterFS service, complete these steps:" msgstr "" #: ../blockstorage_glusterfs_backend.rst:93 msgid "Log in as ``root`` to the system hosting the Block Storage service." msgstr "" #: ../blockstorage_glusterfs_backend.rst:95 msgid "Create a text file named ``glusterfs`` in the ``/etc/cinder/`` directory." msgstr "" #: ../blockstorage_glusterfs_backend.rst:97 msgid "" "Add an entry to ``/etc/cinder/glusterfs`` for each GlusterFS share that " "OpenStack Block Storage should use for back end storage. Each entry should " "be a separate line, and should use the following format:" msgstr "" #: ../blockstorage_glusterfs_backend.rst:109 msgid "HOST is the IP address or host name of the GlusterFS server." msgstr "" #: ../blockstorage_glusterfs_backend.rst:111 msgid "" "VOL_NAME is the name of an existing and accessible volume on the GlusterFS " "server." msgstr "" #: ../blockstorage_glusterfs_backend.rst:116 msgid "" "Optionally, if your environment requires additional mount options for a " "share, you can add them to the share's entry:" msgstr "" #: ../blockstorage_glusterfs_backend.rst:123 msgid "Replace OPTIONS with a comma-separated list of mount options." 
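Following the entry format above, an ``/etc/cinder/glusterfs`` shares file might look like this; the addresses, volume names, and mount option are examples only:

```
192.0.2.10:/glustervol1
192.0.2.11:/glustervol2 -o backupvolfile-server=192.0.2.12
```

Each line names one share, optionally followed by ``-o`` and a comma-separated list of mount options for that share.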
msgstr "" #: ../blockstorage_glusterfs_backend.rst:125 msgid "" "Set ``/etc/cinder/glusterfs`` to be owned by the ``root`` user and the " "``cinder`` group:" msgstr "" #: ../blockstorage_glusterfs_backend.rst:132 msgid "" "Set ``/etc/cinder/glusterfs`` to be readable by members of the ``cinder`` " "group:" msgstr "" #: ../blockstorage_glusterfs_backend.rst:139 msgid "" "Configure OpenStack Block Storage to use the ``/etc/cinder/glusterfs`` file " "created earlier. To do so, open the ``/etc/cinder/cinder.conf`` " "configuration file and set the ``glusterfs_shares_config`` configuration key " "to ``/etc/cinder/glusterfs``." msgstr "" #: ../blockstorage_glusterfs_backend.rst:144 msgid "" "On distributions that include ``openstack-config``, you can configure this by " "running the following command instead:" msgstr "" #: ../blockstorage_glusterfs_backend.rst:152 msgid "The following distributions include ``openstack-config``:" msgstr "" #: ../blockstorage_glusterfs_backend.rst:166 msgid "" "Configure OpenStack Block Storage to use the correct volume driver, namely " "``cinder.volume.drivers.glusterfs.GlusterfsDriver``. To do so, open the ``/" "etc/cinder/cinder.conf`` configuration file and set the ``volume_driver`` " "configuration key to ``cinder.volume.drivers.glusterfs.GlusterfsDriver``." msgstr "" # #-#-#-#-# blockstorage_glusterfs_backend.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# blockstorage_nfs_backend.pot (Administrator Guide 0.9) #-#-#-#-# #: ../blockstorage_glusterfs_backend.rst:180 #: ../blockstorage_nfs_backend.rst:125 msgid "You can now restart the service to apply the configuration." msgstr "" #: ../blockstorage_glusterfs_backend.rst:183 msgid "OpenStack Block Storage is now configured to use a GlusterFS back end." msgstr "" #: ../blockstorage_glusterfs_backend.rst:187 msgid "" "If a client host has SELinux enabled, the ``virt_use_fusefs`` boolean should " "also be enabled if the host requires access to GlusterFS volumes on an " "instance. 
To enable this boolean, run the following command as the ``root`` " "user:" msgstr "" #: ../blockstorage_glusterfs_backend.rst:196 msgid "" "This command also makes the boolean persistent across reboots. Run this " "command on all client hosts that require access to GlusterFS volumes on an " "instance. This includes all compute nodes." msgstr "" #: ../blockstorage_glusterfs_removal.rst:5 msgid "Gracefully remove a GlusterFS volume from usage" msgstr "" #: ../blockstorage_glusterfs_removal.rst:7 msgid "" "Configuring the ``cinder`` volume service to use GlusterFS involves creating " "a shares file (for example, ``/etc/cinder/glusterfs``). This shares file " "lists each GlusterFS volume (with its corresponding storage server) that the " "``cinder`` volume service can use for back end storage." msgstr "" #: ../blockstorage_glusterfs_removal.rst:12 msgid "" "To remove a GlusterFS volume from usage as a back end, delete the volume's " "corresponding entry from the shares file. After doing so, restart the Block " "Storage services." msgstr "" #: ../blockstorage_glusterfs_removal.rst:16 msgid "" "Restarting the Block Storage services will prevent the ``cinder`` volume " "service from exporting the deleted GlusterFS volume. This will prevent any " "instances from mounting the volume from that point onwards." msgstr "" #: ../blockstorage_glusterfs_removal.rst:20 msgid "" "However, the removed GlusterFS volume might still be mounted on an instance " "at this point. Typically, this is the case when the volume was already " "mounted while its entry was deleted from the shares file. Whenever this " "occurs, you will have to unmount the volume as normal after the Block " "Storage services are restarted." 
msgstr "" #: ../blockstorage_image_volume_cache.rst:6 msgid "Image-Volume cache" msgstr "" #: ../blockstorage_image_volume_cache.rst:8 msgid "" "OpenStack Block Storage has an optional Image cache which can dramatically " "improve the performance of creating a volume from an image. The improvement " "depends on many factors, primarily how quickly the configured back end can " "clone a volume." msgstr "" #: ../blockstorage_image_volume_cache.rst:13 msgid "" "When a volume is first created from an image, a new cached image-volume will " "be created that is owned by the Block Storage Internal Tenant. Subsequent " "requests to create volumes from that image will clone the cached version " "instead of downloading the image contents and copying data to the volume." msgstr "" #: ../blockstorage_image_volume_cache.rst:18 msgid "" "The cache itself is configurable per back end and will contain the most " "recently used images." msgstr "" #: ../blockstorage_image_volume_cache.rst:22 msgid "Configure the Internal Tenant" msgstr "" #: ../blockstorage_image_volume_cache.rst:24 msgid "" "The Image-Volume cache requires that the Internal Tenant be configured for " "the Block Storage services. This tenant will own the cached image-volumes so " "that they can be managed like normal user volumes, including with tools such " "as volume quotas. This protects normal users from having to see the cached " "image-volumes, but does not make them globally hidden." msgstr "" #: ../blockstorage_image_volume_cache.rst:30 msgid "" "To enable the Block Storage services to have access to an Internal Tenant, " "set the following options in the ``cinder.conf`` file:" msgstr "" #: ../blockstorage_image_volume_cache.rst:47 msgid "" "The actual user and project that are configured for the Internal Tenant do " "not require any special privileges. They can be the Block Storage service " "tenant or can be any normal project and user." 
msgstr "" #: ../blockstorage_image_volume_cache.rst:52 msgid "Configure the Image-Volume cache" msgstr "" #: ../blockstorage_image_volume_cache.rst:54 msgid "" "To enable the Image-Volume cache, set the following configuration option in " "``cinder.conf``:" msgstr "" #: ../blockstorage_image_volume_cache.rst:61 msgid "This can be scoped per back end definition or in the default options." msgstr "" #: ../blockstorage_image_volume_cache.rst:63 msgid "" "There are optional configuration settings that can limit the size of the " "cache. These can also be scoped per back end or in the default options in " "``cinder.conf``:" msgstr "" #: ../blockstorage_image_volume_cache.rst:72 msgid "By default, they are set to 0, which means unlimited." msgstr "" #: ../blockstorage_image_volume_cache.rst:74 msgid "" "For example, a configuration that limits the maximum size to 200 GB and 50 " "cache entries would be configured as:" msgstr "" # #-#-#-#-# blockstorage_image_volume_cache.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_adv-operational-features.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# #: ../blockstorage_image_volume_cache.rst:83 #: ../networking_adv-operational-features.rst:42 #: ../telemetry-data-collection.rst:24 ../telemetry-data-collection.rst:35 msgid "Notifications" msgstr "" #: ../blockstorage_image_volume_cache.rst:85 msgid "" "Cache actions will trigger Telemetry messages. There are several that will " "be sent." msgstr "" #: ../blockstorage_image_volume_cache.rst:88 msgid "" "``image_volume_cache.miss`` - A volume is being created from an image which " "was not found in the cache. Typically, this means a new cache entry will " "be created for it." msgstr "" #: ../blockstorage_image_volume_cache.rst:92 msgid "" "``image_volume_cache.hit`` - A volume is being created from an image which " "was found in the cache and the fast path can be taken." 
msgstr "" #: ../blockstorage_image_volume_cache.rst:95 msgid "" "``image_volume_cache.evict`` - A cached image-volume has been deleted from " "the cache." msgstr "" #: ../blockstorage_image_volume_cache.rst:100 msgid "Managing cached Image-Volumes" msgstr "" #: ../blockstorage_image_volume_cache.rst:102 msgid "" "In normal usage there should be no need for manual intervention with the " "cache. The entries and their backing Image-Volumes are managed automatically." msgstr "" #: ../blockstorage_image_volume_cache.rst:105 msgid "" "If needed, you can delete these volumes manually to clear the cache. If " "you use the standard volume deletion APIs, the Block Storage service will " "clean up correctly." msgstr "" # #-#-#-#-# blockstorage_multi_backend.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# blockstorage_volume_number_weigher.pot (Administrator Guide 0.9) #-#-#-#-# #: ../blockstorage_multi_backend.rst:5 #: ../blockstorage_volume_number_weigher.rst:22 msgid "Configure multiple-storage back ends" msgstr "" #: ../blockstorage_multi_backend.rst:7 msgid "" "When you configure multiple-storage back ends, you can create several back-" "end storage solutions that serve the same OpenStack Compute configuration, " "and one ``cinder-volume`` service is launched for each back-end storage or " "back-end storage pool." msgstr "" #: ../blockstorage_multi_backend.rst:12 msgid "" "In a multiple-storage back-end configuration, each back end has a name " "(``volume_backend_name``). Several back ends can have the same name. In that " "case, the scheduler properly decides which back end the volume has to be " "created in." msgstr "" #: ../blockstorage_multi_backend.rst:17 msgid "" "The name of the back end is declared as an extra-specification of a volume " "type (such as ``volume_backend_name=LVM``). When a volume is created, the " "scheduler chooses an appropriate back end to handle the request, according " "to the volume type specified by the user." 
msgstr "" #: ../blockstorage_multi_backend.rst:23 msgid "Enable multiple-storage back ends" msgstr "" #: ../blockstorage_multi_backend.rst:25 msgid "" "To enable multiple-storage back ends, you must set the ``enabled_backends`` " "flag in the ``cinder.conf`` file. This flag defines the names (separated by " "a comma) of the configuration groups for the different back ends: one name " "is associated with one configuration group for a back end (such as " "``[lvmdriver-1]``)." msgstr "" #: ../blockstorage_multi_backend.rst:33 msgid "" "The configuration group name is not related to the ``volume_backend_name``." msgstr "" #: ../blockstorage_multi_backend.rst:37 msgid "" "After setting the ``enabled_backends`` flag on an existing cinder service, " "and restarting the Block Storage services, the original ``host`` service is " "replaced with a new host service. The new service appears with a name like " "``host@backend``. Use:" msgstr "" #: ../blockstorage_multi_backend.rst:46 msgid "to convert current block devices to the new host name." msgstr "" #: ../blockstorage_multi_backend.rst:48 msgid "" "The options for a configuration group must be defined in the group (or " "default options are used). All the standard Block Storage configuration " "options (``volume_group``, ``volume_driver``, and so on) can be used in a " "configuration group. Configuration values in the ``[DEFAULT]`` configuration " "group are not used." msgstr "" #: ../blockstorage_multi_backend.rst:54 msgid "These examples show three back ends:" msgstr "" #: ../blockstorage_multi_backend.rst:72 msgid "" "In this configuration, ``lvmdriver-1`` and ``lvmdriver-2`` have the same " "``volume_backend_name``. If a volume creation request specifies the ``LVM`` back end " "name, the scheduler uses the capacity filter scheduler to choose the most " "suitable driver, which is either ``lvmdriver-1`` or ``lvmdriver-2``. The " "capacity filter scheduler is enabled by default. The next section provides " "more information. 
In addition, this example presents an ``lvmdriver-3`` back " "end." msgstr "" #: ../blockstorage_multi_backend.rst:82 msgid "" "For Fibre Channel drivers that support multipath, the configuration group " "requires the ``use_multipath_for_image_xfer=true`` option. In the example " "below, you can see details for HPE 3PAR and EMC Fibre Channel drivers." msgstr "" #: ../blockstorage_multi_backend.rst:100 msgid "Configure Block Storage scheduler multi back end" msgstr "" #: ../blockstorage_multi_backend.rst:102 msgid "" "You must enable the ``filter_scheduler`` option to use multiple-storage back " "ends. The filter scheduler:" msgstr "" #: ../blockstorage_multi_backend.rst:105 msgid "" "Filters the available back ends. By default, ``AvailabilityZoneFilter``, " "``CapacityFilter``, and ``CapabilitiesFilter`` are enabled." msgstr "" #: ../blockstorage_multi_backend.rst:108 msgid "" "Weights the previously filtered back ends. By default, the ``CapacityWeigher`` " "option is enabled. When this option is enabled, the filter scheduler assigns " "the highest weight to back ends with the most available capacity." msgstr "" #: ../blockstorage_multi_backend.rst:113 msgid "" "The scheduler uses filters and weights to pick the best back end to handle " "the request. The scheduler uses volume types to explicitly create volumes on " "specific back ends. For more information about filtering and weighing, see :ref:" "`filter_weigh_scheduler`." msgstr "" # #-#-#-#-# blockstorage_multi_backend.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# blockstorage_volume_number_weigher.pot (Administrator Guide 0.9) #-#-#-#-# #: ../blockstorage_multi_backend.rst:120 #: ../blockstorage_volume_number_weigher.rst:46 msgid "Volume type" msgstr "" #: ../blockstorage_multi_backend.rst:122 msgid "" "Before it can be used, a volume type has to be declared to Block Storage. 
This can " "be done with the following command:" msgstr "" #: ../blockstorage_multi_backend.rst:129 msgid "" "Then, an extra-specification has to be created to link the volume type to a " "back end name. Run this command:" msgstr "" #: ../blockstorage_multi_backend.rst:137 msgid "" "This example creates an ``lvm`` volume type with " "``volume_backend_name=LVM_iSCSI`` as extra-specifications." msgstr "" #: ../blockstorage_multi_backend.rst:140 msgid "Create another volume type:" msgstr "" #: ../blockstorage_multi_backend.rst:149 msgid "" "This second volume type is named ``lvm_gold`` and has ``LVM_iSCSI_b`` as its " "back end name." msgstr "" #: ../blockstorage_multi_backend.rst:154 msgid "To list the extra-specifications, use this command:" msgstr "" #: ../blockstorage_multi_backend.rst:162 msgid "" "If a volume type points to a ``volume_backend_name`` that does not exist in " "the Block Storage configuration, the ``filter_scheduler`` returns an error " "that it cannot find a valid host with a suitable back end." msgstr "" # #-#-#-#-# blockstorage_multi_backend.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# blockstorage_volume_number_weigher.pot (Administrator Guide 0.9) #-#-#-#-# #: ../blockstorage_multi_backend.rst:168 #: ../blockstorage_volume_number_weigher.rst:64 msgid "Usage" msgstr "" #: ../blockstorage_multi_backend.rst:170 msgid "" "When you create a volume, you must specify the volume type. The extra-" "specifications of the volume type are used to determine which back end has " "to be used." msgstr "" #: ../blockstorage_multi_backend.rst:178 msgid "" "Considering the ``cinder.conf`` described previously, the scheduler creates " "this volume on ``lvmdriver-1`` or ``lvmdriver-2``." msgstr "" #: ../blockstorage_multi_backend.rst:185 msgid "This second volume is created on ``lvmdriver-3``." 
msgstr "" #: ../blockstorage_nfs_backend.rst:3 msgid "Configure an NFS storage back end" msgstr "" #: ../blockstorage_nfs_backend.rst:5 msgid "" "This section explains how to configure OpenStack Block Storage to use NFS " "storage. You must be able to access the NFS shares from the server that " "hosts the ``cinder`` volume service." msgstr "" #: ../blockstorage_nfs_backend.rst:11 msgid "" "The ``cinder`` volume service is named ``openstack-cinder-volume`` on the " "following distributions:" msgstr "" #: ../blockstorage_nfs_backend.rst:27 msgid "**Configure Block Storage to use an NFS storage back end**" msgstr "" #: ../blockstorage_nfs_backend.rst:29 msgid "Log in as ``root`` to the system hosting the ``cinder`` volume service." msgstr "" #: ../blockstorage_nfs_backend.rst:32 msgid "" "Create a text file named ``nfsshares`` in the ``/etc/cinder/`` directory." msgstr "" #: ../blockstorage_nfs_backend.rst:34 msgid "" "Add an entry to ``/etc/cinder/nfsshares`` for each NFS share that the " "``cinder`` volume service should use for back end storage. Each entry should " "be a separate line, and should use the following format:" msgstr "" #: ../blockstorage_nfs_backend.rst:46 msgid "HOST is the IP address or host name of the NFS server." msgstr "" #: ../blockstorage_nfs_backend.rst:48 msgid "SHARE is the absolute path to an existing and accessible NFS share." msgstr "" #: ../blockstorage_nfs_backend.rst:52 msgid "" "Set ``/etc/cinder/nfsshares`` to be owned by the ``root`` user and the " "``cinder`` group:" msgstr "" #: ../blockstorage_nfs_backend.rst:59 msgid "" "Set ``/etc/cinder/nfsshares`` to be readable by members of the ``cinder`` group:" msgstr "" #: ../blockstorage_nfs_backend.rst:66 msgid "" "Configure the ``cinder`` volume service to use the ``/etc/cinder/nfsshares`` " "file created earlier. To do so, open the ``/etc/cinder/cinder.conf`` " "configuration file and set the ``nfs_shares_config`` configuration key to ``/" "etc/cinder/nfsshares``." 
msgstr "" #: ../blockstorage_nfs_backend.rst:80 msgid "The following distributions include ``openstack-config``:" msgstr "" #: ../blockstorage_nfs_backend.rst:93 msgid "" "Optionally, provide any additional NFS mount options required in your " "environment in the ``nfs_mount_options`` configuration key of ``/etc/cinder/" "cinder.conf``. If your NFS shares do not require any additional mount " "options (or if you are unsure), skip this step." msgstr "" #: ../blockstorage_nfs_backend.rst:107 msgid "" "Replace OPTIONS with the mount options to be used when accessing NFS shares. " "See the manual page for NFS for more information on available mount options " "(:command:`man nfs`)." msgstr "" #: ../blockstorage_nfs_backend.rst:111 msgid "" "Configure the ``cinder`` volume service to use the correct volume driver, " "namely ``cinder.volume.drivers.nfs.NfsDriver``. To do so, open the ``/etc/" "cinder/cinder.conf`` configuration file and set the ``volume_driver`` " "configuration key to ``cinder.volume.drivers.nfs.NfsDriver``." msgstr "" #: ../blockstorage_nfs_backend.rst:129 msgid "" "The ``nfs_sparsed_volumes`` configuration key determines whether volumes are " "created as sparse files and grown as needed or fully allocated up front. The " "default and recommended value is ``true``, which ensures volumes are " "initially created as sparse files." msgstr "" #: ../blockstorage_nfs_backend.rst:134 msgid "" "Setting ``nfs_sparsed_volumes`` to ``false`` will result in volumes being " "fully allocated at the time of creation. This leads to increased delays in " "volume creation." msgstr "" #: ../blockstorage_nfs_backend.rst:138 msgid "" "However, should you choose to set ``nfs_sparsed_volumes`` to ``false``, you " "can do so directly in ``/etc/cinder/cinder.conf``." msgstr "" #: ../blockstorage_nfs_backend.rst:151 msgid "" "If a client host has SELinux enabled, the ``virt_use_nfs`` boolean should " "also be enabled if the host requires access to NFS volumes on an instance. 
" "To enable this boolean, run the following command as the ``root`` user:" msgstr "" #: ../blockstorage_nfs_backend.rst:160 msgid "" "This command also makes the boolean persistent across reboots. Run this " "command on all client hosts that require access to NFS volumes on an " "instance. This includes all compute nodes." msgstr "" #: ../blockstorage_over_subscription.rst:5 msgid "Oversubscription in thin provisioning" msgstr "" #: ../blockstorage_over_subscription.rst:7 msgid "" "OpenStack Block Storage enables you to choose a volume back end based on " "virtual capacities for thin provisioning using the oversubscription ratio." msgstr "" #: ../blockstorage_over_subscription.rst:10 msgid "" "A reference implementation is provided for the default LVM driver. The " "illustration below uses the LVM driver as an example." msgstr "" #: ../blockstorage_over_subscription.rst:14 msgid "Configure oversubscription settings" msgstr "" #: ../blockstorage_over_subscription.rst:16 msgid "" "To support oversubscription in thin provisioning, a flag " "``max_over_subscription_ratio`` is introduced into ``cinder.conf``. This is " "a float representation of the oversubscription ratio when thin provisioning " "is involved. The default ratio is 20.0, meaning provisioned capacity can be " "20 times the total physical capacity. A ratio of 10.5 means provisioned " "capacity can be 10.5 times the total physical capacity. A ratio of 1.0 " "means provisioned capacity cannot exceed the total physical capacity. A " "ratio lower than 1.0 is ignored and the default value is used instead." msgstr "" #: ../blockstorage_over_subscription.rst:28 msgid "" "``max_over_subscription_ratio`` can be configured for each back end when " "multiple-storage back ends are enabled. It is provided as a reference " "implementation and is used by the LVM driver. However, it is not a " "requirement for a driver to use this option from ``cinder.conf``." 
msgstr "" #: ../blockstorage_over_subscription.rst:33 msgid "" "``max_over_subscription_ratio`` is for configuring a back end. For a driver " "that supports multiple pools per back end, it can report this ratio for each " "pool. The LVM driver does not support multiple pools." msgstr "" #: ../blockstorage_over_subscription.rst:37 msgid "" "The existing ``reserved_percentage`` flag is used to prevent " "overprovisioning. This flag represents the percentage of the back-end capacity " "that is reserved." msgstr "" #: ../blockstorage_over_subscription.rst:42 msgid "" "There is a change in how ``reserved_percentage`` is used. In the past, it was " "measured against the free capacity. Now, it is measured against the total " "capacity." msgstr "" #: ../blockstorage_over_subscription.rst:47 msgid "Capabilities" msgstr "" #: ../blockstorage_over_subscription.rst:49 msgid "Drivers can report the following capabilities for a back end or a pool:" msgstr "" #: ../blockstorage_over_subscription.rst:58 msgid "" "Where ``PROVISIONED_CAPACITY`` is the apparent allocated space indicating " "how much capacity has been provisioned and ``MAX_RATIO`` is the maximum " "oversubscription ratio. For the LVM driver, it is " "``max_over_subscription_ratio`` in ``cinder.conf``." msgstr "" #: ../blockstorage_over_subscription.rst:63 msgid "" "Two capabilities are added here to allow a back end or pool to claim support " "for thin provisioning, thick provisioning, or both." msgstr "" #: ../blockstorage_over_subscription.rst:66 msgid "" "The LVM driver reports ``thin_provisioning_support=True`` and " "``thick_provisioning_support=False`` if the ``lvm_type`` flag in ``cinder." "conf`` is ``thin``. Otherwise, it reports ``thin_provisioning_support=False`` " "and ``thick_provisioning_support=True``." 
msgstr "" #: ../blockstorage_over_subscription.rst:72 msgid "Volume type extra specs" msgstr "" #: ../blockstorage_over_subscription.rst:74 msgid "" "If a volume type is provided as part of the volume creation request, it can " "have the following extra specs defined:" msgstr "" #: ../blockstorage_over_subscription.rst:84 msgid "" "The ``capabilities`` scope key before ``thin_provisioning_support`` and " "``thick_provisioning_support`` is not required. So the following works too:" msgstr "" #: ../blockstorage_over_subscription.rst:92 msgid "" "The above extra specs are used by the scheduler to find a back end that " "supports thin provisioning, thick provisioning, or both to match the needs " "of a specific volume type." msgstr "" #: ../blockstorage_over_subscription.rst:97 msgid "Volume replication extra specs" msgstr "" #: ../blockstorage_over_subscription.rst:99 msgid "" "OpenStack Block Storage has the ability to create volume replicas. " "Administrators can define a storage policy that includes replication by " "adjusting the cinder volume driver. Volume replication for OpenStack Block " "Storage helps safeguard OpenStack environments from data loss during " "disaster recovery." msgstr "" #: ../blockstorage_over_subscription.rst:105 msgid "" "To enable replication when creating volume types, configure the cinder " "volume with ``capabilities:replication=\" True\"``." msgstr "" #: ../blockstorage_over_subscription.rst:108 msgid "" "Each volume created with the replication capability set to ``True`` " "generates a copy of the volume on a storage back end." msgstr "" #: ../blockstorage_over_subscription.rst:111 msgid "" "One use case for replication involves an OpenStack cloud environment " "installed across two data centers located near each other. The distance " "between the two data centers in this use case is the length of a city." 
msgstr "" #: ../blockstorage_over_subscription.rst:116 msgid "" "At each data center, a cinder host supports the Block Storage service. Both " "data centers include storage back ends." msgstr "" #: ../blockstorage_over_subscription.rst:119 msgid "" "Depending on the storage requirements, there can be one or two cinder hosts. " "The administrator accesses the ``/etc/cinder/cinder.conf`` configuration " "file and sets ``capabilities:replication=\" True\"``." msgstr "" #: ../blockstorage_over_subscription.rst:124 msgid "" "If one data center experiences a service failure, administrators can " "redeploy the VM. The VM will run using a replicated, backed-up volume on a " "host in the second data center." msgstr "" #: ../blockstorage_over_subscription.rst:129 msgid "Capacity filter" msgstr "" #: ../blockstorage_over_subscription.rst:131 msgid "" "In the capacity filter, ``max_over_subscription_ratio`` is used when " "choosing a back end if ``thin_provisioning_support`` is True and " "``max_over_subscription_ratio`` is greater than 1.0." msgstr "" #: ../blockstorage_over_subscription.rst:136 msgid "Capacity weigher" msgstr "" #: ../blockstorage_over_subscription.rst:138 msgid "" "In the capacity weigher, virtual free capacity is used for ranking if " "``thin_provisioning_support`` is True. Otherwise, real free capacity will be " "used as before." msgstr "" #: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:5 msgid "Rate-limit volume copy bandwidth" msgstr "" #: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:7 msgid "" "When you create a new volume from an image or an existing volume, or when " "you upload a volume image to the Image service, a large data copy may stress " "disk and network bandwidth. To mitigate slowdown of data access from the " "instances, OpenStack Block Storage supports rate-limiting of volume data " "copy bandwidth." 
msgstr "" #: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:14 msgid "Configure volume copy bandwidth limit" msgstr "" #: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:16 msgid "" "To configure the volume copy bandwidth limit, set the " "``volume_copy_bps_limit`` option in the configuration groups for each back " "end in the ``cinder.conf`` file. This option takes an integer, the maximum " "bandwidth allowed for volume data copy in bytes per second. If this option is " "set to ``0``, the rate-limit is disabled." msgstr "" #: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:22 msgid "" "When multiple volume data copy operations are running in the same back end, " "the specified bandwidth is divided among the copies." msgstr "" #: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:25 msgid "" "Example ``cinder.conf`` configuration file to limit volume copy bandwidth of " "``lvmdriver-1`` up to 100 MiB/s:" msgstr "" #: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:38 msgid "" "This feature requires libcgroup to set up the blkio cgroup for disk I/O " "bandwidth limiting. libcgroup is provided by the ``cgroup-bin`` package in " "Debian and Ubuntu, and by the ``libcgroup-tools`` package in Fedora, Red Hat " "Enterprise Linux, CentOS, openSUSE, and SUSE Linux Enterprise." msgstr "" #: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:45 msgid "" "Back ends that use remote file systems, such as NFS, are not supported " "by this feature." msgstr "" #: ../blockstorage_volume_backed_image.rst:6 msgid "Volume-backed image" msgstr "" #: ../blockstorage_volume_backed_image.rst:8 msgid "" "OpenStack Block Storage can quickly create a volume from an image that " "refers to a volume storing image data (Image-Volume). Compared to the other " "stores such as file and swift, creating a volume from a Volume-backed image " "performs better when the block storage driver supports efficient volume " "cloning." 
msgstr "" #: ../blockstorage_volume_backed_image.rst:13 msgid "" "If the image is set to public in the Image service, the volume data can be " "shared among tenants." msgstr "" #: ../blockstorage_volume_backed_image.rst:17 msgid "Configure the Volume-backed image" msgstr "" #: ../blockstorage_volume_backed_image.rst:19 msgid "" "The Volume-backed image feature requires locations information from the cinder " "store of the Image service. To enable the Image service to use the cinder " "store, add ``cinder`` to the ``stores`` option in the ``glance_store`` " "section of the ``glance-api.conf`` file:" msgstr "" #: ../blockstorage_volume_backed_image.rst:28 msgid "" "To expose locations information, set the following options in the " "``DEFAULT`` section of the ``glance-api.conf`` file:" msgstr "" #: ../blockstorage_volume_backed_image.rst:35 msgid "" "To enable the Block Storage services to create a new volume by cloning an " "Image-Volume, set the following options in the ``DEFAULT`` section of the " "``cinder.conf`` file. For example:" msgstr "" #: ../blockstorage_volume_backed_image.rst:44 msgid "" "To enable the :command:`cinder upload-to-image` command to create an image " "that refers to an ``Image-Volume``, set the following options in each back-end " "section of the ``cinder.conf`` file:" msgstr "" #: ../blockstorage_volume_backed_image.rst:52 msgid "" "By default, the :command:`upload-to-image` command creates the Image-Volume " "in the current tenant. 
To store the Image-Volume into the internal tenant, " "set the following options in each back-end section of the ``cinder.conf`` " "file:" msgstr "" #: ../blockstorage_volume_backed_image.rst:60 msgid "" "To make the Image-Volume in the internal tenant accessible from the Image " "service, set the following options in the ``glance_store`` section of the " "``glance-api.conf`` file:" msgstr "" #: ../blockstorage_volume_backed_image.rst:64 msgid "``cinder_store_auth_address``" msgstr "" #: ../blockstorage_volume_backed_image.rst:65 msgid "``cinder_store_user_name``" msgstr "" #: ../blockstorage_volume_backed_image.rst:66 msgid "``cinder_store_password``" msgstr "" #: ../blockstorage_volume_backed_image.rst:67 msgid "``cinder_store_project_name``" msgstr "" #: ../blockstorage_volume_backed_image.rst:70 msgid "Creating a Volume-backed image" msgstr "" #: ../blockstorage_volume_backed_image.rst:72 msgid "" "To register an existing volume as a new Volume-backed image, use the " "following commands:" msgstr "" #: ../blockstorage_volume_backed_image.rst:81 msgid "" "If the ``image_upload_use_cinder_backend`` option is enabled, the following " "command creates a new Image-Volume by cloning the specified volume and then " "registers its location to a new image. The disk format and the container " "format must be raw and bare (default). Otherwise, the image is uploaded to " "the default store of the Image service." msgstr "" #: ../blockstorage_volume_backups.rst:5 msgid "Back up and restore volumes and snapshots" msgstr "" #: ../blockstorage_volume_backups.rst:7 msgid "" "The ``cinder`` command-line interface provides the tools for creating a " "volume backup. You can restore a volume from a backup as long as the " "backup's associated database information (or backup metadata) is intact in " "the Block Storage database." 
msgstr "" #: ../blockstorage_volume_backups.rst:12 msgid "Run this command to create a backup of a volume:" msgstr "" #: ../blockstorage_volume_backups.rst:18 msgid "" "Where ``VOLUME`` is the name or ID of the volume, ``incremental`` is a flag " "that indicates whether an incremental backup should be performed, and " "``force`` is a flag that allows or disallows backup of a volume when the " "volume is attached to an instance." msgstr "" #: ../blockstorage_volume_backups.rst:23 msgid "" "Without the ``incremental`` flag, a full backup is created by default. With " "the ``incremental`` flag, an incremental backup is created." msgstr "" #: ../blockstorage_volume_backups.rst:26 msgid "" "Without the ``force`` flag, the volume will be backed up only if its status " "is ``available``. With the ``force`` flag, the volume will be backed up " "whether its status is ``available`` or ``in-use``. A volume is ``in-use`` " "when it is attached to an instance. The backup of an ``in-use`` volume means " "your data is crash-consistent. The ``force`` flag is False by default." msgstr "" #: ../blockstorage_volume_backups.rst:35 msgid "" "The ``incremental`` and ``force`` flags are only available for block storage " "API v2. You have to specify ``[--os-volume-api-version 2]`` in the " "``cinder`` command-line interface to use these parameters." msgstr "" #: ../blockstorage_volume_backups.rst:41 msgid "The ``force`` flag is new in OpenStack Liberty." msgstr "" #: ../blockstorage_volume_backups.rst:43 msgid "" "The incremental backup is based on a parent backup, which is an existing " "backup with the latest timestamp. The parent backup can be a full backup or " "an incremental backup depending on the timestamp." msgstr "" #: ../blockstorage_volume_backups.rst:50 msgid "" "The first backup of a volume has to be a full backup. Attempting to do an " "incremental backup without any existing backups will fail. 
There is an " "``is_incremental`` flag that indicates whether a backup is incremental when " "showing details on the backup. Another flag, ``has_dependent_backups``, " "returned when showing backup details, indicates whether the backup has " "dependent backups. If it is ``true``, attempting to delete this backup will " "fail." msgstr "" #: ../blockstorage_volume_backups.rst:58 msgid "" "A new configuration option ``backup_swift_block_size`` is introduced into " "``cinder.conf`` for the default Swift backup driver. This is the " "granularity, in bytes, at which changes are tracked for incremental backups. " "The existing ``backup_swift_object_size`` option, the size in bytes of Swift " "backup objects, has to be a multiple of ``backup_swift_block_size``. The " "default is 32768 for ``backup_swift_block_size``, and the default is " "52428800 for ``backup_swift_object_size``." msgstr "" #: ../blockstorage_volume_backups.rst:66 msgid "" "The configuration option ``backup_swift_enable_progress_timer`` in ``cinder." "conf`` is used when backing up the volume to the Object Storage back end. " "This option enables or disables the timer. It is enabled by default to send " "periodic progress notifications to the Telemetry service." msgstr "" #: ../blockstorage_volume_backups.rst:71 msgid "" "This command also returns a backup ID. Use this backup ID when restoring the " "volume:" msgstr "" #: ../blockstorage_volume_backups.rst:78 msgid "When restoring from a full backup, it is a full restore." msgstr "" #: ../blockstorage_volume_backups.rst:80 msgid "" "When restoring from an incremental backup, a list of backups is built based " "on the IDs of the parent backups. A full restore is performed based on the " "full backup first, then each incremental backup is restored on top of it in " "order." msgstr "" #: ../blockstorage_volume_backups.rst:85 msgid "" "You can view a backup list with the :command:`cinder backup-list` command.
" "Optional arguments include :option:`--name`, :option:`--status`, and :" "option:`--volume-id`, which filter the listed backups by the specified name, " "status, or volume ID. Use :option:`--all-tenants` to see details of the " "tenants associated with the listed backups." msgstr "" #: ../blockstorage_volume_backups.rst:92 msgid "" "Because volume backups are dependent on the Block Storage database, you must " "also back up your Block Storage database regularly to ensure data recovery." msgstr "" #: ../blockstorage_volume_backups.rst:97 msgid "" "Alternatively, you can export and save the metadata of selected volume " "backups. Doing so precludes the need to back up the entire Block Storage " "database. This is useful if you need only a small subset of volumes to " "survive a catastrophic database failure." msgstr "" #: ../blockstorage_volume_backups.rst:102 msgid "" "If you specify a UUID encryption key when setting up the volume " "specifications, the backup metadata ensures that the key will remain valid " "when you back up and restore the volume." msgstr "" #: ../blockstorage_volume_backups.rst:106 msgid "" "For more information about how to export and import volume backup metadata, " "see the section called :ref:`volume_backups_export_import`." msgstr "" #: ../blockstorage_volume_backups.rst:109 msgid "By default, the swift object store is used for the backup repository." msgstr "" #: ../blockstorage_volume_backups.rst:111 msgid "" "If instead you want to use an NFS export as the backup repository, add the " "following configuration options to the ``[DEFAULT]`` section of the ``cinder." "conf`` file and restart the Block Storage services:" msgstr "" #: ../blockstorage_volume_backups.rst:120 msgid "" "For the ``backup_share`` option, replace ``HOST`` with the DNS-resolvable " "host name or the IP address of the storage server for the NFS share, and " "``EXPORT_PATH`` with the path to that share.
If your environment requires " "that non-default mount options be specified for the share, set these as " "follows:" msgstr "" #: ../blockstorage_volume_backups.rst:130 msgid "" "``MOUNT_OPTIONS`` is a comma-separated string of NFS mount options as " "detailed in the NFS man page." msgstr "" #: ../blockstorage_volume_backups.rst:133 msgid "" "There are several other options whose default values may be overridden as " "appropriate for your environment:" msgstr "" #: ../blockstorage_volume_backups.rst:142 msgid "" "The option ``backup_compression_algorithm`` can be set to ``bz2`` or " "``None``. The latter can be a useful setting when the server providing the " "share for the backup repository itself performs deduplication or compression " "on the backup data." msgstr "" #: ../blockstorage_volume_backups.rst:147 msgid "" "The option ``backup_file_size`` must be a multiple of " "``backup_sha_block_size_bytes``. It is effectively the maximum file size to " "be used, given your environment, to hold backup data. Volumes larger than " "this will be stored in multiple files in the backup repository. The " "``backup_sha_block_size_bytes`` option determines the size of blocks from " "the cinder volume being backed up on which digital signatures are calculated " "in order to enable incremental backup capability." msgstr "" #: ../blockstorage_volume_backups.rst:155 msgid "" "You also have the option of resetting the state of a backup. When creating " "or restoring a backup, it may sometimes get stuck in the ``creating`` or " "``restoring`` state due to problems such as the database or RabbitMQ being " "down. In situations like these, resetting the state of the backup can " "restore it to a functional status."
msgstr "" #: ../blockstorage_volume_backups.rst:161 msgid "Run this command to reset the state of a backup:" msgstr "" #: ../blockstorage_volume_backups.rst:167 msgid "Run this command to create a backup of a snapshot:" msgstr "" #: ../blockstorage_volume_backups.rst:173 msgid "Where ``SNAPSHOT`` is the name or ID of the snapshot." msgstr "" #: ../blockstorage_volume_backups_export_import.rst:5 msgid "Export and import backup metadata" msgstr "" #: ../blockstorage_volume_backups_export_import.rst:8 msgid "" "A volume backup can only be restored on the same Block Storage service. This " "is because restoring a volume from a backup requires metadata available on " "the database used by the Block Storage service." msgstr "" #: ../blockstorage_volume_backups_export_import.rst:14 msgid "" "For information about how to back up and restore a volume, see the section " "called :ref:`volume_backups`." msgstr "" #: ../blockstorage_volume_backups_export_import.rst:17 msgid "" "You can, however, export the metadata of a volume backup. To do so, run this " "command as an OpenStack ``admin`` user (presumably, after creating a volume " "backup):" msgstr "" #: ../blockstorage_volume_backups_export_import.rst:25 msgid "" "Where ``BACKUP_ID`` is the volume backup's ID. This command should return " "the backup's corresponding database information as encoded string metadata." msgstr "" #: ../blockstorage_volume_backups_export_import.rst:28 msgid "" "Exporting and storing this encoded string metadata allows you to completely " "restore the backup, even in the event of a catastrophic database failure. " "This will preclude the need to back up the entire Block Storage database, " "particularly if you only need to keep complete backups of a small subset of " "volumes."
msgstr "" #: ../blockstorage_volume_backups_export_import.rst:34 msgid "" "If you have placed encryption on your volumes, the encryption will still be " "in place when you restore the volume if a UUID encryption key is specified " "when creating volumes. Using backup metadata support, UUID keys set up for a " "volume (or volumes) will remain valid when you restore a backed-up volume. " "The restored volume will remain encrypted, and will be accessible with your " "credentials." msgstr "" #: ../blockstorage_volume_backups_export_import.rst:41 msgid "" "In addition, having a volume backup and its backup metadata also provides " "volume portability. Specifically, backing up a volume and exporting its " "metadata will allow you to restore the volume on a completely different " "Block Storage database, or even on a different cloud service. To do so, " "first import the backup metadata to the Block Storage database and then " "restore the backup." msgstr "" #: ../blockstorage_volume_backups_export_import.rst:48 msgid "" "To import backup metadata, run the following command as an OpenStack " "``admin``:" msgstr "" #: ../blockstorage_volume_backups_export_import.rst:55 msgid "Where ``METADATA`` is the backup metadata exported earlier." msgstr "" #: ../blockstorage_volume_backups_export_import.rst:57 msgid "" "Once you have imported the backup metadata into a Block Storage database, " "restore the volume (see the section called :ref:`volume_backups`)." msgstr "" #: ../blockstorage_volume_migration.rst:5 msgid "Migrate volumes" msgstr "" #: ../blockstorage_volume_migration.rst:7 msgid "" "OpenStack has the ability to migrate volumes between back-ends which support " "its volume-type. Migrating a volume transparently moves its data from the " "current back-end for the volume to a new one. 
This is an administrator " "function, and can be used for tasks such as storage evacuation (for " "maintenance or decommissioning), or manual optimizations (for example, " "performance, reliability, or cost)." msgstr "" #: ../blockstorage_volume_migration.rst:14 msgid "These workflows are possible for a migration:" msgstr "" #: ../blockstorage_volume_migration.rst:16 msgid "" "If the storage can migrate the volume on its own, it is given the " "opportunity to do so. This allows the Block Storage driver to enable " "optimizations that the storage might be able to perform. If the back-end is " "not able to perform the migration, the Block Storage service uses one of two " "generic flows, as follows." msgstr "" #: ../blockstorage_volume_migration.rst:22 msgid "" "If the volume is not attached, the Block Storage service creates a volume " "and copies the data from the original to the new volume." msgstr "" #: ../blockstorage_volume_migration.rst:27 msgid "" "While most back-ends support this function, not all do. See the driver " "documentation in the `OpenStack Configuration Reference `__ for more details." msgstr "" #: ../blockstorage_volume_migration.rst:32 msgid "" "If the volume is attached to a VM instance, the Block Storage service " "creates a volume and calls the Compute service to copy the data from the " "original to the new volume. Currently this is supported only by the Compute " "libvirt driver." msgstr "" #: ../blockstorage_volume_migration.rst:36 msgid "" "As an example, this scenario shows two LVM back-ends and migrates an " "attached volume from one to the other. This scenario uses the third " "migration flow." msgstr "" #: ../blockstorage_volume_migration.rst:39 msgid "First, list the available back-ends:" msgstr "" #: ../blockstorage_volume_migration.rst:57 msgid "Only the Block Storage V2 API supports :command:`get-pools`."
msgstr "" #: ../blockstorage_volume_migration.rst:59 msgid "You can also get the available back-ends as follows:" msgstr "" #: ../blockstorage_volume_migration.rst:67 msgid "" "Note that the pool name must be appended at the end. For example, " "``server1@lvmstorage-1#zone1``." msgstr "" #: ../blockstorage_volume_migration.rst:70 msgid "" "Next, as the admin user, you can see the current status of the volume " "(replace the example ID with your own):" msgstr "" #: ../blockstorage_volume_migration.rst:98 msgid "Note these attributes:" msgstr "" #: ../blockstorage_volume_migration.rst:100 msgid "``os-vol-host-attr:host`` - the volume's current back-end." msgstr "" #: ../blockstorage_volume_migration.rst:101 msgid "" "``os-vol-mig-status-attr:migstat`` - the status of this volume's migration " "(None means that a migration is not currently in progress)." msgstr "" #: ../blockstorage_volume_migration.rst:103 msgid "" "``os-vol-mig-status-attr:name_id`` - the volume ID that this volume's name " "on the back-end is based on. Before a volume is ever migrated, its name on " "the back-end storage may be based on the volume's ID (see the " "``volume_name_template`` configuration parameter). For example, if " "``volume_name_template`` is kept as the default value (``volume-%s``), your " "first LVM back-end has a logical volume named ``volume-6088f80a-f116-4331-" "ad48-9afb0dfb196c``. During the course of a migration, if you create a " "volume and copy over the data, the volume gets the new name but keeps its " "original ID. This is exposed by the ``name_id`` attribute." msgstr "" #: ../blockstorage_volume_migration.rst:116 msgid "" "If you plan to decommission a Block Storage node, you must stop the " "``cinder`` volume service on the node after performing the migration."
msgstr "" #: ../blockstorage_volume_migration.rst:119 msgid "" "On nodes that run CentOS, Fedora, openSUSE, Red Hat Enterprise Linux, or " "SUSE Linux Enterprise, run:" msgstr "" #: ../blockstorage_volume_migration.rst:127 msgid "On nodes that run Ubuntu or Debian, run:" msgstr "" #: ../blockstorage_volume_migration.rst:134 msgid "" "Stopping the cinder volume service will prevent volumes from being allocated " "to the node." msgstr "" #: ../blockstorage_volume_migration.rst:137 msgid "Migrate this volume to the second LVM back-end:" msgstr "" #: ../blockstorage_volume_migration.rst:144 msgid "" "You can use the :command:`cinder show` command to see the status of the " "migration. While migrating, the ``migstat`` attribute shows states such as " "``migrating`` or ``completing``. On error, ``migstat`` is set to None and " "the host attribute shows the original ``host``. On success, in this example, " "the output looks like:" msgstr "" #: ../blockstorage_volume_migration.rst:174 msgid "" "Note that ``migstat`` is None, host is the new host, and ``name_id`` holds " "the ID of the volume created by the migration. If you look at the second LVM " "back end, you find the logical volume ``volume-133d1f56-9ffc-4f57-8798-" "d5217d851862``." msgstr "" #: ../blockstorage_volume_migration.rst:181 msgid "" "The migration is not visible to non-admin users (for example, through the " "volume ``status``). However, some operations are not allowed while a " "migration is taking place, such as attaching/detaching a volume and deleting " "a volume. If a user performs such an action during a migration, an error is " "returned." msgstr "" #: ../blockstorage_volume_migration.rst:189 msgid "Migrating volumes that have snapshots is currently not allowed."
msgstr "" #: ../blockstorage_volume_number_weigher.rst:5 msgid "Configure and use volume number weigher" msgstr "" #: ../blockstorage_volume_number_weigher.rst:7 msgid "" "OpenStack Block Storage enables you to choose a volume back end according to " "``free_capacity`` and ``allocated_capacity``. The volume number weigher " "feature lets the scheduler choose a volume back end based on the number of " "volumes on that back end. This can provide another means to improve the " "volume back ends' I/O balance and the volumes' I/O performance." msgstr "" #: ../blockstorage_volume_number_weigher.rst:14 msgid "Enable volume number weigher" msgstr "" #: ../blockstorage_volume_number_weigher.rst:16 msgid "" "To enable the volume number weigher, set the ``scheduler_default_weighers`` " "option to ``VolumeNumberWeigher`` in the ``cinder.conf`` file to define " "``VolumeNumberWeigher`` as the selected weigher." msgstr "" #: ../blockstorage_volume_number_weigher.rst:24 msgid "" "To configure ``VolumeNumberWeigher``, use ``LVMVolumeDriver`` as the volume " "driver." msgstr "" #: ../blockstorage_volume_number_weigher.rst:27 msgid "" "This configuration defines two LVM volume groups: ``stack-volumes`` with 10 " "GB capacity and ``stack-volumes-1`` with 60 GB capacity. This example " "configuration defines two back ends:" msgstr "" #: ../blockstorage_volume_number_weigher.rst:48 msgid "Define a volume type in Block Storage:" msgstr "" #: ../blockstorage_volume_number_weigher.rst:54 msgid "" "Create an extra specification that links the volume type to a back-end name:" msgstr "" #: ../blockstorage_volume_number_weigher.rst:60 msgid "" "This example creates an ``lvm`` volume type with ``volume_backend_name=LVM`` " "as extra specifications."
msgstr "" #: ../blockstorage_volume_number_weigher.rst:66 msgid "" "To create six 1-GB volumes, run the :command:`cinder create --volume-type " "lvm 1` command six times:" msgstr "" #: ../blockstorage_volume_number_weigher.rst:73 msgid "" "This command creates three volumes in ``stack-volumes`` and three volumes in " "``stack-volumes-1``." msgstr "" #: ../blockstorage_volume_number_weigher.rst:76 msgid "List the available volumes:" msgstr "" #: ../cli.rst:3 msgid "OpenStack command-line clients" msgstr "" #: ../cli_admin_manage_environment.rst:3 msgid "Manage the OpenStack environment" msgstr "" #: ../cli_admin_manage_environment.rst:5 msgid "This section includes tasks specific to the OpenStack environment." msgstr "" #: ../cli_admin_manage_ip_addresses.rst:3 msgid "Manage IP addresses" msgstr "" #: ../cli_admin_manage_ip_addresses.rst:5 msgid "" "Each instance has a private, fixed IP address that is assigned when the " "instance is launched. In addition, an instance can have a public or floating " "IP address. Private IP addresses are used for communication between " "instances, and public IP addresses are used for communication with networks " "outside the cloud, including the Internet." msgstr "" #: ../cli_admin_manage_ip_addresses.rst:12 msgid "" "By default, both administrative and end users can associate floating IP " "addresses with projects and instances. You can change user permissions for " "managing IP addresses by updating the ``/etc/nova/policy.json`` file. For " "basic floating-IP procedures, refer to the `Allocate a floating address to " "an instance `_ section in the OpenStack End User Guide." msgstr "" #: ../cli_admin_manage_ip_addresses.rst:19 msgid "" "For details on creating public networks using OpenStack Networking " "(``neutron``), refer to :ref:`networking-adv-features`. No floating IP " "addresses are created by default in OpenStack Networking." 
msgstr "" #: ../cli_admin_manage_ip_addresses.rst:23 msgid "" "As an administrator using legacy networking (``nova-network``), you can use " "the following bulk commands to list, create, and delete ranges of floating " "IP addresses. These addresses can then be associated with instances by end " "users." msgstr "" #: ../cli_admin_manage_ip_addresses.rst:29 msgid "List addresses for all projects" msgstr "" #: ../cli_admin_manage_ip_addresses.rst:31 msgid "To list all floating IP addresses for all projects, run:" msgstr "" #: ../cli_admin_manage_ip_addresses.rst:62 msgid "Bulk create floating IP addresses" msgstr "" #: ../cli_admin_manage_ip_addresses.rst:64 msgid "To create a range of floating IP addresses, run:" msgstr "" # #-#-#-#-# cli_admin_manage_ip_addresses.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# cli_cinder_quotas.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# cli_keystone_manage_services.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# cli_nova_manage_projects_security.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# cli_nova_specify_host.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# cli_set_compute_quotas.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# keystone_certificates_for_pki.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_adv-features.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_config-agents.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_config-identity.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_use.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_admin_manage_ip_addresses.rst:70 #: ../cli_admin_manage_ip_addresses.rst:103 ../cli_cinder_quotas.rst:26 #: ../cli_cinder_quotas.rst:38 ../cli_cinder_quotas.rst:57 #: ../cli_cinder_quotas.rst:76 ../cli_cinder_quotas.rst:117 #: ../cli_keystone_manage_services.rst:93 #: ../cli_keystone_manage_services.rst:151 #: ../cli_nova_manage_projects_security.rst:67 #: ../cli_nova_manage_projects_security.rst:104 #: 
../cli_nova_manage_projects_security.rst:174 #: ../cli_nova_manage_projects_security.rst:198 #: ../cli_nova_specify_host.rst:13 ../cli_set_compute_quotas.rst:56 #: ../cli_set_compute_quotas.rst:84 ../cli_set_compute_quotas.rst:105 #: ../cli_set_compute_quotas.rst:142 ../cli_set_compute_quotas.rst:197 #: ../cli_set_compute_quotas.rst:240 ../keystone_certificates_for_pki.rst:157 #: ../networking_adv-features.rst:830 ../networking_adv-features.rst:854 #: ../networking_config-agents.rst:94 ../networking_config-identity.rst:53 #: ../networking_config-identity.rst:75 ../networking_use.rst:101 msgid "For example:" msgstr "" #: ../cli_admin_manage_ip_addresses.rst:76 msgid "" "By default, ``floating-ip-bulk-create`` uses the ``public`` pool and " "``eth0`` interface values." msgstr "" #: ../cli_admin_manage_ip_addresses.rst:81 msgid "" "You should use a range of free IP addresses that is valid for your network. " "If you are not sure, at least try to avoid the DHCP address range:" msgstr "" #: ../cli_admin_manage_ip_addresses.rst:85 msgid "" "Pick a small range (/29 gives an 8 address range, 6 of which will be usable)." msgstr "" #: ../cli_admin_manage_ip_addresses.rst:88 msgid "" "Use :command:`nmap` to check a range's availability. For example, " "192.168.1.56/29 represents a small range of addresses (192.168.1.56-63, with " "57-62 usable), and you could run the command :command:`nmap -sn " "192.168.1.56/29` to check whether the entire range is currently unused." msgstr "" #: ../cli_admin_manage_ip_addresses.rst:95 msgid "Bulk delete floating IP addresses" msgstr "" #: ../cli_admin_manage_ip_addresses.rst:97 msgid "To delete a range of floating IP addresses, run:" msgstr "" #: ../cli_admin_manage_stacks.rst:3 msgid "Launch and manage stacks using the CLI" msgstr "" #: ../cli_admin_manage_stacks.rst:5 msgid "" "The Orchestration service provides a template-based orchestration engine. 
" "Administrators can use the orchestration engine to create and manage " "OpenStack cloud infrastructure resources. For example, an administrator can " "define storage, networking, instances, and applications to use as a " "repeatable running environment." msgstr "" #: ../cli_admin_manage_stacks.rst:11 msgid "" "Templates are used to create stacks, which are collections of resources. For " "example, a stack might include instances, floating IPs, volumes, security " "groups, or users. The Orchestration service offers access to all OpenStack " "core services through a single modular template, with additional " "orchestration capabilities such as auto-scaling and basic high availability." msgstr "" # #-#-#-#-# cli_admin_manage_stacks.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_admin_manage_stacks.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_admin_manage_stacks.rst:19 ../dashboard_admin_manage_stacks.rst:20 msgid "For information about:" msgstr "" #: ../cli_admin_manage_stacks.rst:21 msgid "" "basic creation and deletion of Orchestration stacks, refer to the `OpenStack " "End User Guide `_" msgstr "" #: ../cli_admin_manage_stacks.rst:24 msgid "" "**heat** CLI commands, see the `OpenStack Command Line Interface Reference " "`_" msgstr "" #: ../cli_admin_manage_stacks.rst:27 msgid "" "As an administrator, you can also carry out stack functions on behalf of " "your users. For example, to resume, suspend, or delete a stack, run:" msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:3 msgid "Analyze log files" msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:5 msgid "" "Use the swift command-line client for Object Storage to analyze log files." msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:7 msgid "The swift client is simple to use, scalable, and flexible." msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:9 msgid "" "Use the swift client :option:`-o` or :option:`--output` option to get short " "answers to questions about logs."
msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:12 msgid "" "You can use the :option:`-o` or :option:`--output` option with a single " "object download to redirect the command output to a specific file or to " "STDOUT (``-``). The ability to redirect the output to STDOUT enables you to " "pipe (``|``) data without saving it to disk first." msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:18 msgid "Upload and analyze log files" msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:20 msgid "" "This example assumes that the ``logtest`` directory contains the following " "log files." msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:31 msgid "Each file uses the following line format." msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:39 msgid "Change into the ``logtest`` directory:" msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:45 msgid "Upload the log files into the ``logtest`` container:" msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:58 msgid "Get statistics for the account:" msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:72 msgid "Get statistics for the ``logtest`` container:" msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:88 msgid "List all objects in the logtest container:" msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:103 msgid "Download and analyze an object" msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:105 msgid "" "This example uses the :option:`-o` option and a hyphen (``-``) to get " "information about an object." msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:108 msgid "" "Use the :command:`swift download` command to download the object. Stream the " "output of this command to ``awk`` to break down requests by return code " "and the date ``2200 on November 16th, 2010``." msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:112 msgid "" "Using the log line format, find the request type in column 9 and the return " "code in column 12."
msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:115 msgid "" "After ``awk`` processes the output, it pipes it to ``sort`` and ``uniq -c`` " "to sum up the number of occurrences for each request type and return code " "combination." msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:119 msgid "Download an object:" msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:160 msgid "Discover how many PUT requests are in each log file." msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:162 msgid "" "Use a bash for loop with awk and swift with the :option:`-o` or :option:`--" "output` option and a hyphen (``-``) to discover how many PUT requests are in " "each log file." msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:166 msgid "" "Run the :command:`swift list` command to list objects in the logtest " "container. Then, for each item in the list, run the :command:`swift download " "-o -` command. Pipe the output into grep to filter the PUT requests. " "Finally, pipe into ``wc -l`` to count the lines." msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:187 msgid "List the object names that begin with a specified string." msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:189 msgid "" "Run the :command:`swift list -p 2010-11-15` command to list objects in the " "logtest container that begin with the ``2010-11-15`` string." msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:192 msgid "" "For each item in the list, run the :command:`swift download -o -` command." msgstr "" #: ../cli_analyzing-log-files-with-swift.rst:194 msgid "" "Pipe the output to :command:`grep` and :command:`wc`. Use the :command:" "`echo` command to display the object name." msgstr "" #: ../cli_cinder_quotas.rst:3 msgid "Manage Block Storage service quotas" msgstr "" #: ../cli_cinder_quotas.rst:5 msgid "" "As an administrative user, you can update the OpenStack Block Storage " "service quotas for a project. You can also update the quota defaults for a " "new project." 
msgstr "" #: ../cli_cinder_quotas.rst:9 msgid "**Block Storage quotas**" msgstr "" # #-#-#-#-# cli_cinder_quotas.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_set_quotas.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_cinder_quotas.rst:12 ../dashboard_set_quotas.rst:38 msgid "Defines the number of" msgstr "" #: ../cli_cinder_quotas.rst:12 msgid "Property name" msgstr "" # #-#-#-#-# cli_cinder_quotas.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_set_quotas.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_cinder_quotas.rst:14 ../dashboard_set_quotas.rst:40 msgid "Volume gigabytes allowed for each project." msgstr "" #: ../cli_cinder_quotas.rst:14 msgid "gigabytes" msgstr "" # #-#-#-#-# cli_cinder_quotas.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_set_quotas.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_cinder_quotas.rst:15 ../dashboard_set_quotas.rst:66 msgid "Volume snapshots allowed for each project." msgstr "" #: ../cli_cinder_quotas.rst:15 msgid "snapshots" msgstr "" # #-#-#-#-# cli_cinder_quotas.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_set_quotas.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_cinder_quotas.rst:16 ../dashboard_set_quotas.rst:72 msgid "Volumes allowed for each project." msgstr "" #: ../cli_cinder_quotas.rst:16 msgid "volumes" msgstr "" #: ../cli_cinder_quotas.rst:20 msgid "View Block Storage quotas" msgstr "" #: ../cli_cinder_quotas.rst:22 msgid "Administrative users can view Block Storage service quotas." msgstr "" #: ../cli_cinder_quotas.rst:24 msgid "Obtain the project ID." 
msgstr "" #: ../cli_cinder_quotas.rst:32 msgid "List the default quotas for a project (tenant):" msgstr "" #: ../cli_cinder_quotas.rst:51 msgid "View Block Storage service quotas for a project (tenant):" msgstr "" #: ../cli_cinder_quotas.rst:70 msgid "Show the current usage of a per-project quota:" msgstr "" #: ../cli_cinder_quotas.rst:90 msgid "Edit and update Block Storage service quotas" msgstr "" #: ../cli_cinder_quotas.rst:92 msgid "Administrative users can edit and update Block Storage service quotas." msgstr "" #: ../cli_cinder_quotas.rst:95 ../cli_cinder_quotas.rst:132 msgid "Clear per-project quota limits." msgstr "" #: ../cli_cinder_quotas.rst:101 msgid "" "To update a default value for a new project, update the property in the :" "guilabel:`cinder.quota` section of the ``/etc/cinder/cinder.conf`` file. For " "more information, see the `Block Storage Configuration Reference `_." msgstr "" #: ../cli_cinder_quotas.rst:107 msgid "To update Block Storage service quotas for an existing project (tenant)" msgstr "" #: ../cli_cinder_quotas.rst:113 msgid "" "Replace QUOTA_NAME with the quota that is to be updated, NEW_VALUE with the " "required new value, and PROJECT_ID with the required project ID." msgstr "" #: ../cli_cinder_quotas.rst:139 msgid "Remove a service" msgstr "" #: ../cli_cinder_quotas.rst:141 msgid "Determine the binary and host of the service you want to remove." msgstr "" #: ../cli_cinder_quotas.rst:153 msgid "Disable the service." msgstr "" #: ../cli_cinder_quotas.rst:159 msgid "Remove the service from the database." msgstr "" #: ../cli_cinder_scheduling.rst:3 msgid "Manage Block Storage scheduling" msgstr "" #: ../cli_cinder_scheduling.rst:5 msgid "" "As an administrative user, you have some control over which volume back end " "your volumes reside on. You can specify affinity or anti-affinity between " "two volumes.
Affinity between volumes means that they are stored on the same " "back end, whereas anti-affinity means that they are stored on different back " "ends." msgstr "" #: ../cli_cinder_scheduling.rst:11 msgid "" "For information on how to set up multiple back ends for Cinder, refer to :" "ref:`multi_backend`." msgstr "" #: ../cli_cinder_scheduling.rst:15 msgid "Example Usages" msgstr "" #: ../cli_cinder_scheduling.rst:17 msgid "Create a new volume on the same back end as Volume_A:" msgstr "" #: ../cli_cinder_scheduling.rst:23 msgid "Create a new volume on a different back end than Volume_A:" msgstr "" #: ../cli_cinder_scheduling.rst:29 msgid "Create a new volume on the same back end as Volume_A and Volume_B:" msgstr "" #: ../cli_cinder_scheduling.rst:35 ../cli_cinder_scheduling.rst:48 msgid "Or:" msgstr "" #: ../cli_cinder_scheduling.rst:41 msgid "" "Create a new volume on a different back end than both Volume_A and Volume_B:" msgstr "" #: ../cli_keystone_manage_services.rst:3 msgid "Create and manage services and service users" msgstr "" #: ../cli_keystone_manage_services.rst:5 msgid "The Identity service enables you to define services, as follows:" msgstr "" #: ../cli_keystone_manage_services.rst:8 msgid "" "Service catalog template. The Identity service acts as a service catalog of " "endpoints for other OpenStack services. The ``/etc/keystone/default_catalog." "templates`` template file defines the endpoints for services. When the " "Identity service uses a template file back end, any changes that are made to " "the endpoints are cached. These changes do not persist when you restart the " "service or reboot the machine." msgstr "" #: ../cli_keystone_manage_services.rst:16 msgid "" "An SQL back end for the catalog service. When the Identity service is " "online, you must add the services to the catalog. When you deploy a system " "for production, use the SQL back end." 
msgstr "" #: ../cli_keystone_manage_services.rst:21 msgid "" "The ``auth_token`` middleware supports the use of either a shared secret or " "users for each service." msgstr "" #: ../cli_keystone_manage_services.rst:25 msgid "" "To authenticate users against the Identity service, you must create a " "service user for each OpenStack service. For example, create a service user " "for the Compute, Block Storage, and Networking services." msgstr "" #: ../cli_keystone_manage_services.rst:30 msgid "" "To configure the OpenStack services with service users, create a project for " "all services and create users for each service. Assign the admin role to " "each service user and project pair. This role enables users to validate " "tokens and authenticate and authorize other user requests." msgstr "" #: ../cli_keystone_manage_services.rst:37 msgid "Create a service" msgstr "" #: ../cli_keystone_manage_services.rst:39 msgid "List the available services:" msgstr "" #: ../cli_keystone_manage_services.rst:58 msgid "To create a service, run this command:" msgstr "" #: ../cli_keystone_manage_services.rst:65 msgid "``service_name``: the unique name of the new service." msgstr "" #: ../cli_keystone_manage_services.rst:66 msgid "" "``service_type``: the service type, such as ``identity``, ``compute``, " "``network``, ``image``, ``object-store`` or any other service identifier " "string." msgstr "" #: ../cli_keystone_manage_services.rst:69 msgid "The arguments are:" msgstr "" #: ../cli_keystone_manage_services.rst:69 msgid "``service_description``: the description of the service." 
msgstr "" #: ../cli_keystone_manage_services.rst:71 msgid "" "For example, to create a ``swift`` service of type ``object-store``, run " "this command:" msgstr "" #: ../cli_keystone_manage_services.rst:87 msgid "To get details for a service, run this command:" msgstr "" #: ../cli_keystone_manage_services.rst:109 msgid "Create service users" msgstr "" #: ../cli_keystone_manage_services.rst:111 msgid "" "Create a project for the service users. Typically, this project is named " "``service``, but choose any name you like:" msgstr "" #: ../cli_keystone_manage_services.rst:127 msgid "Create service users for the relevant services for your deployment." msgstr "" #: ../cli_keystone_manage_services.rst:130 msgid "Assign the admin role to the user-project pair." msgstr "" #: ../cli_keystone_manage_services.rst:143 msgid "Delete a service" msgstr "" #: ../cli_keystone_manage_services.rst:145 msgid "To delete a specified service, specify its ID." msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:3 ../dashboard_manage_flavors.rst:3 msgid "Manage flavors" msgstr "" #: ../cli_manage_flavors.rst:5 msgid "" "In OpenStack, flavors define the compute, memory, and storage capacity of " "nova computing instances. To put it simply, a flavor is an available " "hardware configuration for a server. It defines the ``size`` of a virtual " "server that can be launched." msgstr "" #: ../cli_manage_flavors.rst:13 msgid "" "Flavors can also determine on which compute host a flavor can be used to " "launch an instance. For information about customizing flavors, refer to :ref:" "`compute-flavors`." 
msgstr "" #: ../cli_manage_flavors.rst:17 msgid "A flavor consists of the following parameters:" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:20 ../cli_manage_flavors.rst:89 #: ../dashboard_manage_flavors.rst:38 msgid "" "Unique ID (integer or UUID) for the new flavor. If specifying 'auto', a UUID " "will be automatically generated." msgstr "" #: ../cli_manage_flavors.rst:21 msgid "Flavor ID" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-live-migration-usage.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# ts-eql-volume-size.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:24 ../compute-flavors.rst:36 #: ../compute-live-migration-usage.rst:35 ../telemetry-measurements.rst:100 #: ../telemetry-measurements.rst:434 ../telemetry-measurements.rst:489 #: ../telemetry-measurements.rst:533 ../telemetry-measurements.rst:595 #: ../telemetry-measurements.rst:671 ../telemetry-measurements.rst:705 #: ../telemetry-measurements.rst:771 ../telemetry-measurements.rst:821 #: ../telemetry-measurements.rst:859 ../telemetry-measurements.rst:931 #: ../telemetry-measurements.rst:995 ../telemetry-measurements.rst:1082 #: ../telemetry-measurements.rst:1162 ../telemetry-measurements.rst:1257 #: ../telemetry-measurements.rst:1323 ../telemetry-measurements.rst:1374 #: ../telemetry-measurements.rst:1401 ../telemetry-measurements.rst:1425 #: ../telemetry-measurements.rst:1446 ../ts-eql-volume-size.rst:109 msgid "Name" msgstr "" #: ../cli_manage_flavors.rst:24 msgid "Name for the new flavor." msgstr "" #: ../cli_manage_flavors.rst:27 msgid "Number of virtual CPUs to use." 
msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_set_quotas.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:27 ../cli_manage_flavors.rst:53 #: ../compute-flavors.rst:53 ../dashboard_manage_flavors.rst:12 #: ../dashboard_set_quotas.rst:69 msgid "VCPUs" msgstr "" #: ../cli_manage_flavors.rst:30 msgid "Amount of RAM to use (in megabytes)." msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:30 ../compute-flavors.rst:39 msgid "Memory MB" msgstr "" #: ../cli_manage_flavors.rst:33 msgid "Amount of disk space (in gigabytes) to use for the root (/) partition." msgstr "" #: ../cli_manage_flavors.rst:34 msgid "Root Disk GB" msgstr "" #: ../cli_manage_flavors.rst:37 msgid "" "Amount of disk space (in gigabytes) to use for the ephemeral partition. If " "unspecified, the value is 0 by default. Ephemeral disks offer machine local " "disk storage linked to the lifecycle of a VM instance. When a VM is " "terminated, all data on the ephemeral disk is lost. Ephemeral disks are not " "included in any snapshots." msgstr "" #: ../cli_manage_flavors.rst:44 msgid "Ephemeral Disk GB" msgstr "" #: ../cli_manage_flavors.rst:47 msgid "" "Amount of swap space (in megabytes) to use. If unspecified, the value is 0 " "by default." 
msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:48 ../compute-flavors.rst:51 msgid "Swap" msgstr "" #: ../cli_manage_flavors.rst:50 msgid "The default flavors are:" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:53 ../dashboard_manage_flavors.rst:12 msgid "Disk (in GB)" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:53 ../dashboard_manage_flavors.rst:12 msgid "Flavor" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:53 ../dashboard_manage_flavors.rst:12 msgid "RAM (in MB)" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# database.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:55 ../cli_manage_flavors.rst:56 #: ../dashboard_manage_flavors.rst:14 ../dashboard_manage_flavors.rst:15 #: ../database.rst:213 msgid "1" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:55 ../dashboard_manage_flavors.rst:14 msgid "512" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-live-migration-usage.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:55 ../compute-live-migration-usage.rst:86 #: ../dashboard_manage_flavors.rst:14 
msgid "m1.tiny" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:56 ../dashboard_manage_flavors.rst:15 msgid "20" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:56 ../dashboard_manage_flavors.rst:15 msgid "2048" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:56 ../dashboard_manage_flavors.rst:15 msgid "m1.small" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:57 ../dashboard_manage_flavors.rst:16 msgid "2" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:57 ../dashboard_manage_flavors.rst:16 msgid "40" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:57 ../dashboard_manage_flavors.rst:16 msgid "4096" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:57 ../dashboard_manage_flavors.rst:16 msgid "m1.medium" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:58 ../dashboard_manage_flavors.rst:17 msgid "4" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# 
# #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:58 ../dashboard_manage_flavors.rst:17 msgid "80" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:58 ../dashboard_manage_flavors.rst:17 msgid "8192" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:58 ../dashboard_manage_flavors.rst:17 msgid "m1.large" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:59 ../dashboard_manage_flavors.rst:18 msgid "160" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:59 ../dashboard_manage_flavors.rst:18 msgid "16384" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:59 ../dashboard_manage_flavors.rst:18 msgid "8" msgstr "" # #-#-#-#-# cli_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_flavors.rst:59 ../dashboard_manage_flavors.rst:18 msgid "m1.xlarge" msgstr "" #: ../cli_manage_flavors.rst:62 msgid "" "You can create and manage flavors with the nova **flavor-*** commands " "provided by the ``python-novaclient`` package." 
msgstr "" #: ../cli_manage_flavors.rst:67 msgid "Create a flavor" msgstr "" #: ../cli_manage_flavors.rst:69 msgid "" "List flavors to show the ID and name, the amount of memory, the amount of " "disk space for the root partition and for the ephemeral partition, the swap, " "and the number of virtual CPUs for each flavor:" msgstr "" #: ../cli_manage_flavors.rst:79 msgid "" "To create a flavor, specify a name, ID, RAM size, disk size, and the number " "of VCPUs for the flavor, as follows:" msgstr "" #: ../cli_manage_flavors.rst:92 msgid "" "Here is an example with additional optional parameters filled in that " "creates a public ``extra tiny`` flavor that automatically gets an ID " "assigned, with 256 MB memory, no disk space, and one VCPU. The rxtx-factor " "indicates the slice of bandwidth that the instances with this flavor can use " "(through the Virtual Interface (vif) creation in the hypervisor):" msgstr "" #: ../cli_manage_flavors.rst:105 msgid "" "If an individual user or group of users needs a custom flavor that you do " "not want other tenants to have access to, you can change the flavor's access " "to make it a private flavor. See `Private Flavors in the OpenStack " "Operations Guide `_." msgstr "" #: ../cli_manage_flavors.rst:110 ../cli_manage_flavors.rst:136 msgid "For a list of optional parameters, run this command:" msgstr "" #: ../cli_manage_flavors.rst:116 msgid "" "After you create a flavor, assign it to a project by specifying the flavor " "name or ID and the tenant ID:" msgstr "" #: ../cli_manage_flavors.rst:124 msgid "" "In addition, you can set or unset ``extra_spec`` for the existing flavor. " "The ``extra_spec`` metadata keys can influence the instance directly when it " "is launched. If a flavor sets the ``extra_spec key/value quota:" "vif_outbound_peak=65536``, the instance's outbound peak bandwidth I/O should " "be LTE 512 Mbps. 
There are several aspects that can work for an instance " "including ``CPU limits``, ``Disk tuning``, ``Bandwidth I/O``, ``Watchdog " "behavior``, and ``Random-number generator``. For information about " "supporting metadata keys, see `Flavors `__." msgstr "" #: ../cli_manage_flavors.rst:143 msgid "Delete a flavor" msgstr "" #: ../cli_manage_flavors.rst:145 msgid "Delete a specified flavor, as follows:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:3 msgid "Manage projects, users, and roles" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:5 msgid "" "As an administrator, you manage projects, users, and roles. Projects are " "organizational units in the cloud to which you can assign users. Projects " "are also known as *tenants* or *accounts*. Users can be members of one or " "more projects. Roles define which actions users can perform. You assign " "roles to user-project pairs." msgstr "" #: ../cli_manage_projects_users_and_roles.rst:12 msgid "" "You can define actions for OpenStack service roles in the ``/etc/PROJECT/" "policy.json`` files. For example, define actions for Compute service roles " "in the ``/etc/nova/policy.json`` file." msgstr "" #: ../cli_manage_projects_users_and_roles.rst:16 msgid "" "You can manage projects, users, and roles independently from each other." msgstr "" #: ../cli_manage_projects_users_and_roles.rst:18 msgid "" "During cloud set up, the operator defines at least one project, user, and " "role." msgstr "" #: ../cli_manage_projects_users_and_roles.rst:21 msgid "" "You can add, update, and delete projects and users, assign users to one or " "more projects, and change or remove the assignment. To enable or temporarily " "disable a project or user, update that project or user. You can also change " "quotas at the project level." msgstr "" #: ../cli_manage_projects_users_and_roles.rst:26 msgid "" "Before you can delete a user account, you must remove the user account from " "its primary project." 
msgstr "" #: ../cli_manage_projects_users_and_roles.rst:29 msgid "" "Before you can run client commands, you must download and source an " "OpenStack RC file. See `Download and source the OpenStack RC file `_." msgstr "" #: ../cli_manage_projects_users_and_roles.rst:34 msgid "Projects" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:36 msgid "" "A project is a group of zero or more users. In Compute, a project owns " "virtual machines. In Object Storage, a project owns containers. Users can be " "associated with more than one project. Each project and user pairing can " "have a role associated with it." msgstr "" #: ../cli_manage_projects_users_and_roles.rst:42 msgid "List projects" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:44 msgid "" "List all projects with their ID, name, and whether they are enabled or " "disabled:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:62 msgid "Create a project" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:64 msgid "Create a project named ``new-project``:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:79 msgid "Update a project" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:81 msgid "" "Specify the project ID to update a project. You can update the name, " "description, and enabled status of a project." 
msgstr "" #: ../cli_manage_projects_users_and_roles.rst:84 msgid "To temporarily disable a project:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:90 msgid "To enable a disabled project:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:96 msgid "To update the name of a project:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:102 msgid "To verify your changes, show information for the updated project:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:117 msgid "Delete a project" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:119 msgid "Specify the project ID to delete a project:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:126 msgid "Users" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:129 msgid "List users" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:131 msgid "List all users:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:146 msgid "Create a user" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:148 msgid "" "To create a user, you must specify a name. Optionally, you can specify a " "tenant ID, password, and email address. It is recommended that you include " "the tenant ID and password because the user cannot log in to the dashboard " "without this information." msgstr "" #: ../cli_manage_projects_users_and_roles.rst:153 msgid "Create the ``new-user`` user:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:169 msgid "Update a user" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:171 msgid "You can update the name, email address, and enabled status for a user." msgstr "" #: ../cli_manage_projects_users_and_roles.rst:173 msgid "To temporarily disable a user account:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:179 msgid "" "If you disable a user account, the user cannot log in to the dashboard. " "However, data for the user account is maintained, so you can enable the user " "at any time." 
msgstr "" #: ../cli_manage_projects_users_and_roles.rst:183 msgid "To enable a disabled user account:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:189 msgid "To change the name and description for a user account:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:197 msgid "Delete a user" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:199 msgid "Delete a specified user account:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:206 msgid "Roles and role assignments" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:209 msgid "List available roles" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:211 msgid "List the available roles:" msgstr "" # #-#-#-#-# cli_manage_projects_users_and_roles.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_admin_manage_roles.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_projects_users_and_roles.rst:227 #: ../dashboard_admin_manage_roles.rst:22 msgid "Create a role" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:229 msgid "" "Users can be members of multiple projects. To assign users to multiple " "projects, define a role and assign that role to a user-project pair." msgstr "" #: ../cli_manage_projects_users_and_roles.rst:232 msgid "Create the ``new-role`` role:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:245 msgid "Assign a role" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:247 msgid "" "To assign a user to a project, you must assign the role to a user-project " "pair. To do this, you need the user, role, and project IDs." 
msgstr "" #: ../cli_manage_projects_users_and_roles.rst:251 msgid "List users and note the user ID you want to assign to the role:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:267 msgid "List role IDs and note the role ID you want to assign:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:283 msgid "List projects and note the project ID you want to assign to the role:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:301 msgid "" "Assign a role to a user-project pair. In this example, assign the ``new-" "role`` role to the ``demo`` and ``test-project`` pair:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:308 msgid "Verify the role assignment:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:320 msgid "View role details" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:322 msgid "View details for a specified role:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:335 msgid "Remove a role" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:337 msgid "Remove a role from a user-project pair:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:339 msgid "Run the :command:`openstack role remove` command:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:345 msgid "Verify the role removal:" msgstr "" #: ../cli_manage_projects_users_and_roles.rst:351 msgid "If the role was removed, the command output omits the removed role." msgstr "" #: ../cli_manage_services.rst:3 msgid "Manage services" msgstr "" #: ../cli_manage_shares.rst:5 msgid "Manage shares" msgstr "" #: ../cli_manage_shares.rst:7 msgid "" "A share is provided by file storage. You can give access to a share to " "instances. To create and manage shares, use ``manila`` client commands." 
msgstr "" #: ../cli_manage_shares.rst:11 msgid "Migrate a share" msgstr "" #: ../cli_manage_shares.rst:13 msgid "" "As an administrator, you can migrate a share with its data from one location " "to another in a manner that is transparent to users and workloads." msgstr "" # #-#-#-#-# cli_manage_shares.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# shared_file_systems_manage_shares_cli.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_shares.rst:17 ../shared_file_systems_manage_shares_cli.rst:12 msgid "Possible use cases for data migration include:" msgstr "" # #-#-#-#-# cli_manage_shares.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# shared_file_systems_manage_shares_cli.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_shares.rst:19 ../shared_file_systems_manage_shares_cli.rst:14 msgid "" "Bring down a physical storage device for maintenance without disrupting " "workloads." msgstr "" # #-#-#-#-# cli_manage_shares.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# shared_file_systems_manage_shares_cli.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_shares.rst:22 ../shared_file_systems_manage_shares_cli.rst:17 msgid "Modify the properties of a share." msgstr "" # #-#-#-#-# cli_manage_shares.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# shared_file_systems_manage_shares_cli.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_shares.rst:24 ../shared_file_systems_manage_shares_cli.rst:19 msgid "Free up space in a thinly-provisioned back end." 
msgstr "" # #-#-#-#-# cli_manage_shares.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# shared_file_systems_manage_shares_cli.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_shares.rst:26 ../shared_file_systems_manage_shares_cli.rst:21 msgid "" "Migrate a share with the :command:`manila migrate` command, as shown in the " "following example:" msgstr "" # #-#-#-#-# cli_manage_shares.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# shared_file_systems_manage_shares_cli.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_shares.rst:33 ../shared_file_systems_manage_shares_cli.rst:28 msgid "" "In this example, :option:`--force-host-copy True` forces the generic host-" "based migration mechanism and bypasses any driver optimizations. " "``destinationHost`` is in this format ``host#pool`` which includes " "destination host and pool." msgstr "" # #-#-#-#-# cli_manage_shares.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# shared_file_systems_manage_shares_cli.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_manage_shares.rst:40 ../shared_file_systems_manage_shares_cli.rst:35 msgid "If the user is not an administrator, the migration fails." msgstr "" #: ../cli_networking_advanced_quotas.rst:3 msgid "Manage Networking service quotas" msgstr "" #: ../cli_networking_advanced_quotas.rst:5 msgid "" "A quota limits the number of available resources. A default quota might be " "enforced for all tenants. When you try to create more resources than the " "quota allows, an error occurs:" msgstr "" #: ../cli_networking_advanced_quotas.rst:14 msgid "" "Per-tenant quota configuration is also supported by the quota extension API. " "See :ref:`cfg_quotas_per_tenant` for details." 
msgstr "" #: ../cli_networking_advanced_quotas.rst:18 msgid "Basic quota configuration" msgstr "" #: ../cli_networking_advanced_quotas.rst:20 msgid "" "In the Networking default quota mechanism, all tenants have the same quota " "values, such as the number of resources that a tenant can create." msgstr "" #: ../cli_networking_advanced_quotas.rst:24 msgid "" "The quota value is defined in the OpenStack Networking ``neutron.conf`` " "configuration file. To disable quotas for a specific resource, such as " "network, subnet, or port, remove a corresponding item from ``quota_items``. " "This example shows the default quota values:" msgstr "" #: ../cli_networking_advanced_quotas.rst:48 msgid "" "OpenStack Networking also supports quotas for L3 resources: router and " "floating IP. Add these lines to the ``quotas`` section in the ``neutron." "conf`` file:" msgstr "" #: ../cli_networking_advanced_quotas.rst:63 #: ../cli_networking_advanced_quotas.rst:81 msgid "The ``quota_items`` option does not affect these quotas." msgstr "" #: ../cli_networking_advanced_quotas.rst:65 msgid "" "OpenStack Networking also supports quotas for security group resources: " "number of security groups and the number of rules for each security group. " "Add these lines to the ``quotas`` section in the ``neutron.conf`` file:" msgstr "" #: ../cli_networking_advanced_quotas.rst:86 msgid "Configure per-tenant quotas" msgstr "" #: ../cli_networking_advanced_quotas.rst:87 msgid "" "OpenStack Networking also supports per-tenant quota limit by quota extension " "API." 
msgstr "" #: ../cli_networking_advanced_quotas.rst:90 msgid "Use these commands to manage per-tenant quotas:" msgstr "" #: ../cli_networking_advanced_quotas.rst:93 msgid "Delete defined quotas for a specified tenant" msgstr "" #: ../cli_networking_advanced_quotas.rst:93 msgid "neutron quota-delete" msgstr "" #: ../cli_networking_advanced_quotas.rst:96 msgid "Lists defined quotas for all tenants" msgstr "" #: ../cli_networking_advanced_quotas.rst:96 msgid "neutron quota-list" msgstr "" #: ../cli_networking_advanced_quotas.rst:99 msgid "Shows quotas for a specified tenant" msgstr "" #: ../cli_networking_advanced_quotas.rst:99 msgid "neutron quota-show" msgstr "" #: ../cli_networking_advanced_quotas.rst:102 msgid "Updates quotas for a specified tenant" msgstr "" #: ../cli_networking_advanced_quotas.rst:102 msgid "neutron quota-update" msgstr "" #: ../cli_networking_advanced_quotas.rst:104 msgid "" "Only users with the ``admin`` role can change a quota value. By default, the " "default set of quotas are enforced for all tenants, so no :command:`quota-" "create` command exists." msgstr "" #: ../cli_networking_advanced_quotas.rst:108 msgid "Configure Networking to show per-tenant quotas" msgstr "" #: ../cli_networking_advanced_quotas.rst:110 msgid "Set the ``quota_driver`` option in the ``neutron.conf`` file." msgstr "" #: ../cli_networking_advanced_quotas.rst:116 msgid "" "When you set this option, the output for Networking commands shows " "``quotas``." msgstr "" #: ../cli_networking_advanced_quotas.rst:118 msgid "List Networking extensions." msgstr "" #: ../cli_networking_advanced_quotas.rst:120 msgid "To list the Networking extensions, run this command:" msgstr "" #: ../cli_networking_advanced_quotas.rst:126 msgid "" "The command shows the ``quotas`` extension, which provides per-tenant quota " "management support." msgstr "" #: ../cli_networking_advanced_quotas.rst:145 msgid "Show information for the quotas extension." 
msgstr "" #: ../cli_networking_advanced_quotas.rst:147 msgid "To show information for the ``quotas`` extension, run this command:" msgstr "" #: ../cli_networking_advanced_quotas.rst:165 msgid "" "Only some plug-ins support per-tenant quotas. Specifically, Open vSwitch, " "Linux Bridge, and VMware NSX support them, but new versions of other plug-" "ins might bring additional functionality. See the documentation for each " "plug-in." msgstr "" #: ../cli_networking_advanced_quotas.rst:171 msgid "List tenants who have per-tenant quota support." msgstr "" #: ../cli_networking_advanced_quotas.rst:173 msgid "" "The :command:`quota-list` command lists tenants for which the per-tenant " "quota is enabled. The command does not list tenants with default quota " "support. You must be an administrative user to run this command:" msgstr "" #: ../cli_networking_advanced_quotas.rst:187 msgid "Show per-tenant quota values." msgstr "" #: ../cli_networking_advanced_quotas.rst:189 msgid "" "The :command:`quota-show` command reports the current set of quota limits " "for the specified tenant. Non-administrative users can run this command " "without the :option:`--tenant_id` parameter. If per-tenant quota limits are " "not enabled for the tenant, the command shows the default set of quotas." msgstr "" #: ../cli_networking_advanced_quotas.rst:209 msgid "" "The following command shows the command output for a non-administrative user." msgstr "" #: ../cli_networking_advanced_quotas.rst:225 msgid "Update quota values for a specified tenant." msgstr "" #: ../cli_networking_advanced_quotas.rst:227 msgid "" "Use the :command:`quota-update` command to update a quota for a specified " "tenant." msgstr "" #: ../cli_networking_advanced_quotas.rst:243 msgid "You can update quotas for multiple resources through one command." 
msgstr "" #: ../cli_networking_advanced_quotas.rst:259 msgid "" "To update the limits for an L3 resource such as, router or floating IP, you " "must define new values for the quotas after the ``--`` directive." msgstr "" #: ../cli_networking_advanced_quotas.rst:263 msgid "" "This example updates the limit of the number of floating IPs for the " "specified tenant." msgstr "" #: ../cli_networking_advanced_quotas.rst:279 msgid "" "You can update the limits of multiple resources by including L2 resources " "and L3 resource through one command:" msgstr "" #: ../cli_networking_advanced_quotas.rst:297 msgid "Delete per-tenant quota values." msgstr "" #: ../cli_networking_advanced_quotas.rst:299 msgid "" "To clear per-tenant quota limits, use the :command:`quota-delete` command." msgstr "" #: ../cli_networking_advanced_quotas.rst:307 msgid "" "After you run this command, you can see that quota values for the tenant are " "reset to the default values." msgstr "" # #-#-#-#-# cli_nova_evacuate.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-node-down.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_nova_evacuate.rst:3 ../compute-node-down.rst:12 msgid "Evacuate instances" msgstr "" #: ../cli_nova_evacuate.rst:5 msgid "" "If a hardware malfunction or other error causes a cloud compute node to " "fail, you can evacuate instances to make them available again. You can " "optionally include the target host on the :command:`evacuate` command. If " "you omit the host, the scheduler chooses the target host." msgstr "" #: ../cli_nova_evacuate.rst:10 msgid "" "To preserve user data on the server disk, configure shared storage on the " "target host. When you evacuate the instance, Compute detects whether shared " "storage is available on the target host. Also, you must validate that the " "current VM host is not operational. Otherwise, the evacuation fails." 
msgstr "" #: ../cli_nova_evacuate.rst:15 msgid "To find a host for the evacuated instance, list all hosts:" msgstr "" #: ../cli_nova_evacuate.rst:21 msgid "" "Evacuate the instance. You can use the :option:`--password PWD` option to " "pass the instance password to the command. If you do not specify a password, " "the command generates and prints one after it finishes successfully. The " "following command evacuates a server from a failed host to HOST_B." msgstr "" #: ../cli_nova_evacuate.rst:31 msgid "" "The command rebuilds the instance from the original image or volume and " "returns a password. The command preserves the original configuration, which " "includes the instance ID, name, uid, IP address, and so on." msgstr "" #: ../cli_nova_evacuate.rst:43 msgid "" "To preserve the user disk data on the evacuated server, deploy Compute with " "a shared file system. To configure your system, see :ref:" "`section_configuring-compute-migrations`. The following example does not " "change the password." msgstr "" #: ../cli_nova_manage_projects_security.rst:3 msgid "Manage project security" msgstr "" #: ../cli_nova_manage_projects_security.rst:5 msgid "" "Security groups are sets of IP filter rules that are applied to all project " "instances, which define networking access to the instance. Group rules are " "project specific; project members can edit the default rules for their group " "and add new rule sets." msgstr "" #: ../cli_nova_manage_projects_security.rst:10 msgid "" "All projects have a ``default`` security group which is applied to any " "instance that has no other defined security group. Unless you change the " "default, this security group denies all incoming traffic and allows only " "outgoing traffic to your instance." msgstr "" #: ../cli_nova_manage_projects_security.rst:15 msgid "" "You can use the ``allow_same_net_traffic`` option in the ``/etc/nova/nova." "conf`` file to globally control whether the rules apply to hosts which share " "a network." 
msgstr "" #: ../cli_nova_manage_projects_security.rst:19 msgid "If set to:" msgstr "" #: ../cli_nova_manage_projects_security.rst:21 msgid "" "``True`` (default), hosts on the same subnet are not filtered and are " "allowed to pass all types of traffic between them. On a flat network, this " "allows all instances from all projects unfiltered communication. With VLAN " "networking, this allows access between instances within the same project. " "You can also simulate this setting by configuring the default security group " "to allow all traffic from the subnet." msgstr "" #: ../cli_nova_manage_projects_security.rst:28 msgid "``False``, security groups are enforced for all connections." msgstr "" #: ../cli_nova_manage_projects_security.rst:30 msgid "" "Additionally, the maximum number of rules per security group is controlled " "by the ``security_group_rules`` quota, and the number of allowed security " "groups per project is controlled by the ``security_groups`` quota (see :ref:" "`manage-quotas`)." msgstr "" #: ../cli_nova_manage_projects_security.rst:36 msgid "List and view current security groups" msgstr "" #: ../cli_nova_manage_projects_security.rst:38 msgid "" "From the command line, you can get a list of security groups for the " "project, using the :command:`nova` command:" msgstr "" #: ../cli_nova_manage_projects_security.rst:41 msgid "" "Ensure your system variables are set for the user and tenant for which you " "are checking security group rules. For example:" msgstr "" #: ../cli_nova_manage_projects_security.rst:49 msgid "Output security groups, as follows:" msgstr "" #: ../cli_nova_manage_projects_security.rst:61 msgid "View the details of a group, as follows:" msgstr "" #: ../cli_nova_manage_projects_security.rst:80 msgid "" "These rules are allow type rules as the default is deny. The first column is " "the IP protocol (one of icmp, tcp, or udp). The second and third columns " "specify the affected port range. 
The fourth column specifies the IP range in " "CIDR format. This example shows the full port range for all protocols " "allowed from all IPs." msgstr "" #: ../cli_nova_manage_projects_security.rst:87 msgid "Create a security group" msgstr "" #: ../cli_nova_manage_projects_security.rst:89 msgid "" "When adding a new security group, you should pick a descriptive but brief " "name. This name shows up in brief descriptions of the instances that use it " "where the longer description field often does not. For example, seeing that " "an instance is using security group \"http\" is much easier to understand " "than \"bobs\\_group\" or \"secgrp1\"." msgstr "" #: ../cli_nova_manage_projects_security.rst:95 msgid "" "Ensure your system variables are set for the user and tenant for which you " "are creating security group rules." msgstr "" #: ../cli_nova_manage_projects_security.rst:98 msgid "Add the new security group, as follows:" msgstr "" #: ../cli_nova_manage_projects_security.rst:115 msgid "Add a new group rule, as follows:" msgstr "" #: ../cli_nova_manage_projects_security.rst:121 msgid "" "The arguments are positional, and the ``from-port`` and ``to-port`` " "arguments specify the local port range connections are allowed to access, " "not the source and destination ports of the connection. For example:" msgstr "" #: ../cli_nova_manage_projects_security.rst:135 msgid "" "You can create complex rule sets by creating additional rules. For example, " "if you want to pass both HTTP and HTTPS traffic, run:" msgstr "" #: ../cli_nova_manage_projects_security.rst:147 msgid "" "Despite only outputting the newly added rule, this operation is additive " "(both rules are created and enforced)." 
msgstr "" #: ../cli_nova_manage_projects_security.rst:150 msgid "View all rules for the new security group, as follows:" msgstr "" #: ../cli_nova_manage_projects_security.rst:163 msgid "Delete a security group" msgstr "" #: ../cli_nova_manage_projects_security.rst:165 msgid "" "Ensure your system variables are set for the user and tenant for which you " "are deleting a security group." msgstr "" #: ../cli_nova_manage_projects_security.rst:168 msgid "Delete the new security group, as follows:" msgstr "" #: ../cli_nova_manage_projects_security.rst:181 msgid "Create security group rules for a cluster of instances" msgstr "" #: ../cli_nova_manage_projects_security.rst:183 msgid "" "Source Groups are a special, dynamic way of defining the CIDR of allowed " "sources. The user specifies a Source Group (Security Group name), and all " "the user's other Instances using the specified Source Group are selected " "dynamically. This alleviates the need for individual rules to allow each new " "member of the cluster." msgstr "" #: ../cli_nova_manage_projects_security.rst:189 msgid "" "Make sure to set the system variables for the user and tenant for which you " "are creating a security group rule." msgstr "" #: ../cli_nova_manage_projects_security.rst:192 msgid "Add a source group, as follows:" msgstr "" #: ../cli_nova_manage_projects_security.rst:204 msgid "" "The ``cluster`` rule allows SSH access from any other instance that uses the " "``global_http`` group." msgstr "" #: ../cli_nova_manage_services.rst:3 msgid "Manage Compute services" msgstr "" #: ../cli_nova_manage_services.rst:5 msgid "" "You can enable and disable Compute services. The following examples disable " "and enable the ``nova-compute`` service." 
msgstr "" #: ../cli_nova_manage_services.rst:9 msgid "List the Compute services:" msgstr "" #: ../cli_nova_manage_services.rst:25 msgid "Disable a nova service:" msgstr "" #: ../cli_nova_manage_services.rst:36 ../cli_nova_manage_services.rst:63 msgid "Check the service list:" msgstr "" #: ../cli_nova_manage_services.rst:52 msgid "Enable the service:" msgstr "" #: ../cli_nova_migrate.rst:3 msgid "Migrate a single instance to another compute host" msgstr "" #: ../cli_nova_migrate.rst:5 msgid "" "When you want to move an instance from one compute host to another, you can " "use the :command:`nova migrate` command. The scheduler chooses the " "destination compute host based on its settings. This process does not assume " "that the instance has shared storage available on the target host. If you " "are using SSH tunneling, you must ensure that each node is configured with " "SSH key authentication so that the Compute service can use SSH to move disks " "to other nodes. For more information, see :ref:`clinovamigratecfgssh`." msgstr "" #: ../cli_nova_migrate.rst:14 msgid "To list the VMs you want to migrate, run:" msgstr "" #: ../cli_nova_migrate.rst:20 msgid "" "After selecting a VM from the list, run this command where :guilabel:`VM_ID` " "is set to the ID in the list returned in the previous step:" msgstr "" #: ../cli_nova_migrate.rst:27 msgid "Use the :command:`nova migrate` command." msgstr "" #: ../cli_nova_migrate.rst:33 msgid "To migrate an instance and watch the status, use this example script:" msgstr "" #: ../cli_nova_migrate.rst:72 msgid "" "If you see this error, it means you are either trying the command with the " "wrong credentials, such as a non-admin user, or the ``policy.json`` file " "prevents migration for your user:" msgstr "" #: ../cli_nova_migrate.rst:77 msgid "" "``ERROR (Forbidden): Policy doesn't allow compute_extension:admin_actions:" "migrate to be performed. 
(HTTP 403)``" msgstr "" #: ../cli_nova_migrate.rst:82 msgid "" "If you see an error similar to this message, SSH tunneling was not set up " "between the compute nodes:" msgstr "" #: ../cli_nova_migrate.rst:85 msgid "``ProcessExecutionError: Unexpected error while running command.``" msgstr "" #: ../cli_nova_migrate.rst:87 msgid "``Stderr: u Host key verification failed.\\r\\n``" msgstr "" #: ../cli_nova_migrate.rst:89 msgid "" "The instance is booted from a new host, but preserves its configuration " "including its ID, name, any metadata, IP address, and other properties." msgstr "" #: ../cli_nova_migrate_cfg_ssh.rst:5 msgid "Configure SSH between compute nodes" msgstr "" #: ../cli_nova_migrate_cfg_ssh.rst:7 msgid "" "If you are resizing or migrating an instance between hypervisors, you might " "encounter an SSH (Permission denied) error. Ensure that each node is " "configured with SSH key authentication so that the Compute service can use " "SSH to move disks to other nodes." msgstr "" #: ../cli_nova_migrate_cfg_ssh.rst:14 msgid "" "To share a key pair between compute nodes, complete the following steps:" msgstr "" #: ../cli_nova_migrate_cfg_ssh.rst:17 msgid "" "On the first node, obtain a key pair (public key and private key). Use the " "root key that is in the ``/root/.ssh/id_rsa`` and ``/root/.ssh/id_rsa.pub`` " "files or generate a new key pair." msgstr "" #: ../cli_nova_migrate_cfg_ssh.rst:23 msgid "Run :command:`setenforce 0` to put SELinux into permissive mode." msgstr "" #: ../cli_nova_migrate_cfg_ssh.rst:26 msgid "Enable login abilities for the nova user:" msgstr "" #: ../cli_nova_migrate_cfg_ssh.rst:32 msgid "Switch to the nova account." msgstr "" #: ../cli_nova_migrate_cfg_ssh.rst:38 msgid "" "As root, create the folder that is needed by SSH and place the private key " "that you obtained in step 1 into this folder:" msgstr "" #: ../cli_nova_migrate_cfg_ssh.rst:49 msgid "Repeat steps 2-4 on each node." 
msgstr "" #: ../cli_nova_migrate_cfg_ssh.rst:53 msgid "" "The nodes must share the same key pair, so do not generate a new key pair " "for any subsequent nodes." msgstr "" #: ../cli_nova_migrate_cfg_ssh.rst:56 msgid "From the first node, where you created the SSH key, run:" msgstr "" #: ../cli_nova_migrate_cfg_ssh.rst:62 msgid "" "This command installs your public key in a remote machine's " "``authorized_keys`` file." msgstr "" #: ../cli_nova_migrate_cfg_ssh.rst:64 msgid "" "Ensure that the nova user can now log in to each node without using a " "password:" msgstr "" #: ../cli_nova_migrate_cfg_ssh.rst:73 msgid "As root on each node, restart both libvirt and the Compute services:" msgstr "" #: ../cli_nova_numa_libvirt.rst:3 msgid "Consider NUMA topology when booting instances" msgstr "" #: ../cli_nova_numa_libvirt.rst:5 msgid "" "NUMA topology can exist on both the physical hardware of the host, and the " "virtual hardware of the instance. OpenStack Compute uses libvirt to tune " "instances to take advantage of NUMA topologies. The libvirt driver boot " "process looks at the NUMA topology field of both the instance and the host " "it is being booted on, and uses that information to generate an appropriate " "configuration." msgstr "" #: ../cli_nova_numa_libvirt.rst:12 msgid "" "If the host is NUMA capable, but the instance has not requested a NUMA " "topology, Compute attempts to pack the instance into a single cell. If this " "fails, though, Compute will not continue to try." msgstr "" #: ../cli_nova_numa_libvirt.rst:16 msgid "" "If the host is NUMA capable, and the instance has requested a specific NUMA " "topology, Compute will try to pin the vCPUs of different NUMA cells on the " "instance to the corresponding NUMA cells on the host. It will also expose " "the NUMA topology of the instance to the guest OS." 
msgstr "" #: ../cli_nova_numa_libvirt.rst:21 msgid "" "If you want Compute to pin a particular vCPU as part of this process, set " "the ``vcpu_pin_set`` parameter in the ``nova.conf`` configuration file. For " "more information about the ``vcpu_pin_set`` parameter, see the Configuration " "Reference Guide." msgstr "" #: ../cli_nova_specify_host.rst:3 msgid "Select hosts where instances are launched" msgstr "" #: ../cli_nova_specify_host.rst:5 msgid "" "With the appropriate permissions, you can select which host instances are " "launched on and which roles can boot instances on this host." msgstr "" #: ../cli_nova_specify_host.rst:9 msgid "" "To select the host where instances are launched, use the :option:`--" "availability_zone ZONE:HOST` parameter on the :command:`nova boot` command." msgstr "" #: ../cli_nova_specify_host.rst:19 msgid "" "To specify which roles can launch an instance on a specified host, enable " "the ``create:forced_host`` option in the ``policy.json`` file. By default, " "this option is enabled for only the admin role." msgstr "" #: ../cli_nova_specify_host.rst:24 msgid "" "To view the list of valid compute hosts, use the :command:`nova hypervisor-" "list` command." msgstr "" #: ../cli_set_compute_quotas.rst:3 msgid "Manage Compute service quotas" msgstr "" #: ../cli_set_compute_quotas.rst:5 msgid "" "As an administrative user, you can use the :command:`nova quota-*` commands, " "which are provided by the ``python-novaclient`` package, to update the " "Compute service quotas for a specific tenant or tenant user, as well as " "update the quota defaults for a new tenant." 
msgstr "" #: ../cli_set_compute_quotas.rst:10 msgid "**Compute quota descriptions**" msgstr "" #: ../cli_set_compute_quotas.rst:16 msgid "Quota name" msgstr "" # #-#-#-#-# cli_set_compute_quotas.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-manage-volumes.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-networking-nova.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-remote-console-access.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-security.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# database.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_adv-features.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_arch.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_introduction.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_multi-dhcp-agents.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# objectstorage-troubleshoot.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# shared_file_systems_crud_share.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_set_compute_quotas.rst:17 ../compute-flavors.rst:34 #: ../compute-manage-volumes.rst:14 ../compute-networking-nova.rst:310 #: ../compute-networking-nova.rst:507 ../compute-remote-console-access.rst:50 #: ../compute-remote-console-access.rst:135 ../compute-security.rst:104 #: ../database.rst:119 ../database.rst:159 ../networking_adv-features.rst:49 #: ../networking_adv-features.rst:132 ../networking_adv-features.rst:714 #: ../networking_arch.rst:29 ../networking_introduction.rst:36 #: ../networking_introduction.rst:129 ../networking_multi-dhcp-agents.rst:40 #: ../objectstorage-troubleshoot.rst:88 #: ../shared_file_systems_crud_share.rst:44 #: ../shared_file_systems_crud_share.rst:431 ../telemetry-measurements.rst:31 msgid "Description" msgstr "" #: 
../cli_set_compute_quotas.rst:18 msgid "cores" msgstr "" #: ../cli_set_compute_quotas.rst:19 msgid "Number of instance cores (VCPUs) allowed per tenant." msgstr "" #: ../cli_set_compute_quotas.rst:20 msgid "fixed-ips" msgstr "" #: ../cli_set_compute_quotas.rst:21 msgid "" "Number of fixed IP addresses allowed per tenant. This number must be equal " "to or greater than the number of allowed instances." msgstr "" #: ../cli_set_compute_quotas.rst:24 msgid "floating-ips" msgstr "" #: ../cli_set_compute_quotas.rst:25 msgid "Number of floating IP addresses allowed per tenant." msgstr "" #: ../cli_set_compute_quotas.rst:26 msgid "injected-file-content-bytes" msgstr "" #: ../cli_set_compute_quotas.rst:27 msgid "Number of content bytes allowed per injected file." msgstr "" #: ../cli_set_compute_quotas.rst:28 msgid "injected-file-path-bytes" msgstr "" #: ../cli_set_compute_quotas.rst:29 msgid "Length of injected file path." msgstr "" #: ../cli_set_compute_quotas.rst:30 msgid "injected-files" msgstr "" #: ../cli_set_compute_quotas.rst:31 msgid "Number of injected files allowed per tenant." msgstr "" #: ../cli_set_compute_quotas.rst:32 msgid "instances" msgstr "" #: ../cli_set_compute_quotas.rst:33 msgid "Number of instances allowed per tenant." msgstr "" #: ../cli_set_compute_quotas.rst:34 msgid "key-pairs" msgstr "" #: ../cli_set_compute_quotas.rst:35 msgid "Number of key pairs allowed per user." msgstr "" #: ../cli_set_compute_quotas.rst:36 msgid "metadata-items" msgstr "" #: ../cli_set_compute_quotas.rst:37 msgid "Number of metadata items allowed per instance." msgstr "" #: ../cli_set_compute_quotas.rst:38 msgid "ram" msgstr "" #: ../cli_set_compute_quotas.rst:39 msgid "Megabytes of instance ram allowed per tenant." msgstr "" #: ../cli_set_compute_quotas.rst:40 msgid "security-groups" msgstr "" #: ../cli_set_compute_quotas.rst:41 msgid "Number of security groups per tenant." 
msgstr "" #: ../cli_set_compute_quotas.rst:42 msgid "security-group-rules" msgstr "" #: ../cli_set_compute_quotas.rst:43 msgid "Number of rules per security group." msgstr "" #: ../cli_set_compute_quotas.rst:46 msgid "View and update Compute quotas for a tenant (project)" msgstr "" #: ../cli_set_compute_quotas.rst:49 msgid "To view and update default quota values" msgstr "" #: ../cli_set_compute_quotas.rst:50 msgid "List all default quotas for all tenants:" msgstr "" #: ../cli_set_compute_quotas.rst:78 msgid "Update a default value for a new tenant." msgstr "" #: ../cli_set_compute_quotas.rst:91 msgid "To view quota values for an existing tenant (project)" msgstr "" #: ../cli_set_compute_quotas.rst:93 msgid "Place the tenant ID in a usable variable." msgstr "" #: ../cli_set_compute_quotas.rst:99 msgid "List the currently set quota values for a tenant." msgstr "" #: ../cli_set_compute_quotas.rst:128 msgid "To update quota values for an existing tenant (project)" msgstr "" #: ../cli_set_compute_quotas.rst:130 msgid "Obtain the tenant ID." msgstr "" #: ../cli_set_compute_quotas.rst:136 msgid "Update a particular quota value." msgstr "" #: ../cli_set_compute_quotas.rst:167 ../cli_set_compute_quotas.rst:265 msgid "To view a list of options for the :command:`quota-update` command, run:" msgstr "" #: ../cli_set_compute_quotas.rst:174 msgid "View and update Compute quotas for a tenant user" msgstr "" #: ../cli_set_compute_quotas.rst:177 msgid "To view quota values for a tenant user" msgstr "" #: ../cli_set_compute_quotas.rst:179 ../cli_set_compute_quotas.rst:222 msgid "Place the user ID in a usable variable." msgstr "" #: ../cli_set_compute_quotas.rst:185 ../cli_set_compute_quotas.rst:228 msgid "Place the user's tenant ID in a usable variable, as follows:" msgstr "" #: ../cli_set_compute_quotas.rst:191 msgid "List the currently set quota values for a tenant user." 
msgstr "" #: ../cli_set_compute_quotas.rst:220 msgid "To update quota values for a tenant user" msgstr "" #: ../cli_set_compute_quotas.rst:234 msgid "Update a particular quota value, as follows:" msgstr "" #: ../cli_set_compute_quotas.rst:272 msgid "To display the current quota usage for a tenant user" msgstr "" #: ../cli_set_compute_quotas.rst:274 msgid "" "Use :command:`nova absolute-limits` to get a list of the current quota " "values and the current quota usage:" msgstr "" #: ../cli_set_quotas.rst:5 msgid "Manage quotas" msgstr "" # #-#-#-#-# cli_set_quotas.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_set_quotas.pot (Administrator Guide 0.9) #-#-#-#-# #: ../cli_set_quotas.rst:7 ../dashboard_set_quotas.rst:10 msgid "" "To prevent system capacities from being exhausted without notification, you " "can set up quotas. Quotas are operational limits. For example, the number of " "gigabytes allowed for each tenant can be controlled so that cloud resources " "are optimized. Quotas can be enforced at both the tenant (or project) and " "the tenant-user level." msgstr "" #: ../cli_set_quotas.rst:14 msgid "" "Using the command-line interface, you can manage quotas for the OpenStack " "Compute service, the OpenStack Block Storage service, and the OpenStack " "Networking service." msgstr "" #: ../cli_set_quotas.rst:18 msgid "" "The cloud operator typically changes default values because a tenant " "requires more than ten volumes or 1 TB on a compute node." msgstr "" #: ../cli_set_quotas.rst:24 msgid "To view all tenants (projects), run:" msgstr "" #: ../cli_set_quotas.rst:38 msgid "To display all current users for a tenant, run:" msgstr "" #: ../compute-admin-password-injection.rst:5 msgid "Injecting the administrator password" msgstr "" #: ../compute-admin-password-injection.rst:7 msgid "" "Compute can generate a random administrator (root) password and inject that " "password into an instance. 
If this feature is enabled, users can run :" "command:`ssh` to an instance without an :command:`ssh` keypair. The random " "password appears in the output of the :command:`nova boot` command. You can " "also view and set the admin password from the dashboard." msgstr "" #: ../compute-admin-password-injection.rst:13 msgid "**Password injection using the dashboard**" msgstr "" #: ../compute-admin-password-injection.rst:15 msgid "" "By default, the dashboard will display the ``admin`` password and allow the " "user to modify it." msgstr "" #: ../compute-admin-password-injection.rst:18 msgid "" "If you do not want to support password injection, disable the password " "fields by editing the dashboard's ``local_settings.py`` file." msgstr "" #: ../compute-admin-password-injection.rst:28 msgid "**Password injection on libvirt-based hypervisors**" msgstr "" #: ../compute-admin-password-injection.rst:30 msgid "" "For hypervisors that use the libvirt back end (such as KVM, QEMU, and LXC), " "admin password injection is disabled by default. To enable it, set this " "option in ``/etc/nova/nova.conf``:" msgstr "" #: ../compute-admin-password-injection.rst:39 msgid "" "When enabled, Compute will modify the password of the admin account by " "editing the ``/etc/shadow`` file inside the virtual machine instance." msgstr "" #: ../compute-admin-password-injection.rst:44 msgid "" "Users can only use :command:`ssh` to access the instance by using the admin " "password if the virtual machine image is a Linux distribution, and it has " "been configured to allow users to use :command:`ssh` as the root user. This " "is not the case for `Ubuntu cloud images `_ " "which, by default, do not allow users to use :command:`ssh` to access the " "root account." 
msgstr "" #: ../compute-admin-password-injection.rst:51 msgid "**Password injection and XenAPI (XenServer/XCP)**" msgstr "" #: ../compute-admin-password-injection.rst:53 msgid "" "When using the XenAPI hypervisor back end, Compute uses the XenAPI agent to " "inject passwords into guests. The virtual machine image must be configured " "with the agent for password injection to work." msgstr "" #: ../compute-admin-password-injection.rst:57 msgid "**Password injection and Windows images (all hypervisors)**" msgstr "" #: ../compute-admin-password-injection.rst:59 msgid "" "For Windows virtual machines, configure the Windows image to retrieve the " "admin password on boot by installing an agent such as `cloudbase-init " "`_." msgstr "" #: ../compute-configuring-migrations.rst:5 msgid "Configure migrations" msgstr "" #: ../compute-configuring-migrations.rst:12 msgid "" "Only administrators can perform live migrations. If your cloud is configured " "to use cells, you can perform live migration within but not between cells." msgstr "" #: ../compute-configuring-migrations.rst:16 msgid "" "Migration enables an administrator to move a virtual-machine instance from " "one compute host to another. This feature is useful when a compute host " "requires maintenance. Migration can also be useful to redistribute the load " "when many VM instances are running on a specific physical machine." msgstr "" #: ../compute-configuring-migrations.rst:22 msgid "The migration types are:" msgstr "" #: ../compute-configuring-migrations.rst:24 msgid "" "**Non-live migration** (sometimes referred to simply as 'migration'). The " "instance is shut down for a period of time to be moved to another " "hypervisor. In this case, the instance recognizes that it was rebooted." msgstr "" #: ../compute-configuring-migrations.rst:29 msgid "" "**Live migration** (or 'true live migration'). Almost no instance downtime. " "Useful when the instances must be kept running during the migration. 
The " "different types of live migration are:" msgstr "" #: ../compute-configuring-migrations.rst:33 msgid "" "**Shared storage-based live migration**. Both hypervisors have access to " "shared storage." msgstr "" #: ../compute-configuring-migrations.rst:36 msgid "" "**Block live migration**. No shared storage is required. Incompatible with " "read-only devices such as CD-ROMs and `Configuration Drive (config\\_drive) " "`_." msgstr "" #: ../compute-configuring-migrations.rst:40 msgid "" "**Volume-backed live migration**. Instances are backed by volumes rather " "than ephemeral disk, no shared storage is required, and migration is " "supported (currently only available for libvirt-based hypervisors)." msgstr "" #: ../compute-configuring-migrations.rst:45 msgid "" "The following sections describe how to configure your hosts and compute " "nodes for migrations by using the KVM and XenServer hypervisors." msgstr "" #: ../compute-configuring-migrations.rst:51 msgid "KVM-Libvirt" msgstr "" #: ../compute-configuring-migrations.rst:59 #: ../compute-configuring-migrations.rst:330 msgid "Shared storage" msgstr "" #: ../compute-configuring-migrations.rst:64 #: ../compute-configuring-migrations.rst:332 msgid "**Prerequisites**" msgstr "" #: ../compute-configuring-migrations.rst:66 msgid "**Hypervisor:** KVM with libvirt" msgstr "" #: ../compute-configuring-migrations.rst:68 msgid "" "**Shared storage:** ``NOVA-INST-DIR/instances/`` (for example, ``/var/lib/" "nova/instances``) has to be mounted by shared storage. This guide uses NFS " "but other options, including the `OpenStack Gluster Connector `_ are available." msgstr "" #: ../compute-configuring-migrations.rst:74 msgid "**Instances:** Instance can be migrated with iSCSI-based volumes." 
msgstr "" #: ../compute-configuring-migrations.rst:76 msgid "**Notes**" msgstr "" #: ../compute-configuring-migrations.rst:78 msgid "" "Because the Compute service does not use the libvirt live migration " "functionality by default, guests are suspended before migration and might " "experience several minutes of downtime. For details, see `Enabling true live " "migration`." msgstr "" #: ../compute-configuring-migrations.rst:83 msgid "" "Compute calculates the amount of downtime required using the RAM size of the " "disk being migrated, in accordance with the ``live_migration_downtime`` " "configuration parameters. Migration downtime is measured in steps, with an " "exponential backoff between each step. This means that the maximum downtime " "between each step starts off small, and is increased in ever larger amounts " "as Compute waits for the migration to complete. This gives the guest a " "chance to complete the migration successfully, with a minimum amount of " "downtime." msgstr "" #: ../compute-configuring-migrations.rst:92 msgid "" "This guide assumes the default value for ``instances_path`` in your ``nova." "conf`` file (``NOVA-INST-DIR/instances``). If you have changed the " "``state_path`` or ``instances_path`` variables, modify the commands " "accordingly." msgstr "" #: ../compute-configuring-migrations.rst:97 msgid "" "You must specify ``vncserver_listen=0.0.0.0`` or live migration will not " "work correctly." msgstr "" #: ../compute-configuring-migrations.rst:100 msgid "" "You must specify the ``instances_path`` in each node that runs ``nova-" "compute``. The mount point for ``instances_path`` must be the same value for " "each node, or live migration will not work correctly." msgstr "" #: ../compute-configuring-migrations.rst:108 msgid "Example Compute installation environment" msgstr "" #: ../compute-configuring-migrations.rst:110 msgid "" "Prepare at least three servers. 
In this example, we refer to the servers as " "``HostA``, ``HostB``, and ``HostC``:" msgstr "" #: ../compute-configuring-migrations.rst:113 msgid "" "``HostA`` is the Cloud Controller, and should run these services: ``nova-" "api``, ``nova-scheduler``, ``nova-network``, ``cinder-volume``, and ``nova-" "objectstore``." msgstr "" #: ../compute-configuring-migrations.rst:117 msgid "" "``HostB`` and ``HostC`` are the compute nodes that run ``nova-compute``." msgstr "" #: ../compute-configuring-migrations.rst:120 msgid "" "Ensure that ``NOVA-INST-DIR`` (set with ``state_path`` in the ``nova.conf`` " "file) is the same on all hosts." msgstr "" #: ../compute-configuring-migrations.rst:123 msgid "" "In this example, ``HostA`` is the NFSv4 server that exports ``NOVA-INST-DIR/" "instances`` directory. ``HostB`` and ``HostC`` are NFSv4 clients that mount " "``HostA``." msgstr "" #: ../compute-configuring-migrations.rst:127 msgid "**Configuring your system**" msgstr "" #: ../compute-configuring-migrations.rst:129 msgid "" "Configure your DNS or ``/etc/hosts`` and ensure it is consistent across all " "hosts. Make sure that the three hosts can perform name resolution with each " "other. As a test, use the :command:`ping` command to ping each host from one " "another:" msgstr "" #: ../compute-configuring-migrations.rst:140 msgid "" "Ensure that the UID and GID of your Compute and libvirt users are identical " "between each of your servers. This ensures that the permissions on the NFS " "mount works correctly." msgstr "" #: ../compute-configuring-migrations.rst:144 msgid "" "Ensure you can access SSH without a password and without " "StrictHostKeyChecking between ``HostB`` and ``HostC`` as ``nova`` user (set " "with the owner of ``nova-compute`` service). Direct access from one compute " "host to another is needed to copy the VM file across. It is also needed to " "detect if the source and target compute nodes share a storage subsystem." 
msgstr "" #: ../compute-configuring-migrations.rst:151 msgid "" "Export ``NOVA-INST-DIR/instances`` from ``HostA``, and ensure it is readable " "and writable by the Compute user on ``HostB`` and ``HostC``." msgstr "" #: ../compute-configuring-migrations.rst:154 msgid "" "For more information, see: `SettingUpNFSHowTo `_ or `CentOS/Red Hat: Setup NFS v4.0 File " "Server `_" msgstr "" #: ../compute-configuring-migrations.rst:157 msgid "" "Configure the NFS server at ``HostA`` by adding the following line to the ``/" "etc/exports`` file:" msgstr "" #: ../compute-configuring-migrations.rst:164 msgid "" "Change the subnet mask (``255.255.0.0``) to the appropriate value to include " "the IP addresses of ``HostB`` and ``HostC``. Then restart the ``NFS`` server:" msgstr "" #: ../compute-configuring-migrations.rst:173 msgid "" "On both compute nodes, enable the ``execute/search`` bit on your shared " "directory to allow qemu to be able to use the images within the directories. " "On all hosts, run the following command:" msgstr "" #: ../compute-configuring-migrations.rst:181 msgid "" "Configure NFS on ``HostB`` and ``HostC`` by adding the following line to the " "``/etc/fstab`` file" msgstr "" #: ../compute-configuring-migrations.rst:188 msgid "Ensure that you can mount the exported directory" msgstr "" #: ../compute-configuring-migrations.rst:194 msgid "Check that ``HostA`` can see the ``NOVA-INST-DIR/instances/`` directory" msgstr "" #: ../compute-configuring-migrations.rst:202 msgid "" "Perform the same check on ``HostB`` and ``HostC``, paying special attention " "to the permissions (Compute should be able to write)" msgstr "" #: ../compute-configuring-migrations.rst:220 msgid "" "Update the libvirt configurations so that the calls can be made securely. " "These methods enable remote access over TCP and are not documented here." 
msgstr "" #: ../compute-configuring-migrations.rst:224 msgid "SSH tunnel to libvirtd's UNIX socket" msgstr "" #: ../compute-configuring-migrations.rst:226 msgid "libvirtd TCP socket, with GSSAPI/Kerberos for auth+data encryption" msgstr "" #: ../compute-configuring-migrations.rst:228 msgid "" "libvirtd TCP socket, with TLS for encryption and x509 client certs for " "authentication" msgstr "" #: ../compute-configuring-migrations.rst:231 msgid "" "libvirtd TCP socket, with TLS for encryption and Kerberos for authentication" msgstr "" #: ../compute-configuring-migrations.rst:234 msgid "" "Restart ``libvirt``. After you run the command, ensure that libvirt is " "successfully restarted" msgstr "" #: ../compute-configuring-migrations.rst:243 msgid "" "Configure your firewall to allow libvirt to communicate between nodes. By " "default, libvirt listens on TCP port 16509, and an ephemeral TCP range from " "49152 to 49261 is used for the KVM communications. Based on the secure " "remote access TCP configuration you chose, be careful which ports you open, " "and always understand who has access. For information about ports that are " "used with libvirt, see the `libvirt documentation `_." msgstr "" #: ../compute-configuring-migrations.rst:251 msgid "" "Configure the downtime required for the migration by adjusting these " "parameters in the ``nova.conf`` file:" msgstr "" #: ../compute-configuring-migrations.rst:260 msgid "" "The ``live_migration_downtime`` parameter sets the maximum permitted " "downtime for a live migration, in milliseconds. This setting defaults to 500 " "milliseconds." msgstr "" #: ../compute-configuring-migrations.rst:264 msgid "" "The ``live_migration_downtime_steps`` parameter sets the total number of " "incremental steps to reach the maximum downtime value. This setting defaults " "to 10 steps." 
msgstr "" #: ../compute-configuring-migrations.rst:268 msgid "" "The ``live_migration_downtime_delay`` parameter sets the amount of time to " "wait between each step, in seconds. This setting defaults to 75 seconds." msgstr "" #: ../compute-configuring-migrations.rst:271 msgid "" "You can now configure other options for live migration. In most cases, you " "will not need to configure any options. For advanced configuration options, " "see the `OpenStack Configuration Reference Guide `_." msgstr "" #: ../compute-configuring-migrations.rst:280 msgid "Enabling true live migration" msgstr "" #: ../compute-configuring-migrations.rst:282 msgid "" "Prior to the Kilo release, the Compute service did not use the libvirt live " "migration function by default. To enable this function, add the following " "line to the ``[libvirt]`` section of the ``nova.conf`` file:" msgstr "" #: ../compute-configuring-migrations.rst:290 msgid "" "On versions older than Kilo, the Compute service does not use libvirt's live " "migration by default because there is a risk that the migration process will " "never end. This can happen if the guest operating system uses blocks on the " "disk faster than they can be migrated." msgstr "" #: ../compute-configuring-migrations.rst:298 #: ../compute-configuring-migrations.rst:403 msgid "Block migration" msgstr "" #: ../compute-configuring-migrations.rst:300 msgid "" "Configuring KVM for block migration is exactly the same as the above " "configuration in :ref:`configuring-migrations-kvm-shared-storage` the " "section called shared storage, except that ``NOVA-INST-DIR/instances`` is " "local to each host rather than shared. No NFS client or server configuration " "is required." msgstr "" #: ../compute-configuring-migrations.rst:308 #: ../compute-configuring-migrations.rst:412 msgid "" "To use block migration, you must use the :option:`--block-migrate` parameter " "with the live migration command." 
msgstr "" #: ../compute-configuring-migrations.rst:311 msgid "" "Block migration is incompatible with read-only devices such as CD-ROMs and " "`Configuration Drive (config_drive) `_." msgstr "" #: ../compute-configuring-migrations.rst:314 msgid "" "Since the ephemeral drives are copied over the network in block migration, " "migrations of instances with heavy I/O loads may never complete if the " "drives are writing faster than the data can be copied over the network." msgstr "" #: ../compute-configuring-migrations.rst:322 msgid "XenServer" msgstr "" #: ../compute-configuring-migrations.rst:334 msgid "" "**Compatible XenServer hypervisors**. For more information, see the " "`Requirements for Creating Resource Pools `_ " "section of the XenServer Administrator's Guide." msgstr "" #: ../compute-configuring-migrations.rst:338 msgid "**Shared storage**. An NFS export, visible to all XenServer hosts." msgstr "" #: ../compute-configuring-migrations.rst:342 msgid "" "For the supported NFS versions, see the `NFS VHD `_ section of the " "XenServer Administrator's Guide." msgstr "" #: ../compute-configuring-migrations.rst:346 msgid "" "To use shared storage live migration with XenServer hypervisors, the hosts " "must be joined to a XenServer pool. To create that pool, a host aggregate " "must be created with specific metadata. This metadata is used by the XAPI " "plug-ins to establish the pool." msgstr "" #: ../compute-configuring-migrations.rst:351 msgid "**Using shared storage live migrations with XenServer Hypervisors**" msgstr "" #: ../compute-configuring-migrations.rst:353 msgid "" "Add an NFS VHD storage to your master XenServer, and set it as the default " "storage repository. For more information, see NFS VHD in the XenServer " "Administrator's Guide." msgstr "" #: ../compute-configuring-migrations.rst:357 msgid "" "Configure all compute nodes to use the default storage repository (``sr``) " "for pool operations. 
Add this line to your ``nova.conf`` configuration files " "on all compute nodes:" msgstr "" #: ../compute-configuring-migrations.rst:365 msgid "" "Create a host aggregate. This command creates the aggregate, and then " "displays a table that contains the ID of the new aggregate" msgstr "" #: ../compute-configuring-migrations.rst:372 msgid "Add metadata to the aggregate, to mark it as a hypervisor pool" msgstr "" #: ../compute-configuring-migrations.rst:380 msgid "Make the first compute node part of that aggregate" msgstr "" #: ../compute-configuring-migrations.rst:386 msgid "The host is now part of a XenServer pool." msgstr "" #: ../compute-configuring-migrations.rst:388 msgid "Add hosts to the pool" msgstr "" #: ../compute-configuring-migrations.rst:396 msgid "" "The added compute node and the host will shut down to join the host to the " "XenServer pool. The operation will fail if any server other than the compute " "node is running or suspended on the host." msgstr "" #: ../compute-configuring-migrations.rst:405 msgid "" "**Compatible XenServer hypervisors**. The hypervisors must support the " "Storage XenMotion feature. See your XenServer manual to make sure your " "edition has this feature." msgstr "" #: ../compute-configuring-migrations.rst:415 msgid "" "Block migration works only with EXT local storage repositories, and the " "server must not have any volumes attached." msgstr "" #: ../compute-default-ports.rst:5 msgid "Compute service node firewall requirements" msgstr "" #: ../compute-default-ports.rst:7 msgid "" "Console connections for virtual machines, whether direct or through a proxy, " "are received on ports ``5900`` to ``5999``. The firewall on each Compute " "service node must allow network traffic on these ports." msgstr "" #: ../compute-default-ports.rst:11 msgid "" "This procedure modifies the iptables firewall to allow incoming connections " "to the Compute services."
msgstr "" #: ../compute-default-ports.rst:14 msgid "**Configuring the service-node firewall**" msgstr "" #: ../compute-default-ports.rst:16 msgid "Log in to the server that hosts the Compute service, as root." msgstr "" #: ../compute-default-ports.rst:18 msgid "" "Edit the ``/etc/sysconfig/iptables`` file, to add an INPUT rule that allows " "TCP traffic on ports from ``5900`` to ``5999``. Make sure the new rule " "appears before any INPUT rules that REJECT traffic:" msgstr "" #: ../compute-default-ports.rst:26 msgid "" "Save the changes to the ``/etc/sysconfig/iptables`` file, and restart the " "``iptables`` service to pick up the changes:" msgstr "" #: ../compute-default-ports.rst:33 msgid "Repeat this process for each Compute service node." msgstr "" #: ../compute-euca2ools.rst:5 msgid "Managing the cloud with euca2ools" msgstr "" #: ../compute-euca2ools.rst:7 msgid "" "The ``euca2ools`` command-line tool provides a command line interface to EC2 " "API calls. For more information, see the `Official Eucalyptus Documentation " "`_." msgstr "" #: ../compute-flavors.rst:5 msgid "Flavors" msgstr "" #: ../compute-flavors.rst:7 msgid "" "Admin users can use the :command:`openstack flavor` command to customize and " "manage flavors. To see information for this command, run:" msgstr "" #: ../compute-flavors.rst:23 msgid "" "Configuration rights can be delegated to additional users by redefining the " "access controls for ``compute_extension:flavormanage`` in ``/etc/nova/policy." "json`` on the ``nova-api`` server." msgstr "" #: ../compute-flavors.rst:28 msgid "" "You can modify an existing flavor from the :guilabel:`Edit Flavor` button in " "the Dashboard." msgstr "" #: ../compute-flavors.rst:31 msgid "Flavors define these elements:" msgstr "" #: ../compute-flavors.rst:34 msgid "Element" msgstr "" #: ../compute-flavors.rst:36 msgid "" "A descriptive name. XX.SIZE_NAME is typically not required, though some " "third party tools may rely on it." 
msgstr "" #: ../compute-flavors.rst:39 msgid "Instance memory in megabytes." msgstr "" #: ../compute-flavors.rst:41 msgid "Disk" msgstr "" #: ../compute-flavors.rst:41 msgid "" "Virtual root disk size in gigabytes. This is an ephemeral disk that the base " "image is copied into. When booting from a persistent volume it is not used. " "The \"0\" size is a special case which uses the native base image size as " "the size of the ephemeral root volume." msgstr "" #: ../compute-flavors.rst:47 msgid "Ephemeral" msgstr "" #: ../compute-flavors.rst:47 msgid "" "Specifies the size of a secondary ephemeral data disk. This is an empty, " "unformatted disk and exists only for the life of the instance." msgstr "" #: ../compute-flavors.rst:51 msgid "Optional swap space allocation for the instance." msgstr "" #: ../compute-flavors.rst:53 msgid "Number of virtual CPUs presented to the instance." msgstr "" #: ../compute-flavors.rst:55 msgid "" "Optional property allows created servers to have a different bandwidth cap " "than that defined in the network they are attached to. This factor is " "multiplied by the rxtx_base property of the network. Default value is 1.0. " "That is, the same as attached network. This parameter is only available for " "Xen or NSX based systems." msgstr "" #: ../compute-flavors.rst:55 msgid "RXTX Factor" msgstr "" #: ../compute-flavors.rst:62 msgid "" "Boolean value, whether flavor is available to all users or private to the " "tenant it was created in. Defaults to ``True``." msgstr "" #: ../compute-flavors.rst:62 ../compute-flavors.rst:79 msgid "Is Public" msgstr "" #: ../compute-flavors.rst:65 ../compute-flavors.rst:91 msgid "Extra Specs" msgstr "" #: ../compute-flavors.rst:65 msgid "" "Key and value pairs that define on which compute nodes a flavor can run. " "These pairs must match corresponding pairs on the compute nodes. 
Use to " "implement special resources, such as flavors that run only on compute nodes " "with GPU hardware." msgstr "" #: ../compute-flavors.rst:73 msgid "" "Flavor customization can be limited by the hypervisor in use. For example, " "the libvirt driver enables quotas on CPUs available to a VM, disk tuning, " "bandwidth I/O, watchdog behavior, random number generator device control, " "and instance VIF traffic control." msgstr "" #: ../compute-flavors.rst:81 msgid "" "Flavors can be assigned to particular projects. By default, a flavor is " "public and available to all projects. Private flavors are only accessible to " "those on the access list and are invisible to other projects. To create and " "assign a private flavor to a project, run this command:" msgstr "" #: ../compute-flavors.rst:94 msgid "" "You can configure the CPU limits with control parameters with the ``nova`` " "client. For example, to configure the I/O limit, use:" msgstr "" #: ../compute-flavors.rst:103 msgid "" "Use these optional parameters to control weight shares, enforcement " "intervals for runtime quotas, and a quota for maximum allowed bandwidth:" msgstr "" #: ../compute-flavors.rst:107 msgid "" "``cpu_shares``: Specifies the proportional weighted share for the domain. If " "this element is omitted, the service defaults to the OS provided defaults. " "There is no unit for the value; it is a relative measure based on the " "setting of other VMs. For example, a VM configured with value 2048 gets " "twice as much CPU time as a VM configured with value 1024." msgstr "" #: ../compute-flavors.rst:114 msgid "" "``cpu_shares_level``: On VMware, specifies the allocation level. Can be " "``custom``, ``high``, ``normal``, or ``low``. If you choose ``custom``, set " "the number of shares using ``cpu_shares_share``." msgstr "" #: ../compute-flavors.rst:118 msgid "" "``cpu_period``: Specifies the enforcement interval (unit: microseconds) for " "QEMU and LXC hypervisors. 
Within a period, each VCPU of the domain is not " "allowed to consume more than the quota worth of runtime. The value should be " "in range ``[1000, 1000000]``. A period with value 0 means no value." msgstr "" #: ../compute-flavors.rst:124 msgid "" "``cpu_limit``: Specifies the upper limit for VMware machine CPU allocation " "in MHz. This parameter ensures that a machine never uses more than the " "defined amount of CPU time. It can be used to enforce a limit on the " "machine's CPU performance." msgstr "" #: ../compute-flavors.rst:129 msgid "" "``cpu_reservation``: Specifies the guaranteed minimum CPU reservation in MHz " "for VMware. This means that if needed, the machine will definitely get " "allocated the reserved amount of CPU cycles." msgstr "" #: ../compute-flavors.rst:134 msgid "" "``cpu_quota``: Specifies the maximum allowed bandwidth (unit: microseconds). " "A domain with a negative-value quota indicates that the domain has infinite " "bandwidth, which means that it is not bandwidth controlled. The value should " "be in range ``[1000, 18446744073709551]`` or less than 0. A quota with value " "0 means no value. You can use this feature to ensure that all vCPUs run at " "the same speed. For example:" msgstr "" #: ../compute-flavors.rst:148 msgid "" "In this example, an instance of ``FLAVOR-NAME`` can only consume a maximum " "of 50% CPU of a physical CPU computing capability." msgstr "" #: ../compute-flavors.rst:149 msgid "CPU limits" msgstr "" #: ../compute-flavors.rst:152 msgid "" "For VMware, you can configure the memory limits with control parameters." msgstr "" #: ../compute-flavors.rst:154 msgid "" "Use these optional parameters to limit the memory allocation, guarantee " "minimum memory reservation, and to specify shares used in case of resource " "contention:" msgstr "" #: ../compute-flavors.rst:158 msgid "" "``memory_limit``: Specifies the upper limit for VMware machine memory " "allocation in MB. 
The utilization of a virtual machine will not exceed this " "limit, even if there are available resources. This is typically used to " "ensure a consistent performance of virtual machines independent of available " "resources." msgstr "" #: ../compute-flavors.rst:164 msgid "" "``memory_reservation``: Specifies the guaranteed minimum memory reservation " "in MB for VMware. This means the specified amount of memory will definitely " "be allocated to the machine." msgstr "" #: ../compute-flavors.rst:168 msgid "" "``memory_shares_level``: On VMware, specifies the allocation level. This can " "be ``custom``, ``high``, ``normal`` or ``low``. If you choose ``custom``, " "set the number of shares using ``memory_shares_share``." msgstr "" #: ../compute-flavors.rst:172 msgid "" "``memory_shares_share``: Specifies the number of shares allocated in the " "event that ``custom`` is used. There is no unit for this value. It is a " "relative measure based on the settings for other VMs. For example:" msgstr "" #: ../compute-flavors.rst:181 msgid "Memory limits" msgstr "" #: ../compute-flavors.rst:184 msgid "" "For VMware, you can configure the resource limits for disk with control " "parameters." msgstr "" #: ../compute-flavors.rst:187 msgid "" "Use these optional parameters to limit the disk utilization, guarantee disk " "allocation, and to specify shares used in case of resource contention. This " "allows the VMware driver to enable disk allocations for the running instance." msgstr "" #: ../compute-flavors.rst:192 msgid "" "``disk_io_limit``: Specifies the upper limit for disk utilization in I/O per " "second. The utilization of a virtual machine will not exceed this limit, " "even if there are available resources. The default value is -1 which " "indicates unlimited usage." msgstr "" #: ../compute-flavors.rst:198 msgid "" "``disk_io_reservation``: Specifies the guaranteed minimum disk allocation in " "terms of :term:`IOPS`." 
msgstr "" #: ../compute-flavors.rst:201 msgid "" "``disk_io_shares_level``: Specifies the allocation level. This can be " "``custom``, ``high``, ``normal`` or ``low``. If you choose custom, set the " "number of shares using ``disk_io_shares_share``." msgstr "" #: ../compute-flavors.rst:206 msgid "" "``disk_io_shares_share``: Specifies the number of shares allocated in the " "event that ``custom`` is used. When there is resource contention, this value " "is used to determine the resource allocation." msgstr "" #: ../compute-flavors.rst:211 msgid "The example below sets the ``disk_io_reservation`` to 2000 IOPS." msgstr "" #: ../compute-flavors.rst:216 msgid "Disk I/O limits" msgstr "" #: ../compute-flavors.rst:219 msgid "" "Using disk I/O quotas, you can set maximum disk write to 10 MB per second " "for a VM user. For example:" msgstr "" #: ../compute-flavors.rst:227 msgid "The disk I/O options are:" msgstr "" #: ../compute-flavors.rst:229 msgid "``disk_read_bytes_sec``" msgstr "" #: ../compute-flavors.rst:230 msgid "``disk_read_iops_sec``" msgstr "" #: ../compute-flavors.rst:231 msgid "``disk_write_bytes_sec``" msgstr "" #: ../compute-flavors.rst:232 msgid "``disk_write_iops_sec``" msgstr "" #: ../compute-flavors.rst:233 msgid "``disk_total_bytes_sec``" msgstr "" #: ../compute-flavors.rst:234 msgid "Disk tuning" msgstr "" #: ../compute-flavors.rst:234 msgid "``disk_total_iops_sec``" msgstr "" #: ../compute-flavors.rst:237 msgid "The vif I/O options are:" msgstr "" #: ../compute-flavors.rst:239 msgid "``vif_inbound_average``" msgstr "" #: ../compute-flavors.rst:240 msgid "``vif_inbound_burst``" msgstr "" #: ../compute-flavors.rst:241 msgid "``vif_inbound_peak``" msgstr "" #: ../compute-flavors.rst:242 msgid "``vif_outbound_average``" msgstr "" #: ../compute-flavors.rst:243 msgid "``vif_outbound_burst``" msgstr "" #: ../compute-flavors.rst:244 msgid "``vif_outbound_peak``" msgstr "" #: ../compute-flavors.rst:246 msgid "" "Incoming and outgoing traffic can be shaped 
independently. The bandwidth " "element can have at most one inbound and at most one outbound child " "element. If you leave any of these child elements out, no quality of service " "(QoS) is applied on that traffic direction. So, if you want to shape only " "the network's incoming traffic, use inbound only (and vice versa). Each " "element has one mandatory attribute, ``average``, which specifies the " "average bit rate on the interface being shaped." msgstr "" #: ../compute-flavors.rst:255 msgid "" "There are also two optional attributes (integer): ``peak``, which specifies " "the maximum rate at which a bridge can send data (kilobytes/second), and " "``burst``, the amount of bytes that can be burst at peak speed (kilobytes). " "The rate is shared equally within domains connected to the network." msgstr "" #: ../compute-flavors.rst:261 msgid "" "The example below sets network traffic bandwidth limits for an existing " "flavor as follows:" msgstr "" #: ../compute-flavors.rst:264 msgid "Outbound traffic:" msgstr "" #: ../compute-flavors.rst:266 ../compute-flavors.rst:274 msgid "average: 256 Mbps (32768 kilobytes/second)" msgstr "" #: ../compute-flavors.rst:268 ../compute-flavors.rst:276 msgid "peak: 512 Mbps (65536 kilobytes/second)" msgstr "" #: ../compute-flavors.rst:270 ../compute-flavors.rst:278 msgid "burst: 65536 kilobytes" msgstr "" #: ../compute-flavors.rst:272 msgid "Inbound traffic:" msgstr "" #: ../compute-flavors.rst:292 msgid "" "All the speed limit values in the above example are specified in kilobytes/" "second, and the burst values are in kilobytes." msgstr "" #: ../compute-flavors.rst:293 msgid "Bandwidth I/O" msgstr "" #: ../compute-flavors.rst:296 msgid "" "For the libvirt driver, you can enable and set the behavior of a virtual " "hardware watchdog device for each flavor. Watchdog devices keep an eye on " "the guest server, and carry out the configured action if the server hangs. " "The watchdog uses the i6300esb device (emulating a PCI Intel 6300ESB). 
If " "``hw:watchdog_action`` is not specified, the watchdog is disabled." msgstr "" #: ../compute-flavors.rst:303 msgid "To set the behavior, use:" msgstr "" #: ../compute-flavors.rst:309 msgid "Valid ACTION values are:" msgstr "" #: ../compute-flavors.rst:311 msgid "``disabled``: (default) The device is not attached." msgstr "" #: ../compute-flavors.rst:312 msgid "``reset``: Forcefully reset the guest." msgstr "" #: ../compute-flavors.rst:313 msgid "``poweroff``: Forcefully power off the guest." msgstr "" #: ../compute-flavors.rst:314 msgid "``pause``: Pause the guest." msgstr "" #: ../compute-flavors.rst:315 msgid "``none``: Only enable the watchdog; do nothing if the server hangs." msgstr "" #: ../compute-flavors.rst:319 msgid "" "Watchdog behavior set using a specific image's properties will override " "behavior set using flavors." msgstr "" #: ../compute-flavors.rst:320 msgid "Watchdog behavior" msgstr "" #: ../compute-flavors.rst:323 msgid "" "If a random-number generator device has been added to the instance through " "its image properties, the device can be enabled and configured using:" msgstr "" #: ../compute-flavors.rst:336 msgid "" "RATE-BYTES: (integer) Allowed amount of bytes that the guest can read from " "the host's entropy per period." msgstr "" #: ../compute-flavors.rst:338 msgid "RATE-PERIOD: (integer) Duration of the read period in seconds." msgstr "" #: ../compute-flavors.rst:338 msgid "Random-number generator" msgstr "" #: ../compute-flavors.rst:341 msgid "" "For the libvirt driver, you can define the topology of the processors in the " "virtual machine using properties. The properties with ``max`` limit the " "number that can be selected by the user with image properties." msgstr "" #: ../compute-flavors.rst:357 msgid "" "FLAVOR-SOCKETS: (integer) The number of sockets for the guest VM. By " "default, this is set to the number of vCPUs requested."
msgstr "" #: ../compute-flavors.rst:359 msgid "" "FLAVOR-CORES: (integer) The number of cores per socket for the guest VM. By " "default, this is set to 1." msgstr "" #: ../compute-flavors.rst:361 msgid "" "FLAVOR-THREADS: (integer) The number of threads per core for the guest VM. " "By default, this is set to 1." msgstr "" #: ../compute-flavors.rst:362 msgid "CPU topology" msgstr "" #: ../compute-flavors.rst:365 msgid "" "For the libvirt driver, you can pin the virtual CPUs (vCPUs) of instances to " "the host's physical CPU cores (pCPUs) using properties. You can further " "refine this by stating how hardware CPU threads in a simultaneous " "multithreading-based (SMT) architecture are to be used. These configurations " "will result in improved per-instance determinism and performance." msgstr "" #: ../compute-flavors.rst:373 msgid "" "SMT-based architectures include Intel processors with Hyper-Threading " "technology. In these architectures, processor cores share a number of " "components with one or more other cores. Cores in such architectures are " "commonly referred to as hardware threads, while the cores that a given core " "shares components with are known as thread siblings." msgstr "" #: ../compute-flavors.rst:381 msgid "" "Host aggregates should be used to separate these pinned instances from " "unpinned instances as the latter will not respect the resourcing " "requirements of the former." msgstr "" #: ../compute-flavors.rst:391 msgid "Valid CPU-POLICY values are:" msgstr "" #: ../compute-flavors.rst:393 msgid "" "``shared``: (default) The guest vCPUs will be allowed to freely float across " "host pCPUs, albeit potentially constrained by NUMA policy." msgstr "" #: ../compute-flavors.rst:395 msgid "" "``dedicated``: The guest vCPUs will be strictly pinned to a set of host " "pCPUs. In the absence of an explicit vCPU topology request, the drivers " "typically expose all vCPUs as sockets with one core and one thread. 
When " "strict CPU pinning is in effect, the guest CPU topology will be set up to " "match the topology of the CPUs to which it is pinned. This option implies an " "overcommit ratio of 1.0. For example, if a two vCPU guest is pinned to a " "single host core with two threads, then the guest will get a topology of one " "socket, one core, two threads." msgstr "" #: ../compute-flavors.rst:404 msgid "Valid CPU-THREAD-POLICY values are:" msgstr "" #: ../compute-flavors.rst:406 msgid "" "``prefer``: (default) The host may or may not have an SMT architecture. " "Where an SMT architecture is present, thread siblings are preferred." msgstr "" #: ../compute-flavors.rst:408 msgid "" "``isolate``: The host must not have an SMT architecture or must emulate a " "non-SMT architecture. If the host does not have an SMT architecture, each " "vCPU is placed on a different core as expected. If the host does have an SMT " "architecture - that is, one or more cores have thread siblings - then each " "vCPU is placed on a different physical core. No vCPUs from other guests are " "placed on the same core. All but one thread sibling on each utilized core is " "therefore guaranteed to be unusable." msgstr "" #: ../compute-flavors.rst:415 msgid "" "``require``: The host must have an SMT architecture. Each vCPU is allocated " "on thread siblings. If the host does not have an SMT architecture, then it " "is not used. If the host has an SMT architecture, but not enough cores with " "free thread siblings are available, then scheduling fails." msgstr "" #: ../compute-flavors.rst:420 msgid "CPU pinning policy" msgstr "" #: ../compute-flavors.rst:423 msgid "You can configure the size of large pages used to back the VMs." msgstr "" #: ../compute-flavors.rst:430 msgid "Valid ``PAGE_SIZE`` values are:" msgstr "" #: ../compute-flavors.rst:432 msgid "" "``small``: (default) The smallest page size is used. Example: 4 KB on x86."
msgstr "" #: ../compute-flavors.rst:434 msgid "" "``large``: Only use larger page sizes for guest RAM. Example: either 2 MB or " "1 GB on x86." msgstr "" #: ../compute-flavors.rst:436 msgid "" "``any``: It is left up to the compute driver to decide. In this case, the " "libvirt driver might try to find large pages, but fall back to small pages. " "Other drivers may choose alternate policies for ``any``." msgstr "" #: ../compute-flavors.rst:439 msgid "" "pagesize: (string) An explicit page size can be set if the workload has " "specific requirements. This value can be an integer value for the page size " "in KB, or can use any standard suffix. Example: ``4KB``, ``2MB``, ``2048``, " "``1GB``." msgstr "" #: ../compute-flavors.rst:446 msgid "" "Large pages can be enabled for guest RAM without any regard to whether the " "guest OS will use them or not. If the guest OS chooses not to use huge " "pages, it will merely see small pages as before. Conversely, if a guest OS " "does intend to use huge pages, it is very important that the guest RAM be " "backed by huge pages. Otherwise, the guest OS will not be getting the " "performance benefit it is expecting." msgstr "" #: ../compute-flavors.rst:450 msgid "Large pages allocation" msgstr "" #: ../compute-images-instances.rst:3 msgid "Images and instances" msgstr "" #: ../compute-images-instances.rst:5 msgid "" "Virtual machine images contain a virtual disk that holds a bootable " "operating system on it. Disk images provide templates for virtual machine " "file systems. The Image service controls image storage and management." msgstr "" #: ../compute-images-instances.rst:10 msgid "" "Instances are the individual virtual machines that run on physical compute " "nodes inside the cloud. Users can launch any number of instances from the " "same image. Each launched instance runs from a copy of the base image. Any " "changes made to the instance do not affect the base image. 
Snapshots capture " "the state of an instance's running disk. Users can create a snapshot, and " "build a new image based on these snapshots. The Compute service controls " "instance, image, and snapshot storage and management." msgstr "" #: ../compute-images-instances.rst:19 msgid "" "When you launch an instance, you must choose a ``flavor``, which represents " "a set of virtual resources. Flavors define the number of virtual CPUs, the " "amount of RAM available, and the size of ephemeral disks. Users must select " "from the set of available flavors defined on their cloud. OpenStack provides " "a number of predefined flavors that you can edit or add to." msgstr "" #: ../compute-images-instances.rst:28 msgid "" "For more information about creating and troubleshooting images, see the " "`OpenStack Virtual Machine Image Guide `__." msgstr "" #: ../compute-images-instances.rst:32 msgid "" "For more information about image configuration options, see the `Image " "services `__ section of the OpenStack Configuration Reference." msgstr "" #: ../compute-images-instances.rst:37 msgid "For more information about flavors, see :ref:`compute-flavors`." msgstr "" #: ../compute-images-instances.rst:41 msgid "" "You can add and remove additional resources from running instances, such as " "persistent volume storage, or public IP addresses. The example used in this " "chapter is of a typical virtual system within an OpenStack cloud. It uses " "the ``cinder-volume`` service, which provides persistent block storage, " "instead of the ephemeral storage provided by the selected instance flavor." msgstr "" #: ../compute-images-instances.rst:48 msgid "" "This diagram shows the system state prior to launching an instance. The " "image store has a number of predefined images, supported by the Image " "service. Inside the cloud, a compute node contains the available vCPU, " "memory, and local disk resources. Additionally, the ``cinder-volume`` " "service stores predefined volumes."
msgstr "" #: ../compute-images-instances.rst:58 msgid "**The base image state with no running instances**" msgstr "" #: ../compute-images-instances.rst:65 msgid "Instance Launch" msgstr "" #: ../compute-images-instances.rst:67 msgid "" "To launch an instance, select an image, flavor, and any optional attributes. " "The selected flavor provides a root volume, labeled ``vda`` in this diagram, " "and additional ephemeral storage, labeled ``vdb``. In this example, the " "``cinder-volume`` store is mapped to the third virtual disk on this " "instance, ``vdc``." msgstr "" #: ../compute-images-instances.rst:77 msgid "**Instance creation from an image**" msgstr "" #: ../compute-images-instances.rst:83 msgid "" "The Image service copies the base image from the image store to the local " "disk. The local disk is the first disk that the instance accesses, which is " "the root volume labeled ``vda``. Smaller instances start faster. Less data " "needs to be copied across the network." msgstr "" #: ../compute-images-instances.rst:89 msgid "" "The new empty ephemeral disk is also created, labeled ``vdb``. This disk is " "deleted when you delete the instance." msgstr "" #: ../compute-images-instances.rst:92 msgid "" "The compute node connects to the attached ``cinder-volume`` using iSCSI. The " "``cinder-volume`` is mapped to the third disk, labeled ``vdc`` in this " "diagram. After the compute node provisions the vCPU and memory resources, " "the instance boots up from root volume ``vda``. The instance runs and " "changes data on the disks (highlighted in red on the diagram). If the volume " "store is located on a separate network, the ``my_block_storage_ip`` option " "specified in the storage node configuration file directs image traffic to " "the compute node." msgstr "" #: ../compute-images-instances.rst:103 msgid "" "Some details in this example scenario might be different in your " "environment. 
For example, you might use a different type of back-end " "storage, or different network protocols. One common variant is that the " "ephemeral storage used for volumes ``vda`` and ``vdb`` could be backed by " "network storage rather than a local disk." msgstr "" #: ../compute-images-instances.rst:109 msgid "" "When you delete an instance, the state is reclaimed with the exception of " "the persistent volume. The ephemeral storage is purged. Memory and vCPU " "resources are released. The image remains unchanged throughout this process." msgstr "" #: ../compute-images-instances.rst:118 msgid "**The end state of an image and volume after the instance exits**" msgstr "" #: ../compute-images-instances.rst:126 msgid "Image properties and property protection" msgstr "" #: ../compute-images-instances.rst:128 msgid "" "An image property is a key and value pair that the administrator or the " "image owner attaches to an OpenStack Image service image, as follows:" msgstr "" #: ../compute-images-instances.rst:132 msgid "The administrator defines core properties, such as the image name." msgstr "" #: ../compute-images-instances.rst:135 msgid "" "The administrator and the image owner can define additional properties, such " "as licensing and billing information." msgstr "" #: ../compute-images-instances.rst:138 msgid "" "The administrator can configure any property as protected, which limits " "which policies or user roles can perform CRUD operations on that property. " "Protected properties are generally additional properties to which only " "administrators have access." msgstr "" #: ../compute-images-instances.rst:143 msgid "" "For unprotected image properties, the administrator can manage core " "properties and the image owner can manage additional properties." msgstr "" #: ../compute-images-instances.rst:146 msgid "**To configure property protection**" msgstr "" #: ../compute-images-instances.rst:148 msgid "" "To configure property protection, edit the ``policy.json`` file. 
This file " "can also be used to set policies for Image service actions." msgstr "" #: ../compute-images-instances.rst:151 msgid "Define roles or policies in the ``policy.json`` file:" msgstr "" #: ../compute-images-instances.rst:217 msgid "" "For each parameter, use ``\"rule:restricted\"`` to restrict access for all " "users or ``\"role:admin\"`` to limit access to administrator roles. For " "example:" msgstr "" #: ../compute-images-instances.rst:226 msgid "" "Define which roles or policies can manage which properties in a property " "protections configuration file. For example:" msgstr "" #: ../compute-images-instances.rst:249 msgid "A value of ``@`` allows the corresponding operation for a property." msgstr "" #: ../compute-images-instances.rst:251 msgid "A value of ``!`` disallows the corresponding operation for a property." msgstr "" #: ../compute-images-instances.rst:254 msgid "" "In the ``glance-api.conf`` file, define the location of a property " "protections configuration file." msgstr "" #: ../compute-images-instances.rst:261 msgid "" "This file contains the rules for property protections and the roles and " "policies associated with it." msgstr "" #: ../compute-images-instances.rst:264 msgid "By default, property protections are not enforced." msgstr "" #: ../compute-images-instances.rst:266 msgid "" "If you specify a file name value and the file is not found, the ``glance-" "api`` service does not start." msgstr "" #: ../compute-images-instances.rst:269 ../compute-images-instances.rst:282 msgid "" "To view a sample configuration file, see `glance-api.conf `__." msgstr "" #: ../compute-images-instances.rst:273 msgid "" "Optionally, in the ``glance-api.conf`` file, specify whether roles or " "policies are used in the property protections configuration file." msgstr "" #: ../compute-images-instances.rst:280 msgid "The default is ``roles``."
msgstr "" #: ../compute-images-instances.rst:287 msgid "Image download: how it works" msgstr "" #: ../compute-images-instances.rst:289 msgid "" "Prior to starting a virtual machine, transfer the virtual machine image to " "the compute node from the Image service. How this works can change depending " "on the settings chosen for the compute node and the Image service." msgstr "" #: ../compute-images-instances.rst:294 msgid "" "Typically, the Compute service will use the image identifier passed to it by " "the scheduler service and request the image from the Image API. Though " "images are not stored in glance—rather in a back end, which could be Object " "Storage, a filesystem, or any other supported method—the connection is made " "from the compute node to the Image service and the image is transferred over " "this connection. The Image service streams the image from the back end to " "the compute node." msgstr "" #: ../compute-images-instances.rst:302 msgid "" "It is possible to set up the Object Storage node on a separate network, and " "still allow image traffic to flow between the Compute and Object Storage " "nodes. Configure the ``my_block_storage_ip`` option in the storage node " "configuration file to allow block storage traffic to reach the Compute node." msgstr "" #: ../compute-images-instances.rst:308 msgid "" "Certain back ends support a more direct method, where on request the Image " "service will return a URL that links directly to the back-end store. You can " "download the image using this approach. Currently, the only store to support " "the direct download approach is the filesystem store. Configure this " "approach using the ``filesystems`` option in the ``image_file_url`` section " "of the ``nova.conf`` file on compute nodes." msgstr "" #: ../compute-images-instances.rst:316 msgid "" "Compute nodes also implement caching of images, meaning that if an image has " "been used before, it won't necessarily be downloaded every time.
Information " "on the configuration options for caching on compute nodes can be found in " "the `Configuration Reference `__." msgstr "" #: ../compute-images-instances.rst:323 msgid "Instance building blocks" msgstr "" #: ../compute-images-instances.rst:325 msgid "" "In OpenStack, the base operating system is usually copied from an image " "stored in the OpenStack Image service. This results in an ephemeral instance " "that starts from a known template state and loses all accumulated states on " "shutdown." msgstr "" #: ../compute-images-instances.rst:330 msgid "" "You can also put an operating system on a persistent volume in Compute or " "the Block Storage volume system. This gives a more traditional, persistent " "system that accumulates states that are preserved across restarts. To get a " "list of available images on your system, run:" msgstr "" # #-#-#-#-# compute_arch.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-images-instances.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-images-instances.rst:351 ../compute_arch.rst:264 msgid "The displayed image attributes are:" msgstr "" #: ../compute-images-instances.rst:354 msgid "Automatically generated UUID of the image." msgstr "" # #-#-#-#-# compute_arch.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-images-instances.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-images-instances.rst:354 ../compute_arch.rst:267 msgid "``ID``" msgstr "" #: ../compute-images-instances.rst:357 msgid "Free form, human-readable name for the image." 
msgstr "" # #-#-#-#-# compute_arch.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-images-instances.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-images-instances.rst:357 ../compute_arch.rst:270 msgid "``Name``" msgstr "" # #-#-#-#-# compute_arch.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-images-instances.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-images-instances.rst:360 ../compute_arch.rst:273 msgid "" "The status of the image. Images marked ``ACTIVE`` are available for use." msgstr "" # #-#-#-#-# compute_arch.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-images-instances.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-images-instances.rst:361 ../compute_arch.rst:274 msgid "``Status``" msgstr "" # #-#-#-#-# compute_arch.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-images-instances.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-images-instances.rst:364 ../compute_arch.rst:277 msgid "" "For images that are created as snapshots of running instances, this is the " "UUID of the instance the snapshot derives from. For uploaded images, this " "field is blank." msgstr "" # #-#-#-#-# compute_arch.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-images-instances.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-images-instances.rst:366 ../compute_arch.rst:279 msgid "``Server``" msgstr "" #: ../compute-images-instances.rst:368 msgid "" "Virtual hardware templates are called ``flavors``. The default installation " "provides five predefined flavors." msgstr "" #: ../compute-images-instances.rst:371 msgid "For a list of flavors that are available on your system, run:" msgstr "" #: ../compute-images-instances.rst:386 msgid "" "By default, administrative users can configure the flavors. You can change " "this behavior by redefining the access controls for ``compute_extension:" "flavormanage`` in ``/etc/nova/policy.json`` on the ``compute-api`` server." 
msgstr "" #: ../compute-images-instances.rst:393 msgid "Instance management tools" msgstr "" #: ../compute-images-instances.rst:395 msgid "" "OpenStack provides command-line, web interface, and API-based instance " "management tools. Third-party management tools are also available, using " "either the native API or the provided EC2-compatible API." msgstr "" #: ../compute-images-instances.rst:399 msgid "" "The OpenStack python-novaclient package provides a basic command-line " "utility, which uses the :command:`nova` command. This is available as a " "native package for most Linux distributions, or you can install the latest " "version using the pip python package installer:" msgstr "" #: ../compute-images-instances.rst:408 msgid "" "For more information about python-novaclient and other command-line tools, " "see the `OpenStack End User Guide `__." msgstr "" #: ../compute-images-instances.rst:414 msgid "Control where instances run" msgstr "" #: ../compute-images-instances.rst:416 msgid "" "The `OpenStack Configuration Reference `__ provides detailed information on controlling where " "your instances run, including ensuring a set of instances run on different " "compute nodes for service resiliency or on the same node for high " "performance inter-instance communications." msgstr "" #: ../compute-images-instances.rst:423 msgid "" "Administrative users can specify which compute node their instances run on. " "To do this, specify the ``--availability-zone AVAILABILITY_ZONE:" "COMPUTE_HOST`` parameter." msgstr "" #: ../compute-images-instances.rst:429 msgid "Launch instances with UEFI" msgstr "" #: ../compute-images-instances.rst:431 msgid "" "Unified Extensible Firmware Interface (UEFI) is a standard firmware designed " "to replace legacy BIOS. There is a slow but steady trend for operating " "systems to move to the UEFI format and, in some cases, make it their only " "format." 
msgstr "" #: ../compute-images-instances.rst:436 msgid "**To configure the UEFI environment**" msgstr "" #: ../compute-images-instances.rst:438 msgid "" "To successfully launch an instance from a UEFI image in a QEMU/KVM " "environment, the administrator must install the following packages on the " "compute node:" msgstr "" #: ../compute-images-instances.rst:442 msgid "OVMF, a port of Intel's tianocore firmware to the QEMU virtual machine." msgstr "" #: ../compute-images-instances.rst:444 msgid "libvirt, which has supported UEFI boot since version 1.2.9." msgstr "" #: ../compute-images-instances.rst:446 msgid "" "Because the default UEFI loader path is ``/usr/share/OVMF/OVMF_CODE.fd``, " "the administrator must create a link to this location after the UEFI " "package is installed." msgstr "" #: ../compute-images-instances.rst:450 msgid "**To upload UEFI images**" msgstr "" #: ../compute-images-instances.rst:452 msgid "" "To launch instances from a UEFI image, the administrator first has to upload " "a UEFI image. To do so, the ``hw_firmware_type`` property must be set to " "``uefi`` when the image is created. For example:" msgstr "" #: ../compute-images-instances.rst:461 msgid "After that, you can launch instances from this UEFI image." msgstr "" #: ../compute-live-migration-usage.rst:0 msgid "**nova service-list**" msgstr "" #: ../compute-live-migration-usage.rst:5 msgid "Migrate instances" msgstr "" #: ../compute-live-migration-usage.rst:7 msgid "" "This section discusses how to migrate running instances from one OpenStack " "Compute server to another OpenStack Compute server." msgstr "" #: ../compute-live-migration-usage.rst:10 msgid "" "Before starting a migration, review the Configure migrations section, :ref:" "`section_configuring-compute-migrations`."
msgstr "" #: ../compute-live-migration-usage.rst:15 msgid "" "Although the :command:`nova` command is called :command:`live-migration`, " "under the default Compute configuration options, the instances are suspended " "before migration. For more information, see `Configure migrations `_ " "in the OpenStack Configuration Reference." msgstr "" #: ../compute-live-migration-usage.rst:22 msgid "**Migrating instances**" msgstr "" #: ../compute-live-migration-usage.rst:24 msgid "Check the ID of the instance to be migrated:" msgstr "" #: ../compute-live-migration-usage.rst:34 msgid "ID" msgstr "" # #-#-#-#-# compute-live-migration-usage.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# shared_file_systems_crud_share.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# ts-eql-volume-size.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-live-migration-usage.rst:36 #: ../compute-live-migration-usage.rst:109 #: ../shared_file_systems_crud_share.rst:431 ../ts-eql-volume-size.rst:109 msgid "Status" msgstr "" #: ../compute-live-migration-usage.rst:37 msgid "Networks" msgstr "" #: ../compute-live-migration-usage.rst:38 #: ../compute-live-migration-usage.rst:88 msgid "d1df1b5a-70c4-4fed-98b7-423362f2c47c" msgstr "" #: ../compute-live-migration-usage.rst:39 #: ../compute-live-migration-usage.rst:90 msgid "vm1" msgstr "" #: ../compute-live-migration-usage.rst:40 #: ../compute-live-migration-usage.rst:44 #: ../compute-live-migration-usage.rst:94 msgid "ACTIVE" msgstr "" #: ../compute-live-migration-usage.rst:41 msgid "private=a.b.c.d" msgstr "" #: ../compute-live-migration-usage.rst:42 msgid "d693db9e-a7cf-45ef-a7c9-b3ecb5f22645" msgstr "" #: ../compute-live-migration-usage.rst:43 msgid "vm2" msgstr "" #: ../compute-live-migration-usage.rst:45 msgid "private=e.f.g.h" msgstr "" #: ../compute-live-migration-usage.rst:47 msgid "" "Check the information associated with the instance.
In this example, ``vm1`` " "is running on ``HostB``:" msgstr "" #: ../compute-live-migration-usage.rst:58 msgid "Property" msgstr "" # #-#-#-#-# compute-live-migration-usage.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_adv-config.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-live-migration-usage.rst:59 ../networking_adv-config.rst:26 msgid "Value" msgstr "" #: ../compute-live-migration-usage.rst:60 #: ../compute-live-migration-usage.rst:64 #: ../compute-live-migration-usage.rst:77 #: ../compute-live-migration-usage.rst:80 #: ../compute-live-migration-usage.rst:84 #: ../compute-live-migration-usage.rst:96 msgid "..." msgstr "" #: ../compute-live-migration-usage.rst:62 msgid "OS-EXT-SRV-ATTR:host" msgstr "" #: ../compute-live-migration-usage.rst:66 msgid "flavor" msgstr "" # #-#-#-#-# compute-live-migration-usage.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# database.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-live-migration-usage.rst:71 ../database.rst:124 #: ../telemetry-data-collection.rst:711 msgid "name" msgstr "" #: ../compute-live-migration-usage.rst:73 msgid "private network" msgstr "" # #-#-#-#-# compute-live-migration-usage.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_multi-dhcp-agents.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-live-migration-usage.rst:82 #: ../compute-live-migration-usage.rst:135 #: ../networking_multi-dhcp-agents.rst:48 msgid "HostB" msgstr "" #: ../compute-live-migration-usage.rst:92 msgid "a.b.c.d" msgstr "" #: ../compute-live-migration-usage.rst:98 msgid "" "Select the compute node the instance will be migrated to. 
In this example, " "we will migrate the instance to ``HostC``, because ``nova-compute`` is " "running on it:" msgstr "" #: ../compute-live-migration-usage.rst:106 msgid "Binary" msgstr "" # #-#-#-#-# compute-live-migration-usage.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_multi-dhcp-agents.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-live-migration-usage.rst:107 #: ../networking_multi-dhcp-agents.rst:39 msgid "Host" msgstr "" #: ../compute-live-migration-usage.rst:108 msgid "Zone" msgstr "" #: ../compute-live-migration-usage.rst:110 msgid "State" msgstr "" #: ../compute-live-migration-usage.rst:111 msgid "Updated_at" msgstr "" #: ../compute-live-migration-usage.rst:112 msgid "Disabled Reason" msgstr "" #: ../compute-live-migration-usage.rst:113 msgid "nova-consoleauth" msgstr "" # #-#-#-#-# compute-live-migration-usage.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_multi-dhcp-agents.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-live-migration-usage.rst:114 #: ../compute-live-migration-usage.rst:121 #: ../compute-live-migration-usage.rst:128 #: ../compute-live-migration-usage.rst:149 #: ../networking_multi-dhcp-agents.rst:46 msgid "HostA" msgstr "" #: ../compute-live-migration-usage.rst:115 #: ../compute-live-migration-usage.rst:122 #: ../compute-live-migration-usage.rst:129 #: ../compute-live-migration-usage.rst:150 msgid "internal" msgstr "" # #-#-#-#-# compute-live-migration-usage.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-alarms.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-live-migration-usage.rst:116 #: ../compute-live-migration-usage.rst:123 #: ../compute-live-migration-usage.rst:130 #: ../compute-live-migration-usage.rst:137 #: ../compute-live-migration-usage.rst:144 #: ../compute-live-migration-usage.rst:151 ../telemetry-alarms.rst:191 msgid "enabled" msgstr "" #: ../compute-live-migration-usage.rst:117 #: ../compute-live-migration-usage.rst:124 #: 
../compute-live-migration-usage.rst:131 #: ../compute-live-migration-usage.rst:138 #: ../compute-live-migration-usage.rst:145 #: ../compute-live-migration-usage.rst:152 msgid "up" msgstr "" #: ../compute-live-migration-usage.rst:118 #: ../compute-live-migration-usage.rst:125 msgid "2014-03-25T10:33:25.000000" msgstr "" #: ../compute-live-migration-usage.rst:120 msgid "nova-scheduler" msgstr "" #: ../compute-live-migration-usage.rst:127 msgid "nova-conductor" msgstr "" #: ../compute-live-migration-usage.rst:132 msgid "2014-03-25T10:33:27.000000" msgstr "" #: ../compute-live-migration-usage.rst:134 #: ../compute-live-migration-usage.rst:141 msgid "nova-compute" msgstr "" #: ../compute-live-migration-usage.rst:136 #: ../compute-live-migration-usage.rst:143 msgid "nova" msgstr "" #: ../compute-live-migration-usage.rst:139 #: ../compute-live-migration-usage.rst:146 #: ../compute-live-migration-usage.rst:153 msgid "2014-03-25T10:33:31.000000" msgstr "" #: ../compute-live-migration-usage.rst:142 #: ../compute-live-migration-usage.rst:171 #: ../compute-live-migration-usage.rst:176 #: ../compute-live-migration-usage.rst:181 #: ../compute-live-migration-usage.rst:186 #: ../compute-live-migration-usage.rst:191 msgid "HostC" msgstr "" #: ../compute-live-migration-usage.rst:148 msgid "nova-cert" msgstr "" #: ../compute-live-migration-usage.rst:156 msgid "Check that ``HostC`` has enough resources for migration:" msgstr "" #: ../compute-live-migration-usage.rst:166 msgid "HOST" msgstr "" #: ../compute-live-migration-usage.rst:167 msgid "PROJECT" msgstr "" # #-#-#-#-# compute-live-migration-usage.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-live-migration-usage.rst:168 ../telemetry-measurements.rst:122 msgid "cpu" msgstr "" #: ../compute-live-migration-usage.rst:169 msgid "memory_mb" msgstr "" #: ../compute-live-migration-usage.rst:170 msgid "disk_gb" msgstr "" #: 
../compute-live-migration-usage.rst:172 msgid "(total)" msgstr "" #: ../compute-live-migration-usage.rst:173 msgid "16" msgstr "" #: ../compute-live-migration-usage.rst:174 msgid "32232" msgstr "" #: ../compute-live-migration-usage.rst:175 msgid "878" msgstr "" #: ../compute-live-migration-usage.rst:177 msgid "(used_now)" msgstr "" #: ../compute-live-migration-usage.rst:178 #: ../compute-live-migration-usage.rst:183 #: ../compute-live-migration-usage.rst:188 #: ../compute-live-migration-usage.rst:193 msgid "22" msgstr "" #: ../compute-live-migration-usage.rst:179 #: ../compute-live-migration-usage.rst:184 #: ../compute-live-migration-usage.rst:189 #: ../compute-live-migration-usage.rst:194 msgid "21284" msgstr "" #: ../compute-live-migration-usage.rst:180 msgid "442" msgstr "" #: ../compute-live-migration-usage.rst:182 msgid "(used_max)" msgstr "" #: ../compute-live-migration-usage.rst:185 #: ../compute-live-migration-usage.rst:190 #: ../compute-live-migration-usage.rst:195 msgid "422" msgstr "" #: ../compute-live-migration-usage.rst:187 msgid "p1" msgstr "" #: ../compute-live-migration-usage.rst:192 msgid "p2" msgstr "" #: ../compute-live-migration-usage.rst:197 msgid "``cpu``: Number of CPUs" msgstr "" #: ../compute-live-migration-usage.rst:199 msgid "``memory_mb``: Total amount of memory, in MB" msgstr "" #: ../compute-live-migration-usage.rst:201 msgid "``disk_gb``: Total amount of space for NOVA-INST-DIR/instances, in GB" msgstr "" #: ../compute-live-migration-usage.rst:203 msgid "" "In this table, the first row shows the total amount of resources available " "on the physical server. The second row shows the currently used resources. " "The third row shows the maximum used resources. The fourth row and below " "show the resources available for each project."
msgstr "" #: ../compute-live-migration-usage.rst:208 msgid "Migrate the instance using the :command:`nova live-migration` command:" msgstr "" #: ../compute-live-migration-usage.rst:214 msgid "" "In this example, SERVER can be the ID or name of the instance. Another " "example:" msgstr "" #: ../compute-live-migration-usage.rst:224 msgid "" "Using live migration to move workloads between Icehouse and Juno compute " "nodes may cause data loss, because libvirt live migration with shared block " "storage was buggy before version 3.32. This issue is resolved when you " "upgrade to RPC API version 4.0." msgstr "" #: ../compute-live-migration-usage.rst:230 msgid "" "Check that the instance has been migrated successfully, using :command:`nova " "list`. If the instance is still running on ``HostB``, check the log files at " "``src/dest`` for ``nova-compute`` and ``nova-scheduler`` to determine why." msgstr "" # #-#-#-#-# compute-manage-logs.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# identity_keystone_usage_and_features.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-manage-logs.rst:5 ../identity_keystone_usage_and_features.rst:36 msgid "Logging" msgstr "" #: ../compute-manage-logs.rst:8 msgid "Logging module" msgstr "" #: ../compute-manage-logs.rst:10 msgid "" "Logging behavior can be changed by creating a configuration file. To specify " "the configuration file, add this line to the ``/etc/nova/nova.conf`` file:" msgstr "" #: ../compute-manage-logs.rst:18 msgid "" "To change the logging level, add ``DEBUG``, ``INFO``, ``WARNING``, or " "``ERROR`` as a parameter." msgstr "" #: ../compute-manage-logs.rst:21 msgid "" "The logging configuration file is an INI-style configuration file, which " "must contain a section called ``logger_nova``. This controls the behavior of " "the logging facility in the ``nova-*`` services.
For example:" msgstr "" #: ../compute-manage-logs.rst:33 msgid "" "This example sets the debugging level to ``INFO`` (which is less verbose " "than the default ``DEBUG`` setting)." msgstr "" #: ../compute-manage-logs.rst:36 msgid "" "For more information about the logging configuration syntax, including the " "``handlers`` and ``qualname`` variables, see the `Python documentation `__ on " "logging configuration files." msgstr "" #: ../compute-manage-logs.rst:41 msgid "" "For an example of the ``logging.conf`` file with various defined handlers, " "see the `OpenStack Configuration Reference `__." msgstr "" #: ../compute-manage-logs.rst:45 msgid "Syslog" msgstr "" #: ../compute-manage-logs.rst:47 msgid "" "OpenStack Compute services can send logging information to syslog. This is " "useful if you want to use rsyslog to forward logs to a remote machine. " "Separately configure the Compute service (nova), the Identity service " "(keystone), the Image service (glance), and, if you are using it, the Block " "Storage service (cinder) to send log messages to syslog. Open these " "configuration files:" msgstr "" #: ../compute-manage-logs.rst:54 msgid "``/etc/nova/nova.conf``" msgstr "" #: ../compute-manage-logs.rst:56 msgid "``/etc/keystone/keystone.conf``" msgstr "" #: ../compute-manage-logs.rst:58 msgid "``/etc/glance/glance-api.conf``" msgstr "" #: ../compute-manage-logs.rst:60 msgid "``/etc/glance/glance-registry.conf``" msgstr "" #: ../compute-manage-logs.rst:62 msgid "``/etc/cinder/cinder.conf``" msgstr "" #: ../compute-manage-logs.rst:64 msgid "In each configuration file, add these lines:" msgstr "" #: ../compute-manage-logs.rst:72 msgid "" "In addition to enabling syslog, these settings also turn off debugging " "output from the log.
msgstr "" #: ../compute-manage-logs.rst:77 msgid "" "Although this example uses the same local facility for each service " "(``LOG_LOCAL0``, which corresponds to syslog facility ``LOCAL0``), we " "recommend that you configure a separate local facility for each service, as " "this provides better isolation and more flexibility. For example, you can " "capture logging information at different severity levels for different " "services. syslog allows you to define up to eight local facilities, " "``LOCAL0, LOCAL1, ..., LOCAL7``. For more information, see the syslog " "documentation." msgstr "" #: ../compute-manage-logs.rst:87 msgid "Rsyslog" msgstr "" #: ../compute-manage-logs.rst:89 msgid "" "rsyslog is useful for setting up a centralized log server across multiple " "machines. This section briefly describes the configuration to set up an " "rsyslog server. A full treatment of rsyslog is beyond the scope of this " "book. This section assumes rsyslog has already been installed on your hosts " "(it is installed by default on most Linux distributions)." msgstr "" #: ../compute-manage-logs.rst:96 msgid "" "This example provides a minimal configuration for ``/etc/rsyslog.conf`` on " "the log server host, which receives the log files:" msgstr "" #: ../compute-manage-logs.rst:105 msgid "" "Add a filter rule to ``/etc/rsyslog.conf`` which looks for a host name. This " "example uses COMPUTE_01 as the compute host name:" msgstr "" #: ../compute-manage-logs.rst:112 msgid "" "On each compute host, create a file named ``/etc/rsyslog.d/60-nova.conf``, " "with the following content:" msgstr "" #: ../compute-manage-logs.rst:122 msgid "" "Once you have created the file, restart the ``rsyslog`` service. Error-level " "log messages on the compute hosts should now be sent to the log server.
msgstr "" #: ../compute-manage-logs.rst:126 msgid "Serial console" msgstr "" #: ../compute-manage-logs.rst:128 msgid "" "The serial console provides a way to examine kernel output and other system " "messages during troubleshooting if the instance lacks network connectivity." msgstr "" #: ../compute-manage-logs.rst:132 msgid "" "Read-only access from the server serial console is possible using the ``os-" "GetSerialOutput`` server action. Most cloud images enable this feature by " "default. For more information, see :ref:`compute-common-errors-and-fixes`." msgstr "" #: ../compute-manage-logs.rst:137 msgid "" "OpenStack Juno and later supports read-write access to the serial console " "using the ``os-GetSerialConsole`` server action. This feature also requires " "a websocket client to access the serial console." msgstr "" #: ../compute-manage-logs.rst:141 msgid "**Configuring read-write serial console access**" msgstr "" #: ../compute-manage-logs.rst:143 msgid "On a compute node, edit the ``/etc/nova/nova.conf`` file:" msgstr "" #: ../compute-manage-logs.rst:145 msgid "In the ``[serial_console]`` section, enable the serial console:" msgstr "" #: ../compute-manage-logs.rst:153 msgid "" "In the ``[serial_console]`` section, configure the serial console proxy " "similar to graphical console proxies:" msgstr "" #: ../compute-manage-logs.rst:164 msgid "" "The ``base_url`` option specifies the base URL that clients receive from the " "API upon requesting a serial console. Typically, this refers to the host " "name of the controller node." msgstr "" #: ../compute-manage-logs.rst:168 msgid "" "The ``listen`` option specifies the network interface nova-compute should " "listen on for virtual console connections. Typically, ``0.0.0.0`` enables " "listening on all interfaces." msgstr "" #: ../compute-manage-logs.rst:172 msgid "" "The ``proxyclient_address`` option specifies which network interface the " "proxy should connect to.
Typically, this refers to the IP address of the " "management interface." msgstr "" #: ../compute-manage-logs.rst:176 msgid "" "When you enable read-write serial console access, Compute will add serial " "console information to the Libvirt XML file for the instance. For example:" msgstr "" #: ../compute-manage-logs.rst:189 msgid "**Accessing the serial console on an instance**" msgstr "" #: ../compute-manage-logs.rst:191 msgid "" "Use the :command:`nova get-serial-proxy` command to retrieve the websocket " "URL for the serial console on the instance:" msgstr "" #: ../compute-manage-logs.rst:203 msgid "Url" msgstr "" #: ../compute-manage-logs.rst:204 msgid "serial" msgstr "" #: ../compute-manage-logs.rst:205 msgid "ws://127.0.0.1:6083/?token=18510769-71ad-4e5a-8348-4218b5613b3d" msgstr "" #: ../compute-manage-logs.rst:207 msgid "Alternatively, use the API directly:" msgstr "" #: ../compute-manage-logs.rst:220 msgid "" "Use Python websocket with the URL to generate ``.send``, ``.recv``, and ``." "fileno`` methods for serial console access. For example:" msgstr "" #: ../compute-manage-logs.rst:230 msgid "" "Alternatively, use a `Python websocket client `__." msgstr "" #: ../compute-manage-logs.rst:234 msgid "" "When you enable the serial console, typical instance logging using the :" "command:`nova console-log` command is disabled. Kernel output and other " "system messages will not be visible unless you are actively viewing the " "serial console." msgstr "" #: ../compute-manage-the-cloud.rst:5 msgid "Manage the cloud" msgstr "" #: ../compute-manage-the-cloud.rst:12 msgid "" "System administrators can use :command:`nova` client and :command:" "`euca2ools` commands to manage their clouds." msgstr "" #: ../compute-manage-the-cloud.rst:15 msgid "" "``nova`` client and ``euca2ools`` can be used by all users, though specific " "commands might be restricted by Role Based Access Control in the Identity " "service." 
msgstr "" #: ../compute-manage-the-cloud.rst:19 msgid "**Managing the cloud with nova client**" msgstr "" #: ../compute-manage-the-cloud.rst:21 msgid "" "The ``python-novaclient`` package provides a ``nova`` shell that enables " "Compute API interactions from the command line. Install the client, and " "provide your user name and password (which can be set as environment " "variables for convenience), for the ability to administer the cloud from the " "command line." msgstr "" #: ../compute-manage-the-cloud.rst:27 msgid "" "To install python-novaclient, download the tarball from `http://pypi.python." "org/pypi/python-novaclient/#downloads `__ and then install it in your favorite Python " "environment:" msgstr "" #: ../compute-manage-the-cloud.rst:37 msgid "As root, run:" msgstr "" #: ../compute-manage-the-cloud.rst:43 msgid "Confirm the installation was successful:" msgstr "" #: ../compute-manage-the-cloud.rst:62 msgid "" "Running :command:`nova help` returns a list of ``nova`` commands and " "parameters. To get help for a subcommand, run:" msgstr "" #: ../compute-manage-the-cloud.rst:69 msgid "" "For a complete list of ``nova`` commands and parameters, see the `OpenStack " "Command-Line Reference `__." msgstr "" #: ../compute-manage-the-cloud.rst:73 msgid "" "Set the required parameters as environment variables to make running " "commands easier. For example, you can add :option:`--os-username` as a " "``nova`` option, or set it as an environment variable. To set the user name, " "password, and tenant as environment variables, use:" msgstr "" #: ../compute-manage-the-cloud.rst:84 msgid "" "The Identity service will give you an authentication endpoint, which Compute " "recognizes as ``OS_AUTH_URL``:" msgstr "" #: ../compute-manage-users.rst:5 msgid "Manage Compute users" msgstr "" #: ../compute-manage-users.rst:7 msgid "" "Access to the Euca2ools (ec2) API is controlled by an access key and a " "secret key. 
The user's access key needs to be included in the request, and " "the request must be signed with the secret key. Upon receipt of API " "requests, Compute verifies the signature and runs commands on behalf of the " "user." msgstr "" #: ../compute-manage-users.rst:13 msgid "" "To begin using Compute, you must create a user with the Identity service." msgstr "" #: ../compute-manage-volumes.rst:0 msgid "**nova volume commands**" msgstr "" #: ../compute-manage-volumes.rst:5 msgid "" "Depending on the setup of your cloud provider, they may give you an endpoint " "to use to manage volumes, or there may be an extension under the covers. In " "either case, you can use the ``nova`` CLI to manage volumes." msgstr "" # #-#-#-#-# compute-manage-volumes.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_adv-features.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_config-agents.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_use.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-manage-volumes.rst:13 ../networking_adv-features.rst:210 #: ../networking_adv-features.rst:378 ../networking_adv-features.rst:513 #: ../networking_adv-features.rst:805 ../networking_config-agents.rst:490 #: ../networking_use.rst:47 ../networking_use.rst:123 #: ../networking_use.rst:183 ../networking_use.rst:241 msgid "Command" msgstr "" #: ../compute-manage-volumes.rst:15 msgid "volume-attach" msgstr "" #: ../compute-manage-volumes.rst:16 msgid "Attach a volume to a server." msgstr "" #: ../compute-manage-volumes.rst:17 msgid "volume-create" msgstr "" #: ../compute-manage-volumes.rst:18 msgid "Add a new volume." msgstr "" #: ../compute-manage-volumes.rst:19 msgid "volume-delete" msgstr "" #: ../compute-manage-volumes.rst:20 msgid "Remove a volume." msgstr "" #: ../compute-manage-volumes.rst:21 msgid "volume-detach" msgstr "" #: ../compute-manage-volumes.rst:22 msgid "Detach a volume from a server." 
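Setting the ``nova`` client credentials described above as environment variables might look like the following sketch; every value here is a placeholder, not a real account or endpoint:

```shell
# Placeholder credentials -- substitute your own values.
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_TENANT_NAME=demo
# Authentication endpoint given to you by the Identity service:
export OS_AUTH_URL=http://controller:5000/v2.0
```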
msgstr "" #: ../compute-manage-volumes.rst:23 msgid "volume-list" msgstr "" #: ../compute-manage-volumes.rst:24 msgid "List all the volumes." msgstr "" #: ../compute-manage-volumes.rst:25 msgid "volume-show" msgstr "" #: ../compute-manage-volumes.rst:26 msgid "Show details about a volume." msgstr "" #: ../compute-manage-volumes.rst:27 msgid "volume-snapshot-create" msgstr "" #: ../compute-manage-volumes.rst:28 msgid "Add a new snapshot." msgstr "" #: ../compute-manage-volumes.rst:29 msgid "volume-snapshot-delete" msgstr "" #: ../compute-manage-volumes.rst:30 msgid "Remove a snapshot." msgstr "" #: ../compute-manage-volumes.rst:31 msgid "volume-snapshot-list" msgstr "" #: ../compute-manage-volumes.rst:32 msgid "List all the snapshots." msgstr "" #: ../compute-manage-volumes.rst:33 msgid "volume-snapshot-show" msgstr "" #: ../compute-manage-volumes.rst:34 msgid "Show details about a snapshot." msgstr "" #: ../compute-manage-volumes.rst:35 msgid "volume-type-create" msgstr "" #: ../compute-manage-volumes.rst:36 msgid "Create a new volume type." msgstr "" #: ../compute-manage-volumes.rst:37 msgid "volume-type-delete" msgstr "" #: ../compute-manage-volumes.rst:38 msgid "Delete a specific flavor" msgstr "" #: ../compute-manage-volumes.rst:39 msgid "volume-type-list" msgstr "" #: ../compute-manage-volumes.rst:40 msgid "Print a list of available 'volume types'." msgstr "" #: ../compute-manage-volumes.rst:41 msgid "volume-update" msgstr "" #: ../compute-manage-volumes.rst:42 msgid "Update an attached volume." 
msgstr "" #: ../compute-manage-volumes.rst:46 msgid "For example, to list IDs and names of Compute volumes, run:" msgstr "" #: ../compute-networking-nova.rst:0 msgid "Description of IPv6 configuration options" msgstr "" #: ../compute-networking-nova.rst:0 msgid "Description of metadata configuration options" msgstr "" #: ../compute-networking-nova.rst:3 msgid "Networking with nova-network" msgstr "" #: ../compute-networking-nova.rst:5 msgid "" "Understanding the networking configuration options helps you design the best " "configuration for your Compute instances." msgstr "" #: ../compute-networking-nova.rst:8 msgid "" "You can choose to either install and configure ``nova-network`` or use the " "OpenStack Networking service (neutron). This section contains a brief " "overview of ``nova-network``. For more information about OpenStack " "Networking, see :ref:`networking`." msgstr "" #: ../compute-networking-nova.rst:14 msgid "Networking concepts" msgstr "" #: ../compute-networking-nova.rst:16 msgid "" "Compute assigns a private IP address to each VM instance. Compute makes a " "distinction between fixed IPs and floating IP. Fixed IPs are IP addresses " "that are assigned to an instance on creation and stay the same until the " "instance is explicitly terminated. Floating IPs are addresses that can be " "dynamically associated with an instance. A floating IP address can be " "disassociated and associated with another instance at any time. A user can " "reserve a floating IP for their project." msgstr "" #: ../compute-networking-nova.rst:26 msgid "" "Currently, Compute with ``nova-network`` only supports Linux bridge " "networking that allows virtual interfaces to connect to the outside network " "through the physical interface." msgstr "" #: ../compute-networking-nova.rst:30 msgid "" "The network controller with ``nova-network`` provides virtual networks to " "enable compute servers to interact with each other and with the public " "network. 
Compute with ``nova-network`` supports the following network modes, " "which are implemented as Network Manager types:" msgstr "" #: ../compute-networking-nova.rst:36 msgid "" "In this mode, a network administrator specifies a subnet. IP addresses for " "VM instances are assigned from the subnet, and then injected into the image " "on launch. Each instance receives a fixed IP address from the pool of " "available addresses. A system administrator must create the Linux networking " "bridge (typically named ``br100``, although this is configurable) on the " "systems running the ``nova-network`` service. All instances of the system " "are attached to the same bridge, which is configured manually by the network " "administrator." msgstr "" #: ../compute-networking-nova.rst:44 msgid "Flat Network Manager" msgstr "" #: ../compute-networking-nova.rst:48 msgid "" "Configuration injection currently only works on Linux-style systems that " "keep networking configuration in ``/etc/network/interfaces``." msgstr "" #: ../compute-networking-nova.rst:53 msgid "" "In this mode, OpenStack starts a DHCP server (dnsmasq) to allocate IP " "addresses to VM instances from the specified subnet, in addition to manually " "configuring the networking bridge. IP addresses for VM instances are " "assigned from a subnet specified by the network administrator." msgstr "" #: ../compute-networking-nova.rst:59 msgid "" "Like flat mode, all instances are attached to a single bridge on the compute " "node. Additionally, a DHCP server configures instances depending on single-/" "multi-host mode, alongside each ``nova-network``. In this mode, Compute does " "a bit more configuration. It attempts to bridge into an Ethernet device " "(``flat_interface``, eth0 by default). For every instance, Compute allocates " "a fixed IP address and configures dnsmasq with the MAC ID and IP address for " "the VM. 
Dnsmasq does not take part in the IP address allocation process, it " "only hands out IPs according to the mapping done by Compute. Instances " "receive their fixed IPs with the :command:`dhcpdiscover` command. These IPs " "are not assigned to any of the host's network interfaces, only to the guest-" "side interface for the VM." msgstr "" #: ../compute-networking-nova.rst:72 msgid "" "In any setup with flat networking, the hosts providing the ``nova-network`` " "service are responsible for forwarding traffic from the private network. " "They also run and configure dnsmasq as a DHCP server listening on this " "bridge, usually on IP address 10.0.0.1 (see :ref:`compute-dnsmasq`). Compute " "can determine the NAT entries for each network, although sometimes NAT is " "not used, such as when the network has been configured with all public IPs, " "or if a hardware router is used (which is a high availability option). In " "this case, hosts need to have ``br100`` configured and physically connected " "to any other nodes that are hosting VMs. You must set the " "``flat_network_bridge`` option or create networks with the bridge parameter " "in order to avoid raising an error. Compute nodes have iptables or ebtables " "entries created for each project and instance to protect against MAC ID or " "IP address spoofing and ARP poisoning." msgstr "" #: ../compute-networking-nova.rst:86 msgid "Flat DHCP Network Manager" msgstr "" #: ../compute-networking-nova.rst:90 msgid "" "In single-host Flat DHCP mode you will be able to ping VMs through their " "fixed IP from the ``nova-network`` node, but you cannot ping them from the " "compute nodes. This is expected behavior." msgstr "" #: ../compute-networking-nova.rst:96 msgid "" "This is the default mode for OpenStack Compute. In this mode, Compute " "creates a VLAN and bridge for each tenant. For multiple-machine " "installations, the VLAN Network Mode requires a switch that supports VLAN " "tagging (IEEE 802.1Q). 
The tenant gets a range of private IPs that are only " "accessible from inside the VLAN. In order for a user to access the instances " "in their tenant, a special VPN instance (code named ``cloudpipe``) needs to " "be created. Compute generates a certificate and key for the user to access " "the VPN and starts the VPN automatically. It provides a private network " "segment for each tenant's instances that can be accessed through a dedicated " "VPN connection from the internet. In this mode, each tenant gets its own " "VLAN, Linux networking bridge, and subnet." msgstr "" #: ../compute-networking-nova.rst:109 msgid "" "The subnets are specified by the network administrator, and are assigned " "dynamically to a tenant when required. A DHCP server is started for each " "VLAN to pass out IP addresses to VM instances from the subnet assigned to " "the tenant. All instances belonging to one tenant are bridged into the same " "VLAN for that tenant. OpenStack Compute creates the Linux networking bridges " "and VLANs when required." msgstr "" #: ../compute-networking-nova.rst:115 msgid "VLAN Network Manager" msgstr "" #: ../compute-networking-nova.rst:117 msgid "" "These network managers can co-exist in a cloud system. However, because you " "cannot select the type of network for a given tenant, you cannot configure " "multiple network types in a single Compute installation." msgstr "" #: ../compute-networking-nova.rst:121 msgid "" "All network managers configure the network using network drivers. For " "example, the Linux L3 driver (``l3.py`` and ``linux_net.py``), which makes " "use of ``iptables``, ``route`` and other network management facilities, and " "the libvirt `network filtering facilities `__. The driver is not tied to any particular network manager; all " "network managers use the same driver. The driver usually initializes only " "when the first VM lands on this host node." 
msgstr "" #: ../compute-networking-nova.rst:130 msgid "" "All network managers operate in either single-host or multi-host mode. This " "choice greatly influences the network configuration. In single-host mode, a " "single ``nova-network`` service provides a default gateway for VMs and hosts " "a single DHCP server (dnsmasq). In multi-host mode, each compute node runs " "its own ``nova-network`` service. In both cases, all traffic between VMs and " "the internet flows through ``nova-network``. Each mode has benefits and " "drawbacks. For more on this, see the Network Topology section in the " "`OpenStack Operations Guide `__." msgstr "" #: ../compute-networking-nova.rst:140 msgid "" "All networking options require network connectivity to be already set up " "between OpenStack physical nodes. OpenStack does not configure any physical " "network interfaces. All network managers automatically create VM virtual " "interfaces. Some network managers can also create network bridges such as " "``br100``." msgstr "" #: ../compute-networking-nova.rst:146 msgid "" "The internal network interface is used for communication with VMs. The " "interface should not have an IP address attached to it before OpenStack " "installation, it serves only as a fabric where the actual endpoints are VMs " "and dnsmasq. Additionally, the internal network interface must be in " "``promiscuous`` mode, so that it can receive packets whose target MAC " "address is the guest VM, not the host." msgstr "" #: ../compute-networking-nova.rst:153 msgid "" "All machines must have a public and internal network interface (controlled " "by these options: ``public_interface`` for the public interface, and " "``flat_interface`` and ``vlan_interface`` for the internal interface with " "flat or VLAN managers). This guide refers to the public network as the " "external network and the private network as the internal or tenant network." 
msgstr "" #: ../compute-networking-nova.rst:160 msgid "" "For flat and flat DHCP modes, use the :command:`nova network-create` command " "to create a network:" msgstr "" #: ../compute-networking-nova.rst:169 msgid "specifies the network subnet." msgstr "" #: ../compute-networking-nova.rst:170 msgid "" "specifies a range of fixed IP addresses to allocate, and can be a subset of " "the ``--fixed-range-v4`` argument." msgstr "" #: ../compute-networking-nova.rst:173 msgid "" "specifies the bridge device to which this network is connected on every " "compute node." msgstr "" #: ../compute-networking-nova.rst:174 msgid "This example uses the following parameters:" msgstr "" #: ../compute-networking-nova.rst:179 msgid "DHCP server: dnsmasq" msgstr "" #: ../compute-networking-nova.rst:181 msgid "" "The Compute service uses `dnsmasq `__ as the DHCP server when using either Flat DHCP Network Manager or " "VLAN Network Manager. For Compute to operate in IPv4/IPv6 dual-stack mode, " "use at least dnsmasq v2.63. The ``nova-network`` service is responsible for " "starting dnsmasq processes." msgstr "" #: ../compute-networking-nova.rst:188 msgid "" "The behavior of dnsmasq can be customized by creating a dnsmasq " "configuration file. Specify the configuration file using the " "``dnsmasq_config_file`` configuration option:" msgstr "" #: ../compute-networking-nova.rst:196 msgid "" "For more information about creating a dnsmasq configuration file, see the " "`OpenStack Configuration Reference `__, and `the dnsmasq documentation `__." msgstr "" #: ../compute-networking-nova.rst:202 msgid "" "Dnsmasq also acts as a caching DNS server for instances. You can specify the " "DNS server that dnsmasq uses by setting the ``dns_server`` configuration " "option in ``/etc/nova/nova.conf``. 
This example configures dnsmasq to use " "Google's public DNS server:" msgstr "" #: ../compute-networking-nova.rst:211 msgid "" "Dnsmasq logs to syslog (typically ``/var/log/syslog`` or ``/var/log/" "messages``, depending on Linux distribution). Logs can be useful for " "troubleshooting, especially in a situation where VM instances boot " "successfully but are not reachable over the network." msgstr "" #: ../compute-networking-nova.rst:216 msgid "" "Administrators can specify the starting point IP address to reserve with the " "DHCP server (in the format n.n.n.n) with this command:" msgstr "" #: ../compute-networking-nova.rst:223 msgid "" "This reservation only affects which IP address the VMs start at, not the " "fixed IP addresses that ``nova-network`` places on the bridges." msgstr "" #: ../compute-networking-nova.rst:228 msgid "Configure Compute to use IPv6 addresses" msgstr "" #: ../compute-networking-nova.rst:230 msgid "" "If you are using OpenStack Compute with ``nova-network``, you can put " "Compute into dual-stack mode, so that it uses both IPv4 and IPv6 addresses " "for communication. In dual-stack mode, instances can acquire their IPv6 " "global unicast addresses by using a stateless address auto-configuration " "mechanism [RFC 4862/2462]. IPv4/IPv6 dual-stack mode works with both " "``VlanManager`` and ``FlatDHCPManager`` networking modes." msgstr "" #: ../compute-networking-nova.rst:238 msgid "" "In ``VlanManager`` networking mode, each project uses a different 64-bit " "global routing prefix. In ``FlatDHCPManager`` mode, all instances use one 64-" "bit global routing prefix." msgstr "" #: ../compute-networking-nova.rst:242 msgid "" "This configuration was tested with virtual machine images that have an IPv6 " "stateless address auto-configuration capability. This capability is required " "for any VM to run with an IPv6 address. You must use an EUI-64 address for " "stateless address auto-configuration. 
Each node that executes a ``nova-*`` " "service must have ``python-netaddr`` and ``radvd`` installed." msgstr "" #: ../compute-networking-nova.rst:249 msgid "**Switch into IPv4/IPv6 dual-stack mode**" msgstr "" #: ../compute-networking-nova.rst:251 msgid "For every node running a ``nova-*`` service, install python-netaddr:" msgstr "" #: ../compute-networking-nova.rst:257 msgid "" "For every node running ``nova-network``, install ``radvd`` and configure " "IPv6 networking:" msgstr "" #: ../compute-networking-nova.rst:266 msgid "" "On all nodes, edit the ``nova.conf`` file and specify ``use_ipv6 = True``." msgstr "" #: ../compute-networking-nova.rst:269 msgid "Restart all ``nova-*`` services." msgstr "" #: ../compute-networking-nova.rst:271 msgid "**IPv6 configuration options**" msgstr "" #: ../compute-networking-nova.rst:273 msgid "" "You can use the following options with the :command:`nova network-create` " "command:" msgstr "" #: ../compute-networking-nova.rst:276 msgid "" "Add a fixed range for IPv6 addresses to the :command:`nova network-create` " "command. Specify ``public`` or ``private`` after the ``network-create`` " "parameter." msgstr "" #: ../compute-networking-nova.rst:285 msgid "" "Set the IPv6 global routing prefix by using the ``--fixed_range_v6`` " "parameter. The default value for the parameter is ``fd00::/48``." msgstr "" #: ../compute-networking-nova.rst:289 msgid "" "When you use ``FlatDHCPManager``, the command uses the original ``--" "fixed_range_v6`` value. For example:" msgstr "" #: ../compute-networking-nova.rst:297 msgid "" "When you use ``VlanManager``, the command increments the subnet ID to create " "subnet prefixes. Guest VMs use this prefix to generate their IPv6 global " "unicast addresses. 
For example:" msgstr "" # #-#-#-#-# compute-networking-nova.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-remote-console-access.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# compute-security.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# objectstorage-troubleshoot.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-networking-nova.rst:309 ../compute-networking-nova.rst:506 #: ../compute-remote-console-access.rst:134 ../compute-security.rst:103 #: ../objectstorage-troubleshoot.rst:87 msgid "Configuration option = Default value" msgstr "" #: ../compute-networking-nova.rst:311 ../compute-networking-nova.rst:508 msgid "[DEFAULT]" msgstr "" #: ../compute-networking-nova.rst:313 msgid "fixed_range_v6 = fd00::/48" msgstr "" #: ../compute-networking-nova.rst:314 msgid "(StrOpt) Fixed IPv6 address block" msgstr "" #: ../compute-networking-nova.rst:315 msgid "gateway_v6 = None" msgstr "" #: ../compute-networking-nova.rst:316 msgid "(StrOpt) Default IPv6 gateway" msgstr "" #: ../compute-networking-nova.rst:317 msgid "ipv6_backend = rfc2462" msgstr "" #: ../compute-networking-nova.rst:318 msgid "(StrOpt) Backend to use for IPv6 generation" msgstr "" #: ../compute-networking-nova.rst:319 msgid "use_ipv6 = False" msgstr "" #: ../compute-networking-nova.rst:320 msgid "(BoolOpt) Use IPv6" msgstr "" #: ../compute-networking-nova.rst:323 msgid "Metadata service" msgstr "" #: ../compute-networking-nova.rst:325 msgid "" "Compute uses a metadata service for virtual machine instances to retrieve " "instance-specific data. Instances access the metadata service at " "``http://169.254.169.254``. The metadata service supports two sets of APIs: " "an OpenStack metadata API and an EC2-compatible API. Both APIs are versioned " "by date." 
msgstr "" #: ../compute-networking-nova.rst:331 msgid "" "To retrieve a list of supported versions for the OpenStack metadata API, " "make a GET request to ``http://169.254.169.254/openstack``:" msgstr "" #: ../compute-networking-nova.rst:342 msgid "" "To list supported versions for the EC2-compatible metadata API, make a GET " "request to ``http://169.254.169.254``:" msgstr "" #: ../compute-networking-nova.rst:359 msgid "" "If you write a consumer for one of these APIs, always attempt to access the " "most recent API version supported by your consumer first, then fall back to " "an earlier version if the most recent one is not available." msgstr "" #: ../compute-networking-nova.rst:363 msgid "" "Metadata from the OpenStack API is distributed in JSON format. To retrieve " "the metadata, make a GET request to ``http://169.254.169.254/" "openstack/2012-08-10/meta_data.json``:" msgstr "" #: ../compute-networking-nova.rst:392 msgid "" "Instances also retrieve user data (passed as the ``user_data`` parameter in " "the API call or by the :option:`--user_data` flag in the :command:`nova " "boot` command) through the metadata service, by making a GET request to " "``http://169.254.169.254/openstack/2012-08-10/user_data``:" msgstr "" #: ../compute-networking-nova.rst:403 msgid "" "The metadata service has an API that is compatible with version 2009-04-04 " "of the `Amazon EC2 metadata service `__. This means " "that virtual machine images designed for EC2 will work properly with " "OpenStack." msgstr "" #: ../compute-networking-nova.rst:409 msgid "" "The EC2 API exposes a separate URL for each metadata element. 
Retrieve a " "listing of these elements by making a GET query to " "``http://169.254.169.254/2009-04-04/meta-data/``:" msgstr "" #: ../compute-networking-nova.rst:450 msgid "" "Instances can retrieve the public SSH key (identified by keypair name when a " "user requests a new instance) by making a GET request to " "``http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key``:" msgstr "" #: ../compute-networking-nova.rst:462 msgid "" "Instances can retrieve user data by making a GET request to " "``http://169.254.169.254/2009-04-04/user-data``:" msgstr "" #: ../compute-networking-nova.rst:471 msgid "" "The metadata service is implemented by either the ``nova-api`` service or " "the ``nova-api-metadata`` service. Note that the ``nova-api-metadata`` " "service is generally only used when running in multi-host mode, as it " "retrieves instance-specific metadata. If you are running the ``nova-api`` " "service, you must have ``metadata`` as one of the elements listed in the " "``enabled_apis`` configuration option in ``/etc/nova/nova.conf``. The " "default ``enabled_apis`` configuration setting includes the metadata " "service, so you do not need to modify it." msgstr "" #: ../compute-networking-nova.rst:480 msgid "" "Hosts access the service at ``169.254.169.254:80``, and this is translated " "to ``metadata_host:metadata_port`` by an iptables rule established by the " "``nova-network`` service. In multi-host mode, you can set ``metadata_host`` " "to ``127.0.0.1``." msgstr "" #: ../compute-networking-nova.rst:485 msgid "" "For instances to reach the metadata service, the ``nova-network`` service " "must configure iptables to NAT port ``80`` of the ``169.254.169.254`` " "address to the IP address specified in ``metadata_host`` (this defaults to ``" "$my_ip``, which is the IP address of the ``nova-network`` service) and port " "specified in ``metadata_port`` (which defaults to ``8775``) in ``/etc/nova/" "nova.conf``." 
msgstr "" #: ../compute-networking-nova.rst:494 msgid "" "The ``metadata_host`` configuration option must be an IP address, not a host " "name." msgstr "" #: ../compute-networking-nova.rst:497 msgid "" "The default Compute service settings assume that ``nova-network`` and ``nova-" "api`` are running on the same host. If this is not the case, in the ``/etc/" "nova/nova.conf`` file on the host running ``nova-network``, set the " "``metadata_host`` configuration option to the IP address of the host where " "``nova-api`` is running." msgstr "" #: ../compute-networking-nova.rst:510 msgid "metadata_cache_expiration = 15" msgstr "" #: ../compute-networking-nova.rst:511 msgid "" "(IntOpt) Time in seconds to cache metadata; 0 to disable metadata caching " "entirely (not recommended). Increasing this should improve response times of " "the metadata API when under heavy load. Higher values may increase memory " "usage and result in longer times for host metadata changes to take effect." msgstr "" #: ../compute-networking-nova.rst:516 msgid "metadata_host = $my_ip" msgstr "" #: ../compute-networking-nova.rst:517 msgid "(StrOpt) The IP address for the metadata API server" msgstr "" #: ../compute-networking-nova.rst:518 msgid "metadata_listen = 0.0.0.0" msgstr "" #: ../compute-networking-nova.rst:519 msgid "(StrOpt) The IP address on which the metadata API will listen." msgstr "" #: ../compute-networking-nova.rst:520 msgid "metadata_listen_port = 8775" msgstr "" #: ../compute-networking-nova.rst:521 msgid "(IntOpt) The port on which the metadata API will listen." 
msgstr "" #: ../compute-networking-nova.rst:522 msgid "metadata_manager = nova.api.manager.MetadataManager" msgstr "" #: ../compute-networking-nova.rst:523 msgid "(StrOpt) OpenStack metadata service manager" msgstr "" #: ../compute-networking-nova.rst:524 msgid "metadata_port = 8775" msgstr "" #: ../compute-networking-nova.rst:525 msgid "(IntOpt) The port for the metadata API port" msgstr "" #: ../compute-networking-nova.rst:526 msgid "metadata_workers = None" msgstr "" #: ../compute-networking-nova.rst:527 msgid "" "(IntOpt) Number of workers for metadata service. The default will be the " "number of CPUs available." msgstr "" #: ../compute-networking-nova.rst:528 msgid "" "vendordata_driver = nova.api.metadata.vendordata_json.JsonFileVendorData" msgstr "" #: ../compute-networking-nova.rst:529 msgid "(StrOpt) Driver to use for vendor data" msgstr "" #: ../compute-networking-nova.rst:530 msgid "vendordata_jsonfile_path = None" msgstr "" #: ../compute-networking-nova.rst:531 msgid "(StrOpt) File to load JSON formatted vendor data from" msgstr "" #: ../compute-networking-nova.rst:534 msgid "Enable ping and SSH on VMs" msgstr "" #: ../compute-networking-nova.rst:536 msgid "" "You need to enable ``ping`` and ``ssh`` on your VMs for network access. This " "can be done with either the :command:`nova` or :command:`euca2ools` commands." msgstr "" #: ../compute-networking-nova.rst:542 msgid "" "Run these commands as root only if the credentials used to interact with " "``nova-api`` are in ``/root/.bashrc``. If the EC2 credentials in the ``." "bashrc`` file are for an unprivileged user, you must run these commands as " "that user instead." 
msgstr "" #: ../compute-networking-nova.rst:547 msgid "Enable ping and SSH with :command:`nova` commands:" msgstr "" #: ../compute-networking-nova.rst:554 msgid "Enable ping and SSH with ``euca2ools``:" msgstr "" #: ../compute-networking-nova.rst:561 msgid "" "If you have run these commands and still cannot ping or SSH your instances, " "check the number of running ``dnsmasq`` processes, there should be two. If " "not, kill the processes and restart the service with these commands:" msgstr "" #: ../compute-networking-nova.rst:572 msgid "Configure public (floating) IP addresses" msgstr "" #: ../compute-networking-nova.rst:574 msgid "" "This section describes how to configure floating IP addresses with ``nova-" "network``. For information about doing this with OpenStack Networking, see :" "ref:`L3-routing-and-NAT`." msgstr "" #: ../compute-networking-nova.rst:579 msgid "Private and public IP addresses" msgstr "" #: ../compute-networking-nova.rst:581 msgid "" "In this section, the term floating IP address is used to refer to an IP " "address, usually public, that you can dynamically add to a running virtual " "instance." msgstr "" #: ../compute-networking-nova.rst:585 msgid "" "Every virtual instance is automatically assigned a private IP address. You " "can choose to assign a public (or floating) IP address instead. OpenStack " "Compute uses network address translation (NAT) to assign floating IPs to " "virtual instances." msgstr "" #: ../compute-networking-nova.rst:590 msgid "" "To be able to assign a floating IP address, edit the ``/etc/nova/nova.conf`` " "file to specify which interface the ``nova-network`` service should bind " "public IP addresses to:" msgstr "" #: ../compute-networking-nova.rst:598 msgid "" "If you make changes to the ``/etc/nova/nova.conf`` file while the ``nova-" "network`` service is running, you will need to restart the service to pick " "up the changes." 
msgstr "" #: ../compute-networking-nova.rst:604 msgid "" "Floating IPs are implemented by using a source NAT (SNAT rule in iptables), " "so security groups can sometimes display inconsistent behavior if VMs use " "their floating IP to communicate with other VMs, particularly on the same " "physical host. Traffic from VM to VM across the fixed network does not have " "this issue, and so this is the recommended setup. To ensure that traffic " "does not get SNATed to the floating range, explicitly set:" msgstr "" #: ../compute-networking-nova.rst:616 msgid "" "The ``x.x.x.x/y`` value specifies the range of floating IPs for each pool of " "floating IPs that you define. This configuration is also required if the VMs " "in the source group have floating IPs." msgstr "" #: ../compute-networking-nova.rst:621 msgid "Enable IP forwarding" msgstr "" #: ../compute-networking-nova.rst:623 msgid "" "IP forwarding is disabled by default on most Linux distributions. You will " "need to enable it in order to use floating IPs." msgstr "" #: ../compute-networking-nova.rst:628 msgid "" "IP forwarding only needs to be enabled on the nodes that run ``nova-" "network``. However, you will need to enable it on all compute nodes if you " "use ``multi_host`` mode." msgstr "" #: ../compute-networking-nova.rst:632 msgid "To check if IP forwarding is enabled, run:" msgstr "" #: ../compute-networking-nova.rst:639 ../compute-networking-nova.rst:654 msgid "Alternatively, run:" msgstr "" #: ../compute-networking-nova.rst:646 msgid "In these examples, IP forwarding is disabled." 
msgstr "" #: ../compute-networking-nova.rst:648 msgid "To enable IP forwarding dynamically, run:" msgstr "" #: ../compute-networking-nova.rst:660 msgid "" "To make the changes permanent, edit the ``/etc/sysctl.conf`` file and update " "the IP forwarding setting:" msgstr "" #: ../compute-networking-nova.rst:667 msgid "Save the file and run this command to apply the changes:" msgstr "" #: ../compute-networking-nova.rst:673 msgid "You can also apply the changes by restarting the network service:" msgstr "" #: ../compute-networking-nova.rst:675 msgid "on Ubuntu, Debian:" msgstr "" #: ../compute-networking-nova.rst:681 msgid "on RHEL, Fedora, CentOS, openSUSE and SLES:" msgstr "" #: ../compute-networking-nova.rst:688 msgid "Create a list of available floating IP addresses" msgstr "" #: ../compute-networking-nova.rst:690 msgid "" "Compute maintains a list of floating IP addresses that are available for " "assigning to instances. Use the :command:`nova-manage floating` commands to " "perform floating IP operations:" msgstr "" #: ../compute-networking-nova.rst:694 msgid "Add entries to the list:" msgstr "" #: ../compute-networking-nova.rst:700 msgid "List the floating IP addresses in the pool:" msgstr "" #: ../compute-networking-nova.rst:706 msgid "Create specific floating IPs for either a single address or a subnet:" msgstr "" #: ../compute-networking-nova.rst:713 msgid "" "Remove floating IP addresses using the same parameters as the create command:" msgstr "" #: ../compute-networking-nova.rst:720 msgid "" "For more information about how administrators can associate floating IPs " "with instances, see `Manage IP addresses `__ in the OpenStack Administrator " "Guide." msgstr "" #: ../compute-networking-nova.rst:726 msgid "Automatically add floating IPs" msgstr "" #: ../compute-networking-nova.rst:728 msgid "" "You can configure ``nova-network`` to automatically allocate and assign a " "floating IP address to virtual instances when they are launched. 
Add this " "line to the ``/etc/nova/nova.conf`` file:" msgstr "" #: ../compute-networking-nova.rst:736 msgid "Save the file, and restart ``nova-network``" msgstr "" #: ../compute-networking-nova.rst:740 msgid "" "If this option is enabled, but all floating IP addresses have already been " "allocated, the :command:`nova boot` command will fail." msgstr "" #: ../compute-networking-nova.rst:744 msgid "Remove a network from a project" msgstr "" #: ../compute-networking-nova.rst:746 msgid "" "You cannot delete a network that has been associated with a project. This " "section describes the procedure for disassociating it so that it can be " "deleted." msgstr "" #: ../compute-networking-nova.rst:750 msgid "" "In order to disassociate the network, you will need the ID of the project " "with which it is associated. To get the project ID, you will need to be an " "administrator." msgstr "" #: ../compute-networking-nova.rst:754 msgid "" "Disassociate the network from the project using the :command:`scrub` " "command, with the project ID as the final parameter:" msgstr "" #: ../compute-networking-nova.rst:762 msgid "Multiple interfaces for instances (multinic)" msgstr "" #: ../compute-networking-nova.rst:764 msgid "" "The multinic feature allows you to use more than one interface with your " "instances. This is useful in several scenarios:" msgstr "" #: ../compute-networking-nova.rst:767 msgid "SSL Configurations (VIPs)" msgstr "" #: ../compute-networking-nova.rst:769 msgid "Services failover/HA" msgstr "" #: ../compute-networking-nova.rst:771 msgid "Bandwidth Allocation" msgstr "" #: ../compute-networking-nova.rst:773 msgid "Administrative/Public access to your instances" msgstr "" #: ../compute-networking-nova.rst:775 msgid "" "Each VIP represents a separate network with its own IP block. 
Every network " "mode has its own set of changes regarding multinic usage:" msgstr "" #: ../compute-networking-nova.rst:788 msgid "Using multinic" msgstr "" #: ../compute-networking-nova.rst:790 msgid "" "In order to use multinic, create two networks, and attach them to the tenant " "(named ``project`` on the command line):" msgstr "" #: ../compute-networking-nova.rst:798 msgid "" "Each new instance will now receive two IP addresses from the respective " "DHCP servers:" msgstr "" #: ../compute-networking-nova.rst:813 msgid "" "Make sure you start the second interface on the instance, or it won't be " "reachable through the second IP." msgstr "" #: ../compute-networking-nova.rst:816 msgid "" "This example demonstrates how to set up the interfaces within the instance. " "This is the configuration that needs to be applied inside the image." msgstr "" #: ../compute-networking-nova.rst:820 msgid "Edit the ``/etc/network/interfaces`` file:" msgstr "" #: ../compute-networking-nova.rst:834 msgid "" "If the Virtual Network Service Neutron is installed, you can specify the " "networks to attach to the interfaces by using the :option:`--nic` flag with " "the :command:`nova` command:" msgstr "" #: ../compute-networking-nova.rst:843 msgid "Troubleshooting Networking" msgstr "" #: ../compute-networking-nova.rst:846 msgid "Cannot reach floating IPs" msgstr "" #: ../compute-networking-nova.rst:851 msgid "You cannot reach your instances through the floating IP address." msgstr "" #: ../compute-networking-nova.rst:856 msgid "" "Check that the default security group allows ICMP (ping) and SSH (port 22), " "so that you can reach the instances:" msgstr "" #: ../compute-networking-nova.rst:869 msgid "" "Check that the NAT rules have been added to iptables on the node that is " "running ``nova-network``:" msgstr "" #: ../compute-networking-nova.rst:878 msgid "" "Check that the public address (``68.99.26.170`` in this example) has been " "added to your public interface. 
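For a Debian or Ubuntu guest image, the ``/etc/network/interfaces`` edit mentioned above typically adds a second DHCP stanza; a sketch, with illustrative interface names:

```
# The primary network interface
auto eth0
iface eth0 inet dhcp

# The second interface, added for multinic
auto eth1
iface eth1 inet dhcp
```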
You should see the " "address in the listing when you use the :command:`ip addr` command:" msgstr "" #: ../compute-networking-nova.rst:894 msgid "" "You cannot use ``SSH`` to access an instance with a public IP from within " "the same server because the routing configuration does not allow it." msgstr "" #: ../compute-networking-nova.rst:898 msgid "" "Use ``tcpdump`` to identify if packets are being routed to the inbound " "interface on the compute host. If the packets are reaching the compute hosts " "but the connection is failing, the issue may be that the packet is being " "dropped by reverse path filtering. Try disabling reverse-path filtering on " "the inbound interface. For example, if the inbound interface is ``eth2``, " "run:" msgstr "" #: ../compute-networking-nova.rst:909 msgid "" "If this solves the problem, add the following line to ``/etc/sysctl.conf`` " "so that the reverse-path filter is persistent:" msgstr "" #: ../compute-networking-nova.rst:917 msgid "Temporarily disable firewall" msgstr "" #: ../compute-networking-nova.rst:922 msgid "" "Networking issues prevent administrators from accessing or reaching VMs " "through various pathways." msgstr "" #: ../compute-networking-nova.rst:928 msgid "" "You can disable the firewall by setting this option in ``/etc/nova/nova." "conf``:" msgstr "" #: ../compute-networking-nova.rst:941 msgid "Packet loss from instances to nova-network server (VLANManager mode)" msgstr "" #: ../compute-networking-nova.rst:946 msgid "" "If you can access your instances with ``SSH`` but the network to your " "instance is slow, or if you find that certain operations are slower than " "they should be (for example, ``sudo``), packet loss could be occurring on " "the connection to the instance." msgstr "" #: ../compute-networking-nova.rst:951 msgid "" "Packet loss can be caused by Linux networking configuration settings related " "to bridges. 
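Disabling reverse-path filtering on an inbound interface, as described above, can be sketched as follows; ``eth2`` is an illustrative name, and the write requires root:

```shell
# Show the current reverse-path filter mode for all interfaces:
# 0 = disabled, 1 = strict, 2 = loose.
cat /proc/sys/net/ipv4/conf/all/rp_filter

# To disable it on the inbound interface (as root; eth2 is illustrative):
#   sysctl -w net.ipv4.conf.eth2.rp_filter=0
# To persist the change, add this line to /etc/sysctl.conf:
#   net.ipv4.conf.eth2.rp_filter=0
```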
Certain settings can cause packets to be dropped between the " "VLAN interface (for example, ``vlan100``) and the associated bridge " "interface (for example, ``br100``) on the host running ``nova-network``." msgstr "" #: ../compute-networking-nova.rst:960 msgid "" "One way to check whether this is the problem is to open three terminals and " "run the following commands:" msgstr "" #: ../compute-networking-nova.rst:963 msgid "" "In the first terminal, on the host running ``nova-network``, use ``tcpdump`` " "on the VLAN interface to monitor DNS-related traffic (UDP, port 53). As " "root, run:" msgstr "" #: ../compute-networking-nova.rst:971 msgid "" "In the second terminal, also on the host running ``nova-network``, use " "``tcpdump`` to monitor DNS-related traffic on the bridge interface. As root, " "run:" msgstr "" #: ../compute-networking-nova.rst:979 msgid "" "In the third terminal, use ``SSH`` to access the instance and generate DNS " "requests by using the :command:`nslookup` command:" msgstr "" #: ../compute-networking-nova.rst:986 msgid "" "The symptoms may be intermittent, so try running :command:`nslookup` " "multiple times. If the network configuration is correct, the command should " "return immediately each time. If it is not correct, the command hangs for " "several seconds before returning." msgstr "" #: ../compute-networking-nova.rst:991 msgid "" "If the :command:`nslookup` command sometimes hangs, and there are packets " "that appear in the first terminal but not the second, then the problem may " "be due to filtering done on the bridges. 
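The three-terminal check described above might look like this; the interface names ``vlan100`` and ``br100`` and the lookup target are illustrative, and ``tcpdump`` must run as root:

```console
# Terminal 1, on the nova-network host: watch DNS traffic on the VLAN interface
tcpdump -K -p -i vlan100 -v -vv udp port 53

# Terminal 2, on the same host: watch the same traffic on the bridge interface
tcpdump -K -p -i br100 -v -vv udp port 53

# Terminal 3, inside the instance: generate DNS requests
nslookup openstack.org
```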
Try disabling filtering by " "running these commands as root:" msgstr "" #: ../compute-networking-nova.rst:1002 msgid "" "If this solves your issue, add the following line to ``/etc/sysctl.conf`` so " "that the changes are persistent:" msgstr "" #: ../compute-networking-nova.rst:1012 msgid "KVM: Network connectivity works initially, then fails" msgstr "" #: ../compute-networking-nova.rst:1017 msgid "" "With KVM hypervisors, instances running Ubuntu 12.04 sometimes lose network " "connectivity after functioning properly for a period of time." msgstr "" #: ../compute-networking-nova.rst:1023 msgid "" "Try loading the ``vhost_net`` kernel module as a workaround for this issue " "(see `bug #997978 `__). This kernel module may also `improve network performance " "`__ on KVM. To load the kernel " "module:" msgstr "" #: ../compute-networking-nova.rst:1036 msgid "Loading the module has no effect on running instances." msgstr "" #: ../compute-node-down.rst:5 msgid "Recover from a failed compute node" msgstr "" #: ../compute-node-down.rst:7 msgid "" "If you deploy Compute with a shared file system, you can use several methods " "to quickly recover from a node failure. This section discusses manual " "recovery." msgstr "" #: ../compute-node-down.rst:14 msgid "" "If a hardware malfunction or other error causes the cloud compute node to " "fail, you can use the :command:`nova evacuate` command to evacuate " "instances. See the `Administrator Guide `__." msgstr "" #: ../compute-node-down.rst:21 msgid "Manual recovery" msgstr "" #: ../compute-node-down.rst:22 msgid "To manually recover a failed compute node:" msgstr "" #: ../compute-node-down.rst:24 msgid "" "Identify the VMs on the affected hosts by using a combination of the :" "command:`nova list` and :command:`nova show` commands or the :command:`euca-" "describe-instances` command."
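The bridge-filtering change referred to above is conventionally these sysctl writes, run as root on the ``nova-network`` host. Note the assumption that the ``net.bridge`` keys are present, which requires the kernel bridge module to be loaded:

```console
sysctl -w net.bridge.bridge-nf-call-arptables=0
sysctl -w net.bridge.bridge-nf-call-iptables=0
sysctl -w net.bridge.bridge-nf-call-ip6tables=0
```

To persist the change, add the same three keys with value ``0`` to ``/etc/sysctl.conf``.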
msgstr "" #: ../compute-node-down.rst:28 msgid "" "For example, this command displays information about the i-000015b9 instance " "that runs on the np-rcc54 node:" msgstr "" #: ../compute-node-down.rst:36 msgid "" "Query the Compute database for the status of the host. This example converts " "an EC2 API instance ID to an OpenStack ID. If you use the :command:`nova` " "commands, you can substitute the ID directly. This example output is " "truncated:" msgstr "" #: ../compute-node-down.rst:64 msgid "Find the credentials for your database in the ``/etc/nova.conf`` file." msgstr "" #: ../compute-node-down.rst:66 msgid "" "Decide to which compute host to move the affected VM. Run this database " "command to move the VM to that host:" msgstr "" #: ../compute-node-down.rst:73 msgid "" "If you use a hypervisor that relies on libvirt, such as KVM, update the " "``libvirt.xml`` file in ``/var/lib/nova/instances/[instance ID]`` with these " "changes:" msgstr "" #: ../compute-node-down.rst:77 msgid "" "Change the ``DHCPSERVER`` value to the host IP address of the new compute " "host." msgstr "" #: ../compute-node-down.rst:80 msgid "Update the VNC IP to ``0.0.0.0``." msgstr "" #: ../compute-node-down.rst:82 msgid "Reboot the VM:" msgstr "" #: ../compute-node-down.rst:88 msgid "" "Typically, the database update and :command:`nova reboot` command recover a " "VM from a failed host. However, if problems persist, try one of these " "actions:" msgstr "" #: ../compute-node-down.rst:91 msgid "Use :command:`virsh` to recreate the network filter configuration." msgstr "" #: ../compute-node-down.rst:92 msgid "Restart Compute services." msgstr "" #: ../compute-node-down.rst:93 msgid "" "Update the ``vm_state`` and ``power_state`` fields in the Compute database."
msgstr "" #: ../compute-node-down.rst:98 msgid "Recover from a UID/GID mismatch" msgstr "" #: ../compute-node-down.rst:100 msgid "" "Sometimes when you run Compute with a shared file system or an automated " "configuration tool, files on your compute node might use the wrong UID or " "GID. This UID or GID mismatch can prevent you from running live migrations " "or starting virtual machines." msgstr "" #: ../compute-node-down.rst:105 msgid "" "This procedure runs on ``nova-compute`` hosts, based on the KVM hypervisor:" msgstr "" #: ../compute-node-down.rst:107 msgid "" "Set the nova UID to the same number in ``/etc/passwd`` on all hosts. For " "example, set the UID to ``112``." msgstr "" #: ../compute-node-down.rst:112 msgid "Choose UIDs or GIDs that are not in use for other users or groups." msgstr "" #: ../compute-node-down.rst:114 msgid "" "Set the ``libvirt-qemu`` UID to the same number in the ``/etc/passwd`` file " "on all hosts. For example, set the UID to ``119``." msgstr "" #: ../compute-node-down.rst:117 msgid "" "Set the ``nova`` group to the same number in the ``/etc/group`` file on all " "hosts. For example, set the group to ``120``." msgstr "" #: ../compute-node-down.rst:120 msgid "" "Set the ``libvirtd`` group to the same number in the ``/etc/group`` file on " "all hosts. For example, set the group to ``119``." msgstr "" #: ../compute-node-down.rst:123 msgid "Stop the services on the compute node." msgstr "" #: ../compute-node-down.rst:125 msgid "Change all files that the nova user or group owns. For example:" msgstr "" #: ../compute-node-down.rst:133 msgid "Repeat all steps for the ``libvirt-qemu`` files, if required." msgstr "" #: ../compute-node-down.rst:135 msgid "Restart the services." msgstr "" #: ../compute-node-down.rst:137 msgid "" "To verify that all files use the correct IDs, run the :command:`find` " "command." 
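The renumbering steps above can be sketched like this; the IDs reuse the examples above, the old UID/GID values are illustrative, and the commands must run as root with services stopped (exact command names may vary by distribution):

```console
# Give nova and libvirt-qemu the same IDs on every host (example values):
usermod -u 112 nova
usermod -u 119 libvirt-qemu
groupmod -g 120 nova
groupmod -g 119 libvirtd

# Re-own files still carrying the old IDs (108 and 104 are illustrative):
find / -uid 108 -exec chown nova {} \;
find / -gid 104 -exec chgrp nova {} \;
```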
msgstr "" #: ../compute-node-down.rst:143 msgid "Recover cloud after disaster" msgstr "" #: ../compute-node-down.rst:145 msgid "" "This section describes how to manage your cloud after a disaster and back up " "persistent storage volumes. Backups are mandatory, even outside of disaster " "scenarios." msgstr "" #: ../compute-node-down.rst:149 msgid "" "For a definition of a disaster recovery plan (DRP), see `http://en.wikipedia." "org/wiki/Disaster\\_Recovery\\_Plan `_." msgstr "" #: ../compute-node-down.rst:152 msgid "" "A disk crash, network loss, or power failure can affect several components " "in your cloud architecture. The worst disaster for a cloud is a power loss. " "A power loss affects these components:" msgstr "" #: ../compute-node-down.rst:156 msgid "" "A cloud controller (``nova-api``, ``nova-objectstore``, ``nova-network``)" msgstr "" #: ../compute-node-down.rst:158 msgid "A compute node (``nova-compute``)" msgstr "" #: ../compute-node-down.rst:160 msgid "" "A storage area network (SAN) used by OpenStack Block Storage (``cinder-" "volumes``)" msgstr "" #: ../compute-node-down.rst:163 msgid "Before a power loss:" msgstr "" #: ../compute-node-down.rst:165 msgid "" "Create an active iSCSI session from the SAN to the cloud controller (used " "for the ``cinder-volumes`` LVM's VG)." msgstr "" #: ../compute-node-down.rst:168 msgid "" "Create an active iSCSI session from the cloud controller to the compute node " "(managed by ``cinder-volume``)." msgstr "" #: ../compute-node-down.rst:171 msgid "" "Create an iSCSI session for every volume (so 14 EBS volumes require 14 " "iSCSI sessions)." msgstr "" #: ../compute-node-down.rst:174 msgid "" "Create ``iptables`` or ``ebtables`` rules from the cloud controller to the " "compute node. This allows access from the cloud controller to the running " "instance."
msgstr "" #: ../compute-node-down.rst:178 msgid "" "Save the current state of the database, the current state of the running " "instances, and the attached volumes (mount point, volume ID, volume status, " "etc), at least from the cloud controller to the compute node." msgstr "" #: ../compute-node-down.rst:182 msgid "After power resumes and all hardware components restart:" msgstr "" #: ../compute-node-down.rst:184 msgid "The iSCSI session from the SAN to the cloud no longer exists." msgstr "" #: ../compute-node-down.rst:186 msgid "" "The iSCSI session from the cloud controller to the compute node no longer " "exists." msgstr "" #: ../compute-node-down.rst:189 msgid "" "nova-network reapplies configurations on boot and, as a result, recreates " "the iptables and ebtables from the cloud controller to the compute node." msgstr "" #: ../compute-node-down.rst:192 msgid "Instances stop running." msgstr "" #: ../compute-node-down.rst:194 msgid "" "Instances are not lost because neither ``destroy`` nor ``terminate`` ran. " "The files for the instances remain on the compute node." msgstr "" #: ../compute-node-down.rst:197 msgid "The database does not update." msgstr "" #: ../compute-node-down.rst:199 msgid "**Begin recovery**" msgstr "" #: ../compute-node-down.rst:203 msgid "Do not add any steps or change the order of steps in this procedure." msgstr "" #: ../compute-node-down.rst:205 msgid "" "Check the current relationship between the volume and its instance, so that " "you can recreate the attachment." msgstr "" #: ../compute-node-down.rst:208 msgid "" "Use the :command:`nova volume-list` command to get this information. Note " "that the :command:`nova` client can get volume information from OpenStack " "Block Storage." msgstr "" #: ../compute-node-down.rst:212 msgid "" "Update the database to clean the stalled state. 
Do this for every volume by " "using these queries:" msgstr "" #: ../compute-node-down.rst:223 msgid "Use the :command:`nova volume-list` command to list all volumes." msgstr "" #: ../compute-node-down.rst:225 msgid "" "Restart the instances by using the :command:`nova reboot INSTANCE` command." msgstr "" #: ../compute-node-down.rst:229 msgid "" "Some instances completely reboot and become reachable, while some might stop " "at the plymouth stage. This is expected behavior. DO NOT reboot a second " "time." msgstr "" #: ../compute-node-down.rst:233 msgid "" "Instance state at this stage depends on whether you added an ``/etc/fstab`` " "entry for that volume. Images built with the cloud-init package remain in a " "``pending`` state, while others skip the missing volume and start. You " "perform this step to ask Compute to reboot every instance so that the stored " "state is preserved. It does not matter if not all instances come up " "successfully. For more information about cloud-init, see `help.ubuntu.com/" "community/CloudInit/ `__." msgstr "" #: ../compute-node-down.rst:242 msgid "" "If required, run the :command:`nova volume-attach` command to reattach the " "volumes to their respective instances. This example uses a file of listed " "volumes to reattach them:" msgstr "" #: ../compute-node-down.rst:259 msgid "" "Instances that were stopped at the plymouth stage now automatically continue " "booting and start normally. Instances that previously started successfully " "can now see the volume." msgstr "" #: ../compute-node-down.rst:263 msgid "Log in to the instances with SSH and reboot them." msgstr "" #: ../compute-node-down.rst:265 msgid "" "If some services depend on the volume or if a volume has an entry in fstab, " "you can now restart the instance. 
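The cleanup queries referenced above, run against the ``nova`` database, might look like this; a sketch, assuming the ``volumes`` table layout of older nova schemas, which may differ in your release:

```console
mysql -u root -p nova -e "UPDATE volumes SET mountpoint=NULL;"
mysql -u root -p nova -e "UPDATE volumes SET status='available' WHERE status <> 'error_deleting';"
mysql -u root -p nova -e "UPDATE volumes SET attach_status='detached';"
mysql -u root -p nova -e "UPDATE volumes SET instance_id=0;"
```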
Restart directly from the instance itself " "and not through :command:`nova`:" msgstr "" #: ../compute-node-down.rst:273 msgid "When you plan for and complete a disaster recovery, follow these tips:" msgstr "" #: ../compute-node-down.rst:275 msgid "" "Use the ``errors=remount`` option in the ``fstab`` file to prevent data " "corruption." msgstr "" #: ../compute-node-down.rst:278 msgid "" "In the event of an I/O error, this option prevents writes to the disk. Add " "this configuration option into the cinder-volume server that performs the " "iSCSI connection to the SAN and into the instances' ``fstab`` files." msgstr "" #: ../compute-node-down.rst:282 msgid "" "Do not add the entry for the SAN's disks to the cinder-volume's ``fstab`` " "file." msgstr "" #: ../compute-node-down.rst:285 msgid "" "Some systems hang on that step, which means you could lose access to your " "cloud-controller. To re-run the session manually, run this command before " "performing the mount:" msgstr "" #: ../compute-node-down.rst:293 msgid "" "On your instances, if you have the whole ``/home/`` directory on the disk, " "leave a user's directory with the user's bash files and the " "``authorized_keys`` file instead of emptying the ``/home/`` directory and " "mapping the disk on it." msgstr "" #: ../compute-node-down.rst:298 msgid "" "This action enables you to connect to the instance without the volume " "attached, if you allow only connections through public keys." msgstr "" #: ../compute-node-down.rst:301 msgid "" "To script the disaster recovery plan (DRP), use the `https://github.com/" "Razique `_ bash script." msgstr "" #: ../compute-node-down.rst:304 msgid "This script completes these steps:" msgstr "" #: ../compute-node-down.rst:306 msgid "Creates an array for instances and their attached volumes." msgstr "" #: ../compute-node-down.rst:308 msgid "Updates the MySQL database." msgstr "" #: ../compute-node-down.rst:310 msgid "Restarts all instances with euca2ools." 
msgstr "" #: ../compute-node-down.rst:312 msgid "Reattaches the volumes." msgstr "" #: ../compute-node-down.rst:314 msgid "Uses Compute credentials to make an SSH connection into every instance." msgstr "" #: ../compute-node-down.rst:316 msgid "" "The script includes a ``test mode``, which enables you to perform the " "sequence for only one instance." msgstr "" #: ../compute-node-down.rst:319 msgid "" "To reproduce the power loss, connect to the compute node that runs that " "instance and close the iSCSI session. Do not detach the volume by using the :" "command:`nova volume-detach` command. You must manually close the iSCSI " "session. This example closes an iSCSI session with the number ``15``:" msgstr "" #: ../compute-node-down.rst:328 msgid "Do not forget the :option:`-r` option. Otherwise, all sessions close." msgstr "" #: ../compute-node-down.rst:332 msgid "" "There is potential for data loss while running instances during this " "procedure. If you are using Liberty or earlier, ensure you have the correct " "patch and set the options appropriately." msgstr "" #: ../compute-remote-console-access.rst:0 msgid "**Description of SPICE configuration options**" msgstr "" #: ../compute-remote-console-access.rst:0 msgid "**Description of VNC configuration options**" msgstr "" #: ../compute-remote-console-access.rst:3 msgid "Configure remote console access" msgstr "" #: ../compute-remote-console-access.rst:5 msgid "" "To provide a remote console or remote desktop access to guest virtual " "machines, use VNC or SPICE HTML5 through either the OpenStack dashboard or " "the command line. Best practice is to select one or the other to run." msgstr "" #: ../compute-remote-console-access.rst:10 msgid "About nova-consoleauth" msgstr "" #: ../compute-remote-console-access.rst:12 msgid "" "Both client proxies leverage a shared service to manage token authentication " "called ``nova-consoleauth``. This service must be running for either proxy " "to work. 
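Closing a single iSCSI session, as described above, looks like this; session ``15`` is the example number from the text, and the command runs as root on the compute node:

```console
# Log out of iSCSI session 15 only; without -r, ALL sessions are closed.
iscsiadm -m session -r 15 -u
```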
Many proxies of either type can be run against a single ``nova-" "consoleauth`` service in a cluster configuration." msgstr "" #: ../compute-remote-console-access.rst:17 msgid "" "Do not confuse the ``nova-consoleauth`` shared service with ``nova-" "console``, which is a XenAPI-specific service that most recent VNC proxy " "architectures do not use." msgstr "" #: ../compute-remote-console-access.rst:22 msgid "SPICE console" msgstr "" #: ../compute-remote-console-access.rst:24 msgid "" "OpenStack Compute supports VNC consoles to guests. The VNC protocol is " "fairly limited, lacking support for multiple monitors, bi-directional audio, " "reliable cut-and-paste, video streaming and more. SPICE is a new protocol " "that aims to address the limitations in VNC and provide good remote desktop " "support." msgstr "" #: ../compute-remote-console-access.rst:30 msgid "" "SPICE support in OpenStack Compute shares a similar architecture to the VNC " "implementation. The OpenStack dashboard uses a SPICE-HTML5 widget in its " "console tab that communicates to the ``nova-spicehtml5proxy`` service by " "using SPICE-over-websockets. The ``nova-spicehtml5proxy`` service " "communicates directly with the hypervisor process by using SPICE." msgstr "" #: ../compute-remote-console-access.rst:36 msgid "" "VNC must be explicitly disabled to get access to the SPICE console. Set the " "``vnc_enabled`` option to ``False`` in the ``[DEFAULT]`` section to disable " "the VNC console." 
msgstr "" #: ../compute-remote-console-access.rst:40 msgid "" "Use the following options to configure SPICE as the console for OpenStack " "Compute:" msgstr "" #: ../compute-remote-console-access.rst:47 msgid "**[spice]**" msgstr "" #: ../compute-remote-console-access.rst:49 msgid "Spice configuration option = Default value" msgstr "" #: ../compute-remote-console-access.rst:51 msgid "``agent_enabled = True``" msgstr "" #: ../compute-remote-console-access.rst:52 msgid "(BoolOpt) Enable spice guest agent support" msgstr "" #: ../compute-remote-console-access.rst:53 msgid "``enabled = False``" msgstr "" #: ../compute-remote-console-access.rst:54 msgid "(BoolOpt) Enable spice related features" msgstr "" #: ../compute-remote-console-access.rst:55 msgid "``html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html``" msgstr "" #: ../compute-remote-console-access.rst:56 msgid "" "(StrOpt) Location of spice HTML5 console proxy, in the form " "\"http://127.0.0.1:6082/spice_auto.html\"" msgstr "" #: ../compute-remote-console-access.rst:58 msgid "``html5proxy_host = 0.0.0.0``" msgstr "" #: ../compute-remote-console-access.rst:59 #: ../compute-remote-console-access.rst:143 msgid "(StrOpt) Host on which to listen for incoming requests" msgstr "" #: ../compute-remote-console-access.rst:60 msgid "``html5proxy_port = 6082``" msgstr "" #: ../compute-remote-console-access.rst:61 #: ../compute-remote-console-access.rst:145 msgid "(IntOpt) Port on which to listen for incoming requests" msgstr "" #: ../compute-remote-console-access.rst:62 msgid "``keymap = en-us``" msgstr "" #: ../compute-remote-console-access.rst:63 msgid "(StrOpt) Keymap for spice" msgstr "" #: ../compute-remote-console-access.rst:64 msgid "``server_listen = 127.0.0.1``" msgstr "" #: ../compute-remote-console-access.rst:65 msgid "(StrOpt) IP address on which instance spice server should listen" msgstr "" #: ../compute-remote-console-access.rst:66 msgid "``server_proxyclient_address = 127.0.0.1``" msgstr "" #: 
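Taken together, the options above produce a ``nova.conf`` fragment along these lines; the values shown are the defaults listed above, with VNC disabled as required, and the addresses should be adjusted for your deployment:

```
[DEFAULT]
vnc_enabled = False

[spice]
enabled = True
agent_enabled = True
html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html
html5proxy_host = 0.0.0.0
html5proxy_port = 6082
keymap = en-us
server_listen = 127.0.0.1
server_proxyclient_address = 127.0.0.1
```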
../compute-remote-console-access.rst:67 msgid "" "(StrOpt) The address to which proxy clients (like nova-spicehtml5proxy) " "should connect" msgstr "" #: ../compute-remote-console-access.rst:71 msgid "VNC console proxy" msgstr "" #: ../compute-remote-console-access.rst:73 msgid "" "The VNC proxy is an OpenStack component that enables compute service users " "to access their instances through VNC clients." msgstr "" #: ../compute-remote-console-access.rst:78 msgid "" "The web proxy console URLs do not support the websocket protocol scheme " "(ws://) on python versions less than 2.7.4." msgstr "" #: ../compute-remote-console-access.rst:81 msgid "The VNC console connection works as follows:" msgstr "" #: ../compute-remote-console-access.rst:83 msgid "" "A user connects to the API and gets an ``access_url`` such as, ``http://ip:" "port/?token=xyz``." msgstr "" #: ../compute-remote-console-access.rst:86 msgid "The user pastes the URL in a browser or uses it as a client parameter." msgstr "" #: ../compute-remote-console-access.rst:89 msgid "The browser or client connects to the proxy." msgstr "" #: ../compute-remote-console-access.rst:91 msgid "" "The proxy talks to ``nova-consoleauth`` to authorize the token for the user, " "and maps the token to the *private* host and port of the VNC server for an " "instance." msgstr "" #: ../compute-remote-console-access.rst:95 msgid "" "The compute host specifies the address that the proxy should use to connect " "through the ``nova.conf`` file option, ``vncserver_proxyclient_address``. In " "this way, the VNC proxy works as a bridge between the public network and " "private host network." msgstr "" #: ../compute-remote-console-access.rst:100 msgid "" "The proxy initiates the connection to VNC server and continues to proxy " "until the session ends." msgstr "" #: ../compute-remote-console-access.rst:103 msgid "" "The proxy also tunnels the VNC protocol over WebSockets so that the " "``noVNC`` client can talk to VNC servers. 
In general, the VNC proxy:" msgstr "" #: ../compute-remote-console-access.rst:106 msgid "" "Bridges between the public network where the clients live and the private " "network where VNC servers live." msgstr "" #: ../compute-remote-console-access.rst:109 msgid "Mediates token authentication." msgstr "" #: ../compute-remote-console-access.rst:111 msgid "" "Transparently deals with hypervisor-specific connection details to provide a " "uniform client experience." msgstr "" #: ../compute-remote-console-access.rst:119 msgid "VNC configuration options" msgstr "" #: ../compute-remote-console-access.rst:121 msgid "" "To customize the VNC console, use the following configuration options in " "your ``nova.conf`` file:" msgstr "" #: ../compute-remote-console-access.rst:126 msgid "" "To support :ref:`live migration `, " "you cannot specify a specific IP address for ``vncserver_listen``, because " "that IP address does not exist on the destination host." msgstr "" #: ../compute-remote-console-access.rst:136 msgid "**[DEFAULT]**" msgstr "" #: ../compute-remote-console-access.rst:138 msgid "``daemon = False``" msgstr "" #: ../compute-remote-console-access.rst:139 msgid "(BoolOpt) Become a daemon (background process)" msgstr "" #: ../compute-remote-console-access.rst:140 msgid "``key = None``" msgstr "" #: ../compute-remote-console-access.rst:141 msgid "(StrOpt) SSL key file (if separate from cert)" msgstr "" #: ../compute-remote-console-access.rst:142 msgid "``novncproxy_host = 0.0.0.0``" msgstr "" #: ../compute-remote-console-access.rst:144 msgid "``novncproxy_port = 6080``" msgstr "" #: ../compute-remote-console-access.rst:146 msgid "``record = False``" msgstr "" #: ../compute-remote-console-access.rst:147 msgid "(BoolOpt) Record sessions to FILE.[session_number]" msgstr "" #: ../compute-remote-console-access.rst:148 msgid "``source_is_ipv6 = False``" msgstr "" #: ../compute-remote-console-access.rst:149 msgid "(BoolOpt) Source is ipv6" msgstr "" #: 
../compute-remote-console-access.rst:150 msgid "``ssl_only = False``" msgstr "" #: ../compute-remote-console-access.rst:151 msgid "(BoolOpt) Disallow non-encrypted connections" msgstr "" #: ../compute-remote-console-access.rst:152 msgid "``web = /usr/share/spice-html5``" msgstr "" #: ../compute-remote-console-access.rst:153 msgid "(StrOpt) Run webserver on same port. Serve files from DIR." msgstr "" #: ../compute-remote-console-access.rst:154 msgid "**[vmware]**" msgstr "" #: ../compute-remote-console-access.rst:156 msgid "``vnc_port = 5900``" msgstr "" #: ../compute-remote-console-access.rst:157 msgid "(IntOpt) VNC starting port" msgstr "" #: ../compute-remote-console-access.rst:158 msgid "``vnc_port_total = 10000``" msgstr "" #: ../compute-remote-console-access.rst:159 msgid "vnc_port_total = 10000" msgstr "" #: ../compute-remote-console-access.rst:160 msgid "**[vnc]**" msgstr "" #: ../compute-remote-console-access.rst:162 msgid "enabled = True" msgstr "" #: ../compute-remote-console-access.rst:163 msgid "(BoolOpt) Enable VNC related features" msgstr "" #: ../compute-remote-console-access.rst:164 msgid "novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html" msgstr "" #: ../compute-remote-console-access.rst:165 msgid "" "(StrOpt) Location of VNC console proxy, in the form \"http://127.0.0.1:6080/" "vnc_auto.html\"" msgstr "" #: ../compute-remote-console-access.rst:167 msgid "vncserver_listen = 127.0.0.1" msgstr "" #: ../compute-remote-console-access.rst:168 msgid "(StrOpt) IP address on which instance vncservers should listen" msgstr "" #: ../compute-remote-console-access.rst:169 msgid "vncserver_proxyclient_address = 127.0.0.1" msgstr "" #: ../compute-remote-console-access.rst:170 msgid "" "(StrOpt) The address to which proxy clients (like nova-xvpvncproxy) should " "connect" msgstr "" #: ../compute-remote-console-access.rst:172 msgid "xvpvncproxy_base_url = http://127.0.0.1:6081/console" msgstr "" #: ../compute-remote-console-access.rst:173 msgid "" 
"(StrOpt) Location of nova xvp VNC console proxy, in the form " "\"http://127.0.0.1:6081/console\"" msgstr "" #: ../compute-remote-console-access.rst:178 msgid "" "The ``vncserver_proxyclient_address`` defaults to ``127.0.0.1``, which is " "the address of the compute host that Compute instructs proxies to use when " "connecting to instance servers." msgstr "" #: ../compute-remote-console-access.rst:182 msgid "For all-in-one XenServer domU deployments, set this to ``169.254.0.1.``" msgstr "" #: ../compute-remote-console-access.rst:185 msgid "" "For multi-host XenServer domU deployments, set to a ``dom0 management IP`` " "on the same network as the proxies." msgstr "" #: ../compute-remote-console-access.rst:188 msgid "" "For multi-host libvirt deployments, set to a host management IP on the same " "network as the proxies." msgstr "" #: ../compute-remote-console-access.rst:192 msgid "Typical deployment" msgstr "" #: ../compute-remote-console-access.rst:194 msgid "A typical deployment has the following components:" msgstr "" #: ../compute-remote-console-access.rst:196 msgid "A ``nova-consoleauth`` process. Typically runs on the controller host." msgstr "" #: ../compute-remote-console-access.rst:198 msgid "" "One or more ``nova-novncproxy`` services. Supports browser-based noVNC " "clients. For simple deployments, this service typically runs on the same " "machine as ``nova-api`` because it operates as a proxy between the public " "network and the private compute host network." msgstr "" #: ../compute-remote-console-access.rst:203 msgid "" "One or more ``nova-xvpvncproxy`` services. Supports the special Java client " "discussed here. For simple deployments, this service typically runs on the " "same machine as ``nova-api`` because it acts as a proxy between the public " "network and the private compute host network." msgstr "" #: ../compute-remote-console-access.rst:208 msgid "" "One or more compute hosts. 
These compute hosts must have correctly " "configured options, as follows." msgstr "" #: ../compute-remote-console-access.rst:212 msgid "nova-novncproxy (noVNC)" msgstr "" #: ../compute-remote-console-access.rst:214 msgid "" "You must install the noVNC package, which contains the ``nova-novncproxy`` " "service. As root, run the following command:" msgstr "" #: ../compute-remote-console-access.rst:221 msgid "The service starts automatically on installation." msgstr "" #: ../compute-remote-console-access.rst:223 msgid "To restart the service, run:" msgstr "" #: ../compute-remote-console-access.rst:229 msgid "" "The configuration option parameter should point to your ``nova.conf`` file, " "which includes the message queue server address and credentials." msgstr "" #: ../compute-remote-console-access.rst:232 msgid "By default, ``nova-novncproxy`` binds on ``0.0.0.0:6080``." msgstr "" #: ../compute-remote-console-access.rst:234 msgid "" "To connect the service to your Compute deployment, add the following " "configuration options to your ``nova.conf`` file:" msgstr "" #: ../compute-remote-console-access.rst:237 msgid "``vncserver_listen=0.0.0.0``" msgstr "" #: ../compute-remote-console-access.rst:239 msgid "" "Specifies the address on which the VNC service should bind. Make sure it is " "assigned one of the compute node interfaces. This address is the one used by " "your domain file." msgstr "" #: ../compute-remote-console-access.rst:249 msgid "To use live migration, use the 0.0.0.0 address." msgstr "" #: ../compute-remote-console-access.rst:251 msgid "``vncserver_proxyclient_address=127.0.0.1``" msgstr "" #: ../compute-remote-console-access.rst:253 msgid "" "The address of the compute host that Compute instructs proxies to use when " "connecting to instance ``vncservers``." 
msgstr "" #: ../compute-remote-console-access.rst:257 msgid "Frequently asked questions about VNC access to virtual machines" msgstr "" #: ../compute-remote-console-access.rst:259 msgid "" "**Q: What is the difference between ``nova-xvpvncproxy`` and ``nova-" "novncproxy``?**" msgstr "" #: ../compute-remote-console-access.rst:262 msgid "" "A: ``nova-xvpvncproxy``, which ships with OpenStack Compute, is a proxy that " "supports a simple Java client. ``nova-novncproxy`` uses noVNC to provide VNC " "support through a web browser." msgstr "" #: ../compute-remote-console-access.rst:266 msgid "" "**Q: I want VNC support in the OpenStack dashboard. What services do I need?" "**" msgstr "" #: ../compute-remote-console-access.rst:269 msgid "" "A: You need ``nova-novncproxy``, ``nova-consoleauth``, and correctly " "configured compute hosts." msgstr "" #: ../compute-remote-console-access.rst:272 msgid "" "**Q: When I use ``nova get-vnc-console`` or click on the VNC tab of the " "OpenStack dashboard, it hangs. Why?**" msgstr "" #: ../compute-remote-console-access.rst:275 msgid "" "A: Make sure you are running ``nova-consoleauth`` (in addition to ``nova-" "novncproxy``). The proxies rely on ``nova-consoleauth`` to validate tokens, " "and wait for a reply from it until a timeout is reached." msgstr "" #: ../compute-remote-console-access.rst:279 msgid "" "**Q: My VNC proxy worked fine during my all-in-one test, but now it doesn't " "work on multi-host. Why?**" msgstr "" #: ../compute-remote-console-access.rst:282 msgid "" "A: The default options work for an all-in-one install, but changes must be " "made on your compute hosts once you start to build a cluster. 
As an example, " "suppose you have two servers:" msgstr "" #: ../compute-remote-console-access.rst:291 msgid "Your ``nova-compute`` configuration file must set the following values:" msgstr "" #: ../compute-remote-console-access.rst:306 msgid "" "``novncproxy_base_url`` and ``xvpvncproxy_base_url`` use a public IP; this " "is the URL that is ultimately returned to clients, which generally do not " "have access to your private network. Your PROXYSERVER must be able to reach " "``vncserver_proxyclient_address``, because that is the address over which " "the VNC connection is proxied." msgstr "" #: ../compute-remote-console-access.rst:312 msgid "" "**Q: My noVNC does not work with recent versions of web browsers. Why?**" msgstr "" #: ../compute-remote-console-access.rst:314 msgid "" "A: Make sure you have installed ``python-numpy``, which is required to " "support a newer version of the WebSocket protocol (HyBi-07+)." msgstr "" #: ../compute-remote-console-access.rst:317 msgid "" "**Q: How do I adjust the dimensions of the VNC window image in the OpenStack " "dashboard?**" msgstr "" #: ../compute-remote-console-access.rst:320 msgid "" "A: These values are hard-coded in a Django HTML template. To alter them, " "edit the ``_detail_vnc.html`` template file. The location of this file " "varies based on Linux distribution. On Ubuntu 14.04, the file is at ``/usr/" "share/pyshared/horizon/dashboards/nova/instances/templates/instances/" "_detail_vnc.html``." msgstr "" #: ../compute-remote-console-access.rst:326 msgid "Modify the ``width`` and ``height`` options, as follows:" msgstr "" #: ../compute-remote-console-access.rst:332 msgid "" "**Q: My noVNC connections failed with ValidationError: Origin header " "protocol does not match. Why?**" msgstr "" #: ../compute-remote-console-access.rst:335 msgid "" "A: Make sure the ``base_url`` matches your TLS setting. 
If you are using https " "console connections, make sure that the value of ``novncproxy_base_url`` is " "set explicitly where the ``nova-novncproxy`` service is running." msgstr "" #: ../compute-root-wrap-reference.rst:0 msgid "**Filters configuration options**" msgstr "" #: ../compute-root-wrap-reference.rst:0 msgid "**rootwrap.conf configuration options**" msgstr "" #: ../compute-root-wrap-reference.rst:5 msgid "Secure with rootwrap" msgstr "" #: ../compute-root-wrap-reference.rst:7 msgid "" "Rootwrap allows unprivileged users to safely run Compute actions as the root " "user. Compute previously used :command:`sudo` for this purpose, but this was " "difficult to maintain, and did not allow advanced filters. The :command:" "`rootwrap` command replaces :command:`sudo` for Compute." msgstr "" #: ../compute-root-wrap-reference.rst:12 msgid "" "To use rootwrap, prefix the Compute command with :command:`nova-rootwrap`. " "For example:" msgstr "" #: ../compute-root-wrap-reference.rst:19 msgid "" "A generic ``sudoers`` entry lets the Compute user run :command:`nova-" "rootwrap` as root. The :command:`nova-rootwrap` code looks for filter " "definition directories in its configuration file, and loads command filters " "from them. It then checks if the command requested by Compute matches one of " "those filters and, if so, executes the command (as root). If no filter " "matches, it denies the request." msgstr "" #: ../compute-root-wrap-reference.rst:28 msgid "" "Be aware of issues with using NFS and root-owned files. The NFS share must " "be configured with the ``no_root_squash`` option enabled, in order for " "rootwrap to work correctly." msgstr "" #: ../compute-root-wrap-reference.rst:32 msgid "" "Rootwrap is fully controlled by the root user. The root user owns the " "sudoers entry which allows Compute to run a specific rootwrap executable as " "root, and only with a specific configuration file (which should also be " "owned by root). 
The :command:`nova-rootwrap` command imports the Python " "modules it needs from a cleaned, system-default PYTHONPATH. The root-owned " "configuration file points to root-owned filter definition directories, which " "contain root-owned filter definition files. This chain ensures that the " "Compute user itself is not in control of the configuration or modules used " "by the :command:`nova-rootwrap` executable." msgstr "" #: ../compute-root-wrap-reference.rst:44 msgid "" "Configure rootwrap in the ``rootwrap.conf`` file. Because it is in the " "trusted security path, it must be owned and writable by only the root user. " "The ``rootwrap_config=entry`` parameter specifies the file's location in the " "sudoers entry and in the ``nova.conf`` configuration file." msgstr "" #: ../compute-root-wrap-reference.rst:50 msgid "" "The ``rootwrap.conf`` file uses an INI file format with these sections and " "parameters:" msgstr "" #: ../compute-root-wrap-reference.rst:56 ../compute-root-wrap-reference.rst:95 msgid "Configuration option=Default value" msgstr "" #: ../compute-root-wrap-reference.rst:57 ../compute-root-wrap-reference.rst:96 msgid "(Type) Description" msgstr "" #: ../compute-root-wrap-reference.rst:58 msgid "[DEFAULT] filters\_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap" msgstr "" #: ../compute-root-wrap-reference.rst:60 msgid "" "(ListOpt) Comma-separated list of directories containing filter definition " "files. Defines where rootwrap filters are stored. Directories defined on " "this line should all exist, and be owned and writable only by the root user." msgstr "" #: ../compute-root-wrap-reference.rst:67 msgid "" "If the root wrapper is not performing correctly, you can add a workaround " "option into the ``nova.conf`` configuration file. This workaround " "reconfigures the root wrapper to fall back to running commands as " "``sudo``, and is a Kilo release feature."
msgstr "" #: ../compute-root-wrap-reference.rst:72 msgid "" "Including this workaround in your configuration file safeguards your " "environment from issues that can impair root wrapper performance. Tool " "changes that have impacted `Python Build Reasonableness (PBR) `__, for example, are a known issue " "that affects root wrapper performance." msgstr "" #: ../compute-root-wrap-reference.rst:78 msgid "" "To set up this workaround, configure the ``disable_rootwrap`` option in the " "``[workaround]`` section of the ``nova.conf`` configuration file." msgstr "" #: ../compute-root-wrap-reference.rst:81 msgid "" "The filter definition files contain lists of filters that rootwrap will use " "to allow or deny a specific command. They are generally suffixed by " "``.filters``. Since they are in the trusted security path, they need to be " "owned and writable only by the root user. Their location is specified in the " "``rootwrap.conf`` file." msgstr "" #: ../compute-root-wrap-reference.rst:87 msgid "" "Filter definition files use an INI file format with a ``[Filters]`` section " "and several lines, each with a unique parameter name, which should be " "different for each filter you define:" msgstr "" #: ../compute-root-wrap-reference.rst:97 msgid "[Filters] filter\_name=kpartx: CommandFilter, /sbin/kpartx, root" msgstr "" #: ../compute-root-wrap-reference.rst:99 msgid "" "(ListOpt) Comma-separated list containing the filter class to use, followed " "by the Filter arguments (which vary depending on the Filter class selected)." msgstr "" #: ../compute-security.rst:0 msgid "**Description of trusted computing configuration options**" msgstr "" #: ../compute-security.rst:5 msgid "Security hardening" msgstr "" #: ../compute-security.rst:7 msgid "" "OpenStack Compute can be integrated with various third-party technologies to " "increase security. For more information, see the `OpenStack Security Guide " "`_." 
msgstr "" #: ../compute-security.rst:12 msgid "Trusted compute pools" msgstr "" #: ../compute-security.rst:14 msgid "" "Administrators can designate a group of compute hosts as trusted using " "trusted compute pools. The trusted hosts use hardware-based security " "features, such as the Intel Trusted Execution Technology (TXT), to provide " "an additional level of security. Combined with an external stand-alone, web-" "based remote attestation server, cloud providers can ensure that the compute " "node runs only software with verified measurements and can ensure a secure " "cloud stack." msgstr "" #: ../compute-security.rst:22 msgid "" "Trusted compute pools provide the ability for cloud subscribers to request " "services run only on verified compute nodes." msgstr "" #: ../compute-security.rst:25 msgid "The remote attestation server performs node verification like this:" msgstr "" #: ../compute-security.rst:27 msgid "Compute nodes boot with Intel TXT technology enabled." msgstr "" #: ../compute-security.rst:29 msgid "The compute node BIOS, hypervisor, and operating system are measured." msgstr "" #: ../compute-security.rst:31 msgid "" "When the attestation server challenges the compute node, the measured data " "is sent to the attestation server." msgstr "" #: ../compute-security.rst:34 msgid "" "The attestation server verifies the measurements against a known good " "database to determine node trustworthiness." msgstr "" #: ../compute-security.rst:37 msgid "" "A description of how to set up an attestation service is beyond the scope of " "this document. For an open source project that you can use to implement an " "attestation service, see the `Open Attestation `__ project." 
msgstr "" #: ../compute-security.rst:46 msgid "**Configuring Compute to use trusted compute pools**" msgstr "" #: ../compute-security.rst:48 msgid "" "Enable scheduling support for trusted compute pools by adding these lines to " "the ``DEFAULT`` section of the ``/etc/nova/nova.conf`` file:" msgstr "" #: ../compute-security.rst:58 msgid "" "Specify the connection information for your attestation service by adding " "these lines to the ``trusted_computing`` section of the ``/etc/nova/nova." "conf`` file:" msgstr "" # #-#-#-#-# compute-security.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# database.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-security.rst:76 ../database.rst:120 ../database.rst:160 msgid "In this example:" msgstr "" #: ../compute-security.rst:79 msgid "Host name or IP address of the host that runs the attestation service" msgstr "" #: ../compute-security.rst:80 msgid "server" msgstr "" #: ../compute-security.rst:83 msgid "HTTPS port for the attestation service" msgstr "" # #-#-#-#-# compute-security.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute-security.rst:83 ../telemetry-measurements.rst:955 #: ../telemetry-measurements.rst:958 ../telemetry-measurements.rst:961 #: ../telemetry-measurements.rst:1002 msgid "port" msgstr "" #: ../compute-security.rst:86 msgid "Certificate file used to verify the attestation server's identity" msgstr "" #: ../compute-security.rst:86 msgid "server_ca_file" msgstr "" #: ../compute-security.rst:89 msgid "The attestation service's URL path" msgstr "" #: ../compute-security.rst:89 msgid "api_url" msgstr "" #: ../compute-security.rst:92 msgid "An authentication blob, required by the attestation service." msgstr "" #: ../compute-security.rst:92 msgid "auth_blob" msgstr "" #: ../compute-security.rst:94 msgid "" "Save the file, and restart the ``nova-compute`` and ``nova-scheduler`` " "services to pick up the changes." 
msgstr "" #: ../compute-security.rst:97 msgid "" "To customize the trusted compute pools, use these configuration option " "settings:" msgstr "" #: ../compute-security.rst:105 msgid "[trusted_computing]" msgstr "" #: ../compute-security.rst:107 msgid "attestation_api_url = /OpenAttestationWebServices/V1.0" msgstr "" #: ../compute-security.rst:108 msgid "(StrOpt) Attestation web API URL" msgstr "" #: ../compute-security.rst:109 msgid "attestation_auth_blob = None" msgstr "" #: ../compute-security.rst:110 msgid "(StrOpt) Attestation authorization blob - must change" msgstr "" #: ../compute-security.rst:111 msgid "attestation_auth_timeout = 60" msgstr "" #: ../compute-security.rst:112 msgid "(IntOpt) Attestation status cache valid period length" msgstr "" #: ../compute-security.rst:113 msgid "attestation_insecure_ssl = False" msgstr "" #: ../compute-security.rst:114 msgid "(BoolOpt) Disable SSL cert verification for Attestation service" msgstr "" #: ../compute-security.rst:115 msgid "attestation_port = 8443" msgstr "" #: ../compute-security.rst:116 msgid "(StrOpt) Attestation server port" msgstr "" #: ../compute-security.rst:117 msgid "attestation_server = None" msgstr "" #: ../compute-security.rst:118 msgid "(StrOpt) Attestation server HTTP" msgstr "" #: ../compute-security.rst:119 msgid "attestation_server_ca_file = None" msgstr "" #: ../compute-security.rst:120 msgid "(StrOpt) Attestation server Cert file for Identity verification" msgstr "" #: ../compute-security.rst:122 msgid "**Specifying trusted flavors**" msgstr "" #: ../compute-security.rst:124 msgid "" "Flavors can be designated as trusted using the :command:`nova flavor-key " "set` command. 
In this example, the ``m1.tiny`` flavor is being set as " "trusted:" msgstr "" #: ../compute-security.rst:132 msgid "" "You can request that your instance is run on a trusted host by specifying a " "trusted flavor when booting the instance:" msgstr "" #: ../compute-security.rst:144 msgid "Encrypt Compute metadata traffic" msgstr "" #: ../compute-security.rst:146 msgid "**Enabling SSL encryption**" msgstr "" #: ../compute-security.rst:148 msgid "" "OpenStack supports encrypting Compute metadata traffic with HTTPS. Enable " "SSL encryption in the ``metadata_agent.ini`` file." msgstr "" #: ../compute-security.rst:151 msgid "Enable the HTTPS protocol." msgstr "" #: ../compute-security.rst:157 msgid "" "Determine whether insecure SSL connections are accepted for Compute metadata " "server requests. The default value is ``False``." msgstr "" #: ../compute-security.rst:164 msgid "Specify the path to the client certificate." msgstr "" #: ../compute-security.rst:170 msgid "Specify the path to the private key." msgstr "" #: ../compute-service-groups.rst:5 msgid "Configure Compute service groups" msgstr "" #: ../compute-service-groups.rst:7 msgid "" "The Compute service must know the status of each compute node to effectively " "manage and use them. This can include events like a user launching a new VM, " "the scheduler sending a request to a live node, or a query to the " "ServiceGroup API to determine if a node is live." msgstr "" #: ../compute-service-groups.rst:12 msgid "" "When a compute worker running the nova-compute daemon starts, it calls the " "join API to join the compute group. Any service (such as the scheduler) can " "query the group's membership and the status of its nodes. Internally, the " "ServiceGroup client driver automatically updates the compute worker status." 
msgstr "" #: ../compute-service-groups.rst:21 msgid "Database ServiceGroup driver" msgstr "" #: ../compute-service-groups.rst:23 msgid "" "By default, Compute uses the database driver to track if a node is live. In " "a compute worker, this driver periodically sends a ``db update`` command to " "the database, saying “I'm OK” with a timestamp. Compute uses a pre-defined " "timeout (``service_down_time``) to determine if a node is dead." msgstr "" #: ../compute-service-groups.rst:29 msgid "" "The driver has limitations, which can be problematic depending on your " "environment. If a lot of compute worker nodes need to be checked, the " "database can be put under heavy load, which can cause the timeout to " "trigger, and a live node could incorrectly be considered dead. By default, " "the timeout is 60 seconds. Reducing the timeout value can help in this " "situation, but you must also make the database update more frequently, which " "again increases the database workload." msgstr "" #: ../compute-service-groups.rst:37 msgid "" "The database contains data that is both transient (such as whether the node " "is alive) and persistent (such as entries for VM owners). With the " "ServiceGroup abstraction, Compute can treat each type separately." msgstr "" #: ../compute-service-groups.rst:44 msgid "ZooKeeper ServiceGroup driver" msgstr "" #: ../compute-service-groups.rst:46 msgid "" "The ZooKeeper ServiceGroup driver works by using ZooKeeper ephemeral nodes. " "ZooKeeper, unlike databases, is a distributed system, with its load divided " "among several servers. On a compute worker node, the driver can establish a " "ZooKeeper session, then create an ephemeral znode in the group directory. " "Ephemeral znodes have the same lifespan as the session. If the worker node " "or the nova-compute daemon crashes, or a network partition is in place " "between the worker and the ZooKeeper server quorums, the ephemeral znodes " "are removed automatically. 
The driver can be given group membership by " "running the :command:`ls` command in the group directory." msgstr "" #: ../compute-service-groups.rst:57 msgid "" "The ZooKeeper driver requires the ZooKeeper servers and client libraries. " "Setting up ZooKeeper servers is outside the scope of this guide (for more " "information, see `Apache Zookeeper `_). These " "client-side Python libraries must be installed on every compute node:" msgstr "" #: ../compute-service-groups.rst:63 msgid "**python-zookeeper**" msgstr "" #: ../compute-service-groups.rst:63 msgid "The official Zookeeper Python binding" msgstr "" #: ../compute-service-groups.rst:66 msgid "**evzookeeper**" msgstr "" #: ../compute-service-groups.rst:66 msgid "This library makes the binding work with the eventlet threading model." msgstr "" #: ../compute-service-groups.rst:68 msgid "" "This example assumes the ZooKeeper server addresses and ports are " "``192.168.2.1:2181``, ``192.168.2.2:2181``, and ``192.168.2.3:2181``." msgstr "" #: ../compute-service-groups.rst:71 msgid "" "These values in the ``/etc/nova/nova.conf`` file are required on every node " "for the ZooKeeper driver:" msgstr "" #: ../compute-service-groups.rst:85 msgid "Memcache ServiceGroup driver" msgstr "" #: ../compute-service-groups.rst:87 msgid "" "The memcache ServiceGroup driver uses memcached, a distributed memory object " "caching system that is used to increase site performance. For more details, " "see `memcached.org `_." msgstr "" #: ../compute-service-groups.rst:91 msgid "" "To use the memcache driver, you must install memcached. You might already " "have it installed, as the same driver is also used for the OpenStack Object " "Storage and OpenStack dashboard. If you need to install memcached, see the " "instructions in the `OpenStack Installation Guide `_." 
msgstr "" #: ../compute-service-groups.rst:96 msgid "" "These values in the ``/etc/nova/nova.conf`` file are required on every node " "for the memcache driver:" msgstr "" #: ../compute-system-admin.rst:5 msgid "System administration" msgstr "" #: ../compute-system-admin.rst:25 msgid "" "To effectively administer Compute, you must understand how the different " "installed nodes interact with each other. Compute can be installed in many " "different ways using multiple servers, but generally multiple compute nodes " "control the virtual servers and a cloud controller node contains the " "remaining Compute services." msgstr "" #: ../compute-system-admin.rst:31 msgid "" "The Compute cloud works using a series of daemon processes named ``nova-*`` " "that exist persistently on the host machine. These binaries can all run on " "the same machine or be spread out on multiple boxes in a large deployment. " "The responsibilities of services and drivers are:" msgstr "" #: ../compute-system-admin.rst:36 msgid "**Services**" msgstr "" #: ../compute-system-admin.rst:39 msgid "" "receives XML requests and sends them to the rest of the system. A WSGI app " "routes and authenticates requests. Supports the EC2 and OpenStack APIs. A " "``nova.conf`` configuration file is created when Compute is installed." msgstr "" #: ../compute-system-admin.rst:42 msgid "``nova-api``" msgstr "" #: ../compute-system-admin.rst:45 msgid "``nova-cert``" msgstr "" #: ../compute-system-admin.rst:45 msgid "manages certificates." msgstr "" #: ../compute-system-admin.rst:48 msgid "" "manages virtual machines. Loads a Service object, and exposes the public " "methods on ComputeManager through a Remote Procedure Call (RPC)." msgstr "" #: ../compute-system-admin.rst:50 msgid "``nova-compute``" msgstr "" #: ../compute-system-admin.rst:53 msgid "" "provides database-access support for Compute nodes (thereby reducing " "security risks)." 
msgstr "" #: ../compute-system-admin.rst:54 msgid "``nova-conductor``" msgstr "" #: ../compute-system-admin.rst:57 msgid "``nova-consoleauth``" msgstr "" #: ../compute-system-admin.rst:57 msgid "manages console authentication." msgstr "" #: ../compute-system-admin.rst:60 msgid "" "a simple file-based storage system for images that replicates most of the S3 " "API. It can be replaced with OpenStack Image service and either a simple " "image manager or OpenStack Object Storage as the virtual machine image " "storage facility. It must exist on the same node as ``nova-compute``." msgstr "" #: ../compute-system-admin.rst:64 msgid "``nova-objectstore``" msgstr "" #: ../compute-system-admin.rst:67 msgid "" "manages floating and fixed IPs, DHCP, bridging and VLANs. Loads a Service " "object which exposes the public methods on one of the subclasses of " "NetworkManager. Different networking strategies are available by changing " "the ``network_manager`` configuration option to ``FlatManager``, " "``FlatDHCPManager``, or ``VLANManager`` (defaults to ``VLANManager`` if " "nothing is specified)." msgstr "" #: ../compute-system-admin.rst:72 msgid "``nova-network``" msgstr "" #: ../compute-system-admin.rst:75 msgid "dispatches requests for new virtual machines to the correct node." msgstr "" #: ../compute-system-admin.rst:76 msgid "``nova-scheduler``" msgstr "" #: ../compute-system-admin.rst:79 msgid "" "provides a VNC proxy for browsers, allowing VNC consoles to access virtual " "machines." msgstr "" #: ../compute-system-admin.rst:80 msgid "``nova-novncproxy``" msgstr "" #: ../compute-system-admin.rst:84 msgid "" "Some services have drivers that change how the service implements its core " "functionality. For example, the ``nova-compute`` service supports drivers " "that let you choose which hypervisor type it can use. ``nova-network`` and " "``nova-scheduler`` also have drivers." 
msgstr "" # #-#-#-#-# compute.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_set_quotas.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_config-identity.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute.rst:3 ../dashboard_set_quotas.rst:43 #: ../dashboard_set_quotas.rst:46 ../dashboard_set_quotas.rst:49 #: ../dashboard_set_quotas.rst:52 ../dashboard_set_quotas.rst:54 #: ../dashboard_set_quotas.rst:57 ../dashboard_set_quotas.rst:60 #: ../dashboard_set_quotas.rst:63 ../dashboard_set_quotas.rst:69 #: ../networking_config-identity.rst:123 msgid "Compute" msgstr "" #: ../compute.rst:5 msgid "" "The OpenStack Compute service allows you to control an :term:`Infrastructure-" "as-a-Service (IaaS) ` cloud computing platform. It gives you control " "over instances and networks, and allows you to manage access to the cloud " "through users and projects." msgstr "" #: ../compute.rst:10 msgid "" "Compute does not include virtualization software. Instead, it defines " "drivers that interact with underlying virtualization mechanisms that run on " "your host operating system, and exposes functionality over a web-based API." msgstr "" #: ../compute_arch.rst:5 msgid "OpenStack Compute contains several main components." msgstr "" #: ../compute_arch.rst:7 msgid "" "The :term:`cloud controller` represents the global state and interacts with " "the other components. The ``API server`` acts as the web services front end " "for the cloud controller. The ``compute controller`` provides compute server " "resources and usually also contains the Compute service." msgstr "" #: ../compute_arch.rst:13 msgid "" "The ``object store`` is an optional component that provides storage " "services; you can also use OpenStack Object Storage instead." 
msgstr "" #: ../compute_arch.rst:16 msgid "" "An ``auth manager`` provides authentication and authorization services when " "used with the Compute system; you can also use OpenStack Identity as a " "separate authentication service instead." msgstr "" #: ../compute_arch.rst:20 msgid "" "A ``volume controller`` provides fast and permanent block-level storage for " "the compute servers." msgstr "" #: ../compute_arch.rst:23 msgid "" "The ``network controller`` provides virtual networks to enable compute " "servers to interact with each other and with the public network. You can " "also use OpenStack Networking instead." msgstr "" #: ../compute_arch.rst:27 msgid "" "The ``scheduler`` is used to select the most suitable compute controller to " "host an instance." msgstr "" #: ../compute_arch.rst:30 msgid "" "Compute uses a messaging-based, ``shared nothing`` architecture. All major " "components exist on multiple servers, including the compute, volume, and " "network controllers, and the Object Storage or Image service. The state of " "the entire system is stored in a database. The cloud controller communicates " "with the internal object store using HTTP, but it communicates with the " "scheduler, network controller, and volume controller using Advanced Message " "Queuing Protocol (AMQP). To avoid blocking a component while waiting for a " "response, Compute uses asynchronous calls, with a callback that is triggered " "when a response is received." msgstr "" #: ../compute_arch.rst:42 msgid "Hypervisors" msgstr "" #: ../compute_arch.rst:43 msgid "" "Compute controls hypervisors through an API server. Selecting the best " "hypervisor to use can be difficult, and you must take budget, resource " "constraints, supported features, and required technical specifications into " "account. However, the majority of OpenStack development is done on systems " "using KVM and Xen-based hypervisors. 
For a detailed list of features and " "support across different hypervisors, see http://wiki.openstack.org/" "HypervisorSupportMatrix." msgstr "" #: ../compute_arch.rst:51 msgid "" "You can also orchestrate clouds using multiple hypervisors in different " "availability zones. Compute supports the following hypervisors:" msgstr "" #: ../compute_arch.rst:54 msgid "`Baremetal `__" msgstr "" #: ../compute_arch.rst:56 msgid "`Docker `__" msgstr "" # #-#-#-#-# compute_arch.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-system-architecture.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute_arch.rst:58 ../telemetry-system-architecture.rst:121 msgid "" "`Hyper-V `__" msgstr "" # #-#-#-#-# compute_arch.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-system-architecture.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute_arch.rst:60 ../telemetry-system-architecture.rst:108 msgid "" "`Kernel-based Virtual Machine (KVM) `__" msgstr "" # #-#-#-#-# compute_arch.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-system-architecture.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute_arch.rst:63 ../telemetry-system-architecture.rst:112 msgid "`Linux Containers (LXC) `__" msgstr "" #: ../compute_arch.rst:65 msgid "`Quick Emulator (QEMU) `__" msgstr "" #: ../compute_arch.rst:67 msgid "`User Mode Linux (UML) `__" msgstr "" # #-#-#-#-# compute_arch.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-system-architecture.pot (Administrator Guide 0.9) #-#-#-#-# #: ../compute_arch.rst:69 ../telemetry-system-architecture.rst:125 msgid "" "`VMware vSphere `__" msgstr "" #: ../compute_arch.rst:72 msgid "`Xen `__" msgstr "" #: ../compute_arch.rst:74 msgid "" "For more information about hypervisors, see the `Hypervisors `__ section " "in the OpenStack Configuration Reference." 
msgstr "" #: ../compute_arch.rst:79 msgid "Tenants, users, and roles" msgstr "" #: ../compute_arch.rst:80 msgid "" "The Compute system is designed to be used by different consumers in the form " "of tenants on a shared system, and role-based access assignments. Roles " "control the actions that a user is allowed to perform." msgstr "" #: ../compute_arch.rst:84 msgid "" "Tenants are isolated resource containers that form the principal " "organizational structure within the Compute service. They consist of an " "individual VLAN, and volumes, instances, images, keys, and users. A user can " "specify the tenant by appending ``project_id`` to their access key. If no " "tenant is specified in the API request, Compute attempts to use a tenant " "with the same ID as the user." msgstr "" #: ../compute_arch.rst:91 msgid "For tenants, you can use quota controls to limit the:" msgstr "" #: ../compute_arch.rst:93 msgid "Number of volumes that can be created." msgstr "" #: ../compute_arch.rst:95 msgid "Number of processor cores and the amount of RAM that can be allocated." msgstr "" #: ../compute_arch.rst:98 msgid "" "Floating IP addresses assigned to any instance when it launches. This allows " "instances to have the same publicly accessible IP addresses." msgstr "" #: ../compute_arch.rst:101 msgid "" "Fixed IP addresses assigned to the same instance when it launches. This " "allows instances to have the same publicly or privately accessible IP " "addresses." msgstr "" #: ../compute_arch.rst:105 msgid "" "Roles control the actions a user is allowed to perform. By default, most " "actions do not require a particular role, but you can configure them by " "editing the ``policy.json`` file for user roles. For example, a rule can be " "defined so that a user must have the ``admin`` role in order to be able to " "allocate a public IP address." msgstr "" #: ../compute_arch.rst:111 msgid "" "A tenant limits users' access to particular images. 
Each user is assigned a " "user name and password. Keypairs granting access to an instance are enabled " "for each user, but quotas are set, so that each tenant can control resource " "consumption across available hardware resources." msgstr "" #: ../compute_arch.rst:119 msgid "" "Earlier versions of OpenStack used the term ``project`` instead of " "``tenant``. Because of this legacy terminology, some command-line tools use :" "option:`--project_id` where you would normally expect to enter a tenant ID." msgstr "" #: ../compute_arch.rst:125 msgid "Block storage" msgstr "" #: ../compute_arch.rst:126 msgid "" "OpenStack provides two classes of block storage: ephemeral storage and " "persistent volume." msgstr "" #: ../compute_arch.rst:129 msgid "**Ephemeral storage**" msgstr "" #: ../compute_arch.rst:131 msgid "" "Ephemeral storage includes a root ephemeral volume and an additional " "ephemeral volume." msgstr "" #: ../compute_arch.rst:134 msgid "" "The root disk is associated with an instance, and exists only for the life " "of this very instance. Generally, it is used to store an instance's root " "file system, persists across the guest operating system reboots, and is " "removed on an instance deletion. The amount of the root ephemeral volume is " "defined by the flavor of an instance." msgstr "" #: ../compute_arch.rst:140 msgid "" "In addition to the ephemeral root volume, all default types of flavors, " "except ``m1.tiny``, which is the smallest one, provide an additional " "ephemeral block device sized between 20 and 160 GB (a configurable value to " "suit an environment). It is represented as a raw block device with no " "partition table or file system. A cloud-aware operating system can discover, " "format, and mount such a storage device. OpenStack Compute defines the " "default file system for different operating systems as Ext4 for Linux " "distributions, VFAT for non-Linux and non-Windows operating systems, and " "NTFS for Windows. 
However, it is possible to specify any other filesystem " "type by using ``virt_mkfs`` or ``default_ephemeral_format`` configuration " "options." msgstr "" #: ../compute_arch.rst:154 msgid "" "For example, the ``cloud-init`` package included into an Ubuntu's stock " "cloud image, by default, formats this space as an Ext4 file system and " "mounts it on ``/mnt``. This is a cloud-init feature, and is not an OpenStack " "mechanism. OpenStack only provisions the raw storage." msgstr "" #: ../compute_arch.rst:159 msgid "**Persistent volume**" msgstr "" #: ../compute_arch.rst:161 msgid "" "A persistent volume is represented by a persistent virtualized block device " "independent of any particular instance, and provided by OpenStack Block " "Storage." msgstr "" #: ../compute_arch.rst:165 msgid "" "Only a single configured instance can access a persistent volume. Multiple " "instances cannot access a persistent volume. This type of configuration " "requires a traditional network file system to allow multiple instances " "accessing the persistent volume. It also requires a traditional network file " "system like NFS, CIFS, or a cluster file system such as GlusterFS. These " "systems can be built within an OpenStack cluster, or provisioned outside of " "it, but OpenStack software does not provide these features." msgstr "" #: ../compute_arch.rst:174 msgid "" "You can configure a persistent volume as bootable and use it to provide a " "persistent virtual instance similar to the traditional non-cloud-based " "virtualization system. It is still possible for the resulting instance to " "keep ephemeral storage, depending on the flavor selected. In this case, the " "root file system can be on the persistent volume, and its state is " "maintained, even if the instance is shut down. For more information about " "this type of configuration, see the `OpenStack Configuration Reference " "`__." 
msgstr "" #: ../compute_arch.rst:186 msgid "" "A persistent volume does not provide concurrent access from multiple " "instances. That type of configuration requires a traditional network file " "system like NFS, or CIFS, or a cluster file system such as GlusterFS. These " "systems can be built within an OpenStack cluster, or provisioned outside of " "it, but OpenStack software does not provide these features." msgstr "" #: ../compute_arch.rst:194 msgid "EC2 compatibility API" msgstr "" #: ../compute_arch.rst:195 msgid "" "In addition to the native compute API, OpenStack provides an EC2-compatible " "API. This API allows EC2 legacy workflows built for EC2 to work with " "OpenStack." msgstr "" #: ../compute_arch.rst:201 msgid "" "Nova in tree EC2-compatible API is deprecated. The `ec2-api project `_ is working to implement the EC2 " "API." msgstr "" #: ../compute_arch.rst:205 msgid "" "You can use numerous third-party tools and language-specific SDKs to " "interact with OpenStack clouds. You can use both native and compatibility " "APIs. Some of the more popular third-party tools are:" msgstr "" #: ../compute_arch.rst:210 msgid "" "A popular open source command-line tool for interacting with the EC2 API. " "This is convenient for multi-cloud environments where EC2 is the common API, " "or for transitioning from EC2-based clouds to OpenStack. For more " "information, see the `Eucalyptus Documentation `__." msgstr "" #: ../compute_arch.rst:214 msgid "Euca2ools" msgstr "" #: ../compute_arch.rst:217 msgid "" "A Firefox browser add-on that provides a graphical interface to many popular " "public and private cloud technologies, including OpenStack. For more " "information, see the `hybridfox site `__." msgstr "" #: ../compute_arch.rst:220 msgid "Hybridfox" msgstr "" #: ../compute_arch.rst:223 msgid "" "Python library for interacting with Amazon Web Services. You can use this " "library to access OpenStack through the EC2 compatibility API. 
For more " "information, see the `boto project page on GitHub `__." msgstr "" #: ../compute_arch.rst:226 msgid "boto" msgstr "" #: ../compute_arch.rst:229 msgid "" "A Ruby cloud services library. It provides methods to interact with a large " "number of cloud and virtualization platforms, including OpenStack. For more " "information, see the `fog site `__." msgstr "" #: ../compute_arch.rst:232 msgid "fog" msgstr "" #: ../compute_arch.rst:235 msgid "" "A PHP SDK designed to work with most OpenStack-based cloud deployments, as " "well as Rackspace public cloud. For more information, see the `php-opencloud " "site `__." msgstr "" #: ../compute_arch.rst:238 msgid "php-opencloud" msgstr "" #: ../compute_arch.rst:241 msgid "Building blocks" msgstr "" #: ../compute_arch.rst:242 msgid "" "In OpenStack the base operating system is usually copied from an image " "stored in the OpenStack Image service. This is the most common case and " "results in an ephemeral instance that starts from a known template state and " "loses all accumulated states on virtual machine deletion. It is also " "possible to put an operating system on a persistent volume in the OpenStack " "Block Storage volume system. This gives a more traditional persistent system " "that accumulates states which are preserved on the OpenStack Block Storage " "volume across the deletion and re-creation of the virtual machine. To get a " "list of available images on your system, run:" msgstr "" #: ../compute_arch.rst:267 msgid "Automatically generated UUID of the image" msgstr "" #: ../compute_arch.rst:270 msgid "Free form, human-readable name for image" msgstr "" #: ../compute_arch.rst:281 msgid "" "Virtual hardware templates are called ``flavors``. The default installation " "provides five flavors. 
By default, these are configurable by admin users, " "however that behavior can be changed by redefining the access controls for " "``compute_extension:flavormanage`` in ``/etc/nova/policy.json`` on the " "``compute-api`` server." msgstr "" #: ../compute_arch.rst:287 msgid "For a list of flavors that are available on your system:" msgstr "" #: ../compute_arch.rst:303 msgid "Compute service architecture" msgstr "" #: ../compute_arch.rst:304 msgid "" "These basic categories describe the service architecture and information " "about the cloud controller." msgstr "" #: ../compute_arch.rst:307 msgid "**API server**" msgstr "" #: ../compute_arch.rst:309 msgid "" "At the heart of the cloud framework is an API server, which makes command " "and control of the hypervisor, storage, and networking programmatically " "available to users." msgstr "" #: ../compute_arch.rst:313 msgid "" "The API endpoints are basic HTTP web services which handle authentication, " "authorization, and basic command and control functions using various API " "interfaces under the Amazon, Rackspace, and related models. This enables API " "compatibility with multiple existing tool sets created for interaction with " "offerings from other vendors. This broad compatibility prevents vendor lock-" "in." msgstr "" #: ../compute_arch.rst:320 msgid "**Message queue**" msgstr "" #: ../compute_arch.rst:322 msgid "" "A messaging queue brokers the interaction between compute nodes " "(processing), the networking controllers (software which controls network " "infrastructure), API endpoints, the scheduler (determines which physical " "hardware to allocate to a virtual resource), and similar components. " "Communication to and from the cloud controller is handled by HTTP requests " "through multiple API endpoints." msgstr "" #: ../compute_arch.rst:329 msgid "" "A typical message passing event begins with the API server receiving a " "request from a user. 
The API server authenticates the user and ensures that " "they are permitted to issue the subject command. The availability of objects " "implicated in the request is evaluated and, if available, the request is " "routed to the queuing engine for the relevant workers. Workers continually " "listen to the queue based on their role, and occasionally their type host " "name. When an applicable work request arrives on the queue, the worker takes " "assignment of the task and begins executing it. Upon completion, a response " "is dispatched to the queue which is received by the API server and relayed " "to the originating user. Database entries are queried, added, or removed as " "necessary during the process." msgstr "" #: ../compute_arch.rst:342 msgid "**Compute worker**" msgstr "" #: ../compute_arch.rst:344 msgid "" "Compute workers manage computing instances on host machines. The API " "dispatches commands to compute workers to complete these tasks:" msgstr "" #: ../compute_arch.rst:347 msgid "Run instances" msgstr "" #: ../compute_arch.rst:349 msgid "Delete instances (Terminate instances)" msgstr "" #: ../compute_arch.rst:351 msgid "Reboot instances" msgstr "" #: ../compute_arch.rst:353 msgid "Attach volumes" msgstr "" #: ../compute_arch.rst:355 msgid "Detach volumes" msgstr "" #: ../compute_arch.rst:357 msgid "Get console output" msgstr "" #: ../compute_arch.rst:359 msgid "**Network Controller**" msgstr "" #: ../compute_arch.rst:361 msgid "" "The Network Controller manages the networking resources on host machines. " "The API server dispatches commands through the message queue, which are " "subsequently processed by Network Controllers. 
Specific operations include:" msgstr "" #: ../compute_arch.rst:366 msgid "Allocating fixed IP addresses" msgstr "" #: ../compute_arch.rst:368 msgid "Configuring VLANs for projects" msgstr "" #: ../compute_arch.rst:370 msgid "Configuring networks for compute nodes" msgstr "" #: ../cross_project.rst:3 msgid "Cross-project features" msgstr "" #: ../cross_project.rst:5 msgid "" "Many features are common to all the OpenStack services and are consistent in " "their configuration and deployment patterns. Unless explicitly noted, you " "can safely assume that the features in this chapter are supported and " "configured in a consistent manner." msgstr "" #: ../cross_project_cors.rst:5 msgid "Cross-origin resource sharing" msgstr "" #: ../cross_project_cors.rst:9 msgid "This is a new feature in OpenStack Liberty." msgstr "" #: ../cross_project_cors.rst:11 msgid "" "OpenStack supports :term:`Cross-Origin Resource Sharing (CORS)`, a W3C " "specification defining a contract by which the single-origin policy of a " "user agent (usually a browser) may be relaxed. It permits it's javascript " "engine to access an API that does not reside on the same domain, protocol, " "or port." msgstr "" #: ../cross_project_cors.rst:16 msgid "" "This feature is most useful to organizations which maintain one or more " "custom user interfaces for OpenStack, as it permits those interfaces to " "access the services directly, rather than requiring an intermediate proxy " "server. It can, however, also be misused by malicious actors; please review " "the security advisory below for more information." msgstr "" #: ../cross_project_cors.rst:24 msgid "" "Both the Object Storage and dashboard projects provide CORS support that is " "not covered by this document. 
For those, please refer to their respective " "implementations:" msgstr "" #: ../cross_project_cors.rst:28 msgid "" "`CORS in Object Storage `_" msgstr "" #: ../cross_project_cors.rst:29 msgid "" "`CORS in dashboard `_" msgstr "" #: ../cross_project_cors.rst:33 msgid "Enabling CORS with configuration" msgstr "" #: ../cross_project_cors.rst:35 msgid "" "In most cases, CORS support is built directly into the service itself. To " "enable it, simply follow the configuration options exposed in the default " "configuration file, or add it yourself according to the pattern below." msgstr "" #: ../cross_project_cors.rst:48 msgid "" "Additional origins can be explicitly added. To express this in your " "configuration file, first begin with a ``[cors]`` group as above, into which " "you place your default configuration values. Then, add as many additional " "configuration groups as necessary, naming them ``[cors.{something}]`` (each " "name must be unique). The purpose of the suffix to ``cors.`` is legibility, " "we recommend using a reasonable human-readable string:" msgstr "" #: ../cross_project_cors.rst:75 msgid "Enabling CORS with PasteDeploy" msgstr "" #: ../cross_project_cors.rst:77 msgid "" "CORS can also be configured using PasteDeploy. First of all, ensure that " "OpenStack's ``oslo_middleware`` package (version 2.4.0 or later) is " "available in the Python environment that is running the service. Then, add " "the following configuration block to your ``paste.ini`` file." msgstr "" #: ../cross_project_cors.rst:93 msgid "" "To add an additional domain in oslo_middleware v2.4.0, add another filter. " "In v3.0.0 and after, you may add multiple domains in the above " "``allowed_origin`` field, separated by commas." msgstr "" #: ../cross_project_cors.rst:98 msgid "Security concerns" msgstr "" #: ../cross_project_cors.rst:100 msgid "" "CORS specifies a wildcard character ``*``, which permits access to all user " "agents, regardless of domain, protocol, or host. 
While there are valid use " "cases for this approach, it also permits a malicious actor to create a " "convincing facsimile of a user interface, and trick users into revealing " "authentication credentials. Please carefully evaluate your use case and the " "relevant documentation for any risk to your organization." msgstr "" #: ../cross_project_cors.rst:109 msgid "" "The CORS specification does not support using this wildcard as a part of a " "URI. Setting ``allowed_origin`` to ``*`` would work, while ``*.openstack." "org`` would not." msgstr "" #: ../cross_project_cors.rst:116 msgid "" "CORS is very easy to get wrong, as even one incorrect property will violate " "the prescribed contract. Here are some steps you can take to troubleshoot " "your configuration." msgstr "" #: ../cross_project_cors.rst:121 msgid "Check the service log" msgstr "" #: ../cross_project_cors.rst:123 msgid "" "The CORS middleware used by OpenStack provides verbose debug logging that " "should reveal most configuration problems. Here are some example log " "messages, and how to resolve them." msgstr "" #: ../cross_project_cors.rst:130 msgid "``CORS request from origin 'http://example.com' not permitted.``" msgstr "" #: ../cross_project_cors.rst:135 msgid "" "A request was received from the origin ``http://example.com``, however this " "origin was not found in the permitted list. The cause may be a superfluous " "port notation (ports 80 and 443 do not need to be specified). To correct, " "ensure that the configuration property for this host is identical to the " "host indicated in the log message." msgstr "" #: ../cross_project_cors.rst:144 msgid "``Request method 'DELETE' not in permitted list: GET,PUT,POST``" msgstr "" #: ../cross_project_cors.rst:149 msgid "" "A user agent has requested permission to perform a DELETE request, however " "the CORS configuration for the domain does not permit this. To correct, add " "this method to the ``allow_methods`` configuration property." 
msgstr "" #: ../cross_project_cors.rst:156 msgid "" "``Request header 'X-Custom-Header' not in permitted list: X-Other-Header``" msgstr "" #: ../cross_project_cors.rst:161 msgid "" "A request was received with the header ``X-Custom-Header``, which is not " "permitted. Add this header to the ``allow_headers`` configuration property." msgstr "" #: ../cross_project_cors.rst:166 msgid "Open your browser's console log" msgstr "" #: ../cross_project_cors.rst:168 msgid "" "Most browsers provide helpful debug output when a CORS request is rejected. " "Usually this happens when a request was successful, but the return headers " "on the response do not permit access to a property which the browser is " "trying to access." msgstr "" #: ../cross_project_cors.rst:174 msgid "Manually construct a CORS request" msgstr "" #: ../cross_project_cors.rst:176 msgid "" "By using ``curl`` or a similar tool, you can trigger a CORS response with a " "properly constructed HTTP request. An example request and response might " "look like this." msgstr "" #: ../cross_project_cors.rst:180 msgid "Request example:" msgstr "" #: ../cross_project_cors.rst:186 msgid "Response example:" msgstr "" #: ../cross_project_cors.rst:198 msgid "" "If the service does not return any access control headers, check the service " "log, such as ``/var/log/upstart/ironic-api.log`` for an indication on what " "went wrong." msgstr "" #: ../dashboard.rst:3 msgid "Dashboard" msgstr "" #: ../dashboard.rst:5 msgid "" "The OpenStack Dashboard is a web-based interface that allows you to manage " "OpenStack resources and services. The Dashboard allows you to interact with " "the OpenStack Compute cloud controller using the OpenStack APIs. For more " "information about installing and configuring the Dashboard, see the " "`OpenStack Installation Guide `__ for your operating system." msgstr "" #: ../dashboard.rst:29 msgid "" "To deploy the dashboard, see the `OpenStack dashboard documentation `__." 
msgstr "" #: ../dashboard.rst:31 msgid "" "To launch instances with the dashboard, see the `OpenStack End User Guide " "`__." msgstr "" #: ../dashboard_admin_manage_roles.rst:3 msgid "Create and manage roles" msgstr "" #: ../dashboard_admin_manage_roles.rst:5 msgid "" "A role is a personality that a user assumes to perform a specific set of " "operations. A role includes a set of rights and privileges. A user assumes " "that role inherits those rights and privileges." msgstr "" #: ../dashboard_admin_manage_roles.rst:11 msgid "" "OpenStack Identity service defines a user's role on a project, but it is " "completely up to the individual service to define what that role means. This " "is referred to as the service's policy. To get details about what the " "privileges for each role are, refer to the ``policy.json`` file available " "for each service in the ``/etc/SERVICE/policy.json`` file. For example, the " "policy defined for OpenStack Identity service is defined in the ``/etc/" "keystone/policy.json`` file." msgstr "" # #-#-#-#-# dashboard_admin_manage_roles.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_volumes.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_set_quotas.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_view_cloud_resources.pot (Administrator Guide 0.9) #-#-#-#-# #: ../dashboard_admin_manage_roles.rst:24 ../dashboard_manage_volumes.rst:21 #: ../dashboard_manage_volumes.rst:122 ../dashboard_manage_volumes.rst:146 #: ../dashboard_set_quotas.rst:81 ../dashboard_set_quotas.rst:99 #: ../dashboard_view_cloud_resources.rst:27 msgid "" "Log in to the dashboard and select the :guilabel:`admin` project from the " "drop-down list." msgstr "" #: ../dashboard_admin_manage_roles.rst:26 #: ../dashboard_admin_manage_roles.rst:37 #: ../dashboard_admin_manage_roles.rst:53 msgid "On the :guilabel:`Identity` tab, click the :guilabel:`Roles` category." 
msgstr "" #: ../dashboard_admin_manage_roles.rst:27 msgid "Click the :guilabel:`Create Role` button." msgstr "" #: ../dashboard_admin_manage_roles.rst:29 msgid "In the :guilabel:`Create Role` window, enter a name for the role." msgstr "" #: ../dashboard_admin_manage_roles.rst:30 msgid "Click the :guilabel:`Create Role` button to confirm your changes." msgstr "" #: ../dashboard_admin_manage_roles.rst:33 msgid "Edit a role" msgstr "" #: ../dashboard_admin_manage_roles.rst:35 #: ../dashboard_admin_manage_roles.rst:51 msgid "" "Log in to the dashboard and select the :guilabel:`Identity` project from the " "drop-down list." msgstr "" #: ../dashboard_admin_manage_roles.rst:38 msgid "Click the :guilabel:`Edit` button." msgstr "" #: ../dashboard_admin_manage_roles.rst:40 msgid "In the :guilabel:`Update Role` window, enter a new name for the role." msgstr "" #: ../dashboard_admin_manage_roles.rst:41 msgid "Click the :guilabel:`Update Role` button to confirm your changes." msgstr "" #: ../dashboard_admin_manage_roles.rst:45 msgid "Using the dashboard, you can edit only the name assigned to a role." msgstr "" #: ../dashboard_admin_manage_roles.rst:49 msgid "Delete a role" msgstr "" #: ../dashboard_admin_manage_roles.rst:54 msgid "" "Select the role you want to delete and click the :guilabel:`Delete Roles` " "button." msgstr "" #: ../dashboard_admin_manage_roles.rst:56 msgid "" "In the :guilabel:`Confirm Delete Roles` window, click :guilabel:`Delete " "Roles` to confirm the deletion." msgstr "" # #-#-#-#-# dashboard_admin_manage_roles.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_images.pot (Administrator Guide 0.9) #-#-#-#-# #: ../dashboard_admin_manage_roles.rst:59 ../dashboard_manage_images.rst:121 msgid "You cannot undo this action." 
msgstr "" #: ../dashboard_admin_manage_stacks.rst:3 msgid "Launch and manage stacks using the Dashboard" msgstr "" #: ../dashboard_admin_manage_stacks.rst:5 msgid "" "The Orchestration service provides a template-based orchestration engine for " "the OpenStack cloud. Orchestration services create and manage cloud " "infrastructure resources such as storage, networking, instances, and " "applications as a repeatable running environment." msgstr "" #: ../dashboard_admin_manage_stacks.rst:11 msgid "" "Administrators use templates to create stacks, which are collections of " "resources. For example, a stack might include instances, floating IPs, " "volumes, security groups, or users. The Orchestration service offers access " "to all OpenStack core services via a single modular template, with " "additional orchestration capabilities such as auto-scaling and basic high " "availability." msgstr "" #: ../dashboard_admin_manage_stacks.rst:22 msgid "" "administrative tasks on the command-line, see the `OpenStack Administrator " "Guide `__." msgstr "" #: ../dashboard_admin_manage_stacks.rst:28 msgid "" "There are no administration-specific tasks that can be done through the " "Dashboard." msgstr "" #: ../dashboard_admin_manage_stacks.rst:31 msgid "" "the basic creation and deletion of Orchestration stacks, refer to the " "`OpenStack End User Guide `__." msgstr "" #: ../dashboard_manage_flavors.rst:5 msgid "" "In OpenStack, a flavor defines the compute, memory, and storage capacity of " "a virtual server, also known as an instance. As an administrative user, you " "can create, edit, and delete flavors." msgstr "" #: ../dashboard_manage_flavors.rst:9 msgid "The following table lists the default flavors." 
msgstr "" #: ../dashboard_manage_flavors.rst:22 msgid "Create flavors" msgstr "" # #-#-#-#-# dashboard_manage_flavors.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_host_aggregates.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_images.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_instances.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_services.pot (Administrator Guide 0.9) #-#-#-#-# #: ../dashboard_manage_flavors.rst:24 ../dashboard_manage_flavors.rst:76 #: ../dashboard_manage_flavors.rst:89 ../dashboard_manage_flavors.rst:152 #: ../dashboard_manage_host_aggregates.rst:19 #: ../dashboard_manage_images.rst:26 ../dashboard_manage_images.rst:96 #: ../dashboard_manage_images.rst:112 ../dashboard_manage_instances.rst:16 #: ../dashboard_manage_instances.rst:38 ../dashboard_manage_instances.rst:68 #: ../dashboard_manage_services.rst:7 msgid "" "Log in to the Dashboard and select the :guilabel:`admin` project from the " "drop-down list." msgstr "" #: ../dashboard_manage_flavors.rst:26 ../dashboard_manage_flavors.rst:78 #: ../dashboard_manage_flavors.rst:91 ../dashboard_manage_flavors.rst:154 msgid "" "In the :guilabel:`Admin` tab, open the :guilabel:`System` tab and click the :" "guilabel:`Flavors` category." msgstr "" #: ../dashboard_manage_flavors.rst:28 ../dashboard_manage_flavors.rst:71 msgid "Click :guilabel:`Create Flavor`." msgstr "" #: ../dashboard_manage_flavors.rst:29 msgid "" "In the :guilabel:`Create Flavor` window, enter or select the parameters for " "the flavor in the :guilabel:`Flavor Information` tab." msgstr "" #: ../dashboard_manage_flavors.rst:34 msgid "**Dashboard — Create Flavor**" msgstr "" #: ../dashboard_manage_flavors.rst:37 msgid "**Name**" msgstr "" #: ../dashboard_manage_flavors.rst:37 msgid "Enter the flavor name." 
msgstr "" #: ../dashboard_manage_flavors.rst:38 msgid "**ID**" msgstr "" #: ../dashboard_manage_flavors.rst:41 msgid "**VCPUs**" msgstr "" #: ../dashboard_manage_flavors.rst:41 msgid "Enter the number of virtual CPUs to use." msgstr "" #: ../dashboard_manage_flavors.rst:43 msgid "**RAM (MB)**" msgstr "" #: ../dashboard_manage_flavors.rst:43 msgid "Enter the amount of RAM to use, in megabytes." msgstr "" #: ../dashboard_manage_flavors.rst:45 msgid "**Root Disk (GB)**" msgstr "" #: ../dashboard_manage_flavors.rst:45 msgid "" "Enter the amount of disk space in gigabytes to use for the root (/) " "partition." msgstr "" #: ../dashboard_manage_flavors.rst:48 msgid "**Ephemeral Disk (GB)**" msgstr "" #: ../dashboard_manage_flavors.rst:48 msgid "" "Enter the amount of disk space in gigabytes to use for the ephemeral " "partition. If unspecified, the value is 0 by default." msgstr "" #: ../dashboard_manage_flavors.rst:53 msgid "" "Ephemeral disks offer machine local disk storage linked to the lifecycle of " "a VM instance. When a VM is terminated, all data on the ephemeral disk is " "lost. Ephemeral disks are not included in any snapshots." msgstr "" #: ../dashboard_manage_flavors.rst:59 msgid "**Swap Disk (MB)**" msgstr "" #: ../dashboard_manage_flavors.rst:59 msgid "" "Enter the amount of swap space (in megabytes) to use. If unspecified, the " "default is 0." msgstr "" #: ../dashboard_manage_flavors.rst:64 msgid "" "In the :guilabel:`Flavor Access` tab, you can control access to the flavor " "by moving projects from the :guilabel:`All Projects` column to the :guilabel:" "`Selected Projects` column." msgstr "" #: ../dashboard_manage_flavors.rst:68 msgid "" "Only projects in the :guilabel:`Selected Projects` column can use the " "flavor. If there are no projects in the right column, all projects can use " "the flavor." 
msgstr "" #: ../dashboard_manage_flavors.rst:74 msgid "Update flavors" msgstr "" #: ../dashboard_manage_flavors.rst:80 msgid "Select the flavor that you want to edit. Click :guilabel:`Edit Flavor`." msgstr "" #: ../dashboard_manage_flavors.rst:82 msgid "" "In the :guilabel:`Edit Flavor` window, you can change the flavor name, " "VCPUs, RAM, root disk, ephemeral disk, and swap disk values." msgstr "" #: ../dashboard_manage_flavors.rst:84 ../dashboard_manage_flavors.rst:98 msgid "Click :guilabel:`Save`." msgstr "" #: ../dashboard_manage_flavors.rst:87 msgid "Update Metadata" msgstr "" #: ../dashboard_manage_flavors.rst:93 msgid "" "Select the flavor that you want to update. In the drop-down list, click :" "guilabel:`Update Metadata` or click :guilabel:`No` or :guilabel:`Yes` in " "the :guilabel:`Metadata` column." msgstr "" #: ../dashboard_manage_flavors.rst:96 msgid "" "In the :guilabel:`Update Flavor Metadata` window, you can customize some " "metadata keys, then add it to this flavor and set them values." 
msgstr "" #: ../dashboard_manage_flavors.rst:100 msgid "**Optional metadata keys**" msgstr "" #: ../dashboard_manage_flavors.rst:103 msgid "quota:cpu_shares" msgstr "" #: ../dashboard_manage_flavors.rst:105 msgid "**CPU limits**" msgstr "" #: ../dashboard_manage_flavors.rst:105 msgid "quota:cpu_period" msgstr "" #: ../dashboard_manage_flavors.rst:107 msgid "quota:cpu_limit" msgstr "" #: ../dashboard_manage_flavors.rst:109 msgid "quota:cpu_reservation" msgstr "" #: ../dashboard_manage_flavors.rst:111 msgid "quota:cpu_quota" msgstr "" #: ../dashboard_manage_flavors.rst:113 msgid "quota:disk_read_bytes_sec" msgstr "" #: ../dashboard_manage_flavors.rst:115 msgid "**Disk tuning**" msgstr "" #: ../dashboard_manage_flavors.rst:115 msgid "quota:disk_read_iops_sec" msgstr "" #: ../dashboard_manage_flavors.rst:117 msgid "quota:disk_write_bytes_sec" msgstr "" #: ../dashboard_manage_flavors.rst:119 msgid "quota:disk_write_iops_sec" msgstr "" #: ../dashboard_manage_flavors.rst:121 msgid "quota:disk_total_bytes_sec" msgstr "" #: ../dashboard_manage_flavors.rst:123 msgid "quota:disk_total_iops_sec" msgstr "" #: ../dashboard_manage_flavors.rst:125 msgid "quota:vif_inbound_average" msgstr "" #: ../dashboard_manage_flavors.rst:127 msgid "**Bandwidth I/O**" msgstr "" #: ../dashboard_manage_flavors.rst:127 msgid "quota:vif_inbound_burst" msgstr "" #: ../dashboard_manage_flavors.rst:129 msgid "quota:vif_inbound_peak" msgstr "" #: ../dashboard_manage_flavors.rst:131 msgid "quota:vif_outbound_average" msgstr "" #: ../dashboard_manage_flavors.rst:133 msgid "quota:vif_outbound_burst" msgstr "" #: ../dashboard_manage_flavors.rst:135 msgid "quota:vif_outbound_peak" msgstr "" #: ../dashboard_manage_flavors.rst:137 msgid "**Watchdog behavior**" msgstr "" #: ../dashboard_manage_flavors.rst:137 msgid "hw:watchdog_action" msgstr "" #: ../dashboard_manage_flavors.rst:139 msgid "hw_rng:allowed" msgstr "" #: ../dashboard_manage_flavors.rst:141 msgid "**Random-number generator**" msgstr "" #: 
../dashboard_manage_flavors.rst:141 msgid "hw_rng:rate_bytes" msgstr "" #: ../dashboard_manage_flavors.rst:143 msgid "hw_rng:rate_period" msgstr "" #: ../dashboard_manage_flavors.rst:146 msgid "" "For information about supporting metadata keys, see the :ref:`compute-" "flavors`." msgstr "" #: ../dashboard_manage_flavors.rst:150 msgid "Delete flavors" msgstr "" #: ../dashboard_manage_flavors.rst:156 msgid "Select the flavors that you want to delete." msgstr "" #: ../dashboard_manage_flavors.rst:157 msgid "Click :guilabel:`Delete Flavors`." msgstr "" #: ../dashboard_manage_flavors.rst:158 msgid "" "In the :guilabel:`Confirm Delete Flavors` window, click :guilabel:`Delete " "Flavors` to confirm the deletion. You cannot undo this action." msgstr "" #: ../dashboard_manage_host_aggregates.rst:3 msgid "Create and manage host aggregates" msgstr "" #: ../dashboard_manage_host_aggregates.rst:5 msgid "" "Host aggregates enable administrative users to assign key-value pairs to " "groups of machines." msgstr "" #: ../dashboard_manage_host_aggregates.rst:8 msgid "" "Each node can have multiple aggregates and each aggregate can have multiple " "key-value pairs. You can assign the same key-value pair to multiple " "aggregates." msgstr "" #: ../dashboard_manage_host_aggregates.rst:12 msgid "" "The scheduler uses this information to make scheduling decisions. For " "information, see `Scheduling `__." msgstr "" #: ../dashboard_manage_host_aggregates.rst:17 msgid "To create a host aggregate" msgstr "" #: ../dashboard_manage_host_aggregates.rst:22 #: ../dashboard_manage_host_aggregates.rst:59 msgid "" "On the :guilabel:`Admin` tab, open the :guilabel:`System` tab and click the :" "guilabel:`Host Aggregates` category." msgstr "" #: ../dashboard_manage_host_aggregates.rst:25 msgid "Click :guilabel:`Create Host Aggregate`." 
msgstr "" #: ../dashboard_manage_host_aggregates.rst:27 msgid "" "In the :guilabel:`Create Host Aggregate` dialog box, enter or select the " "following values on the :guilabel:`Host Aggregate Information` tab:" msgstr "" #: ../dashboard_manage_host_aggregates.rst:30 msgid ":guilabel:`Name`: The host aggregate name." msgstr "" #: ../dashboard_manage_host_aggregates.rst:32 msgid "" ":guilabel:`Availability Zone`: The cloud provider defines the default " "availability zone, such as ``us-west``, ``apac-south``, or ``nova``. You can " "target the host aggregate, as follows:" msgstr "" #: ../dashboard_manage_host_aggregates.rst:36 msgid "" "When the host aggregate is exposed as an availability zone, select the " "availability zone when you launch an instance." msgstr "" #: ../dashboard_manage_host_aggregates.rst:39 msgid "" "When the host aggregate is not exposed as an availability zone, select a " "flavor and its extra specs to target the host aggregate." msgstr "" #: ../dashboard_manage_host_aggregates.rst:43 msgid "" "Assign hosts to the aggregate using the :guilabel:`Manage Hosts within " "Aggregate` tab in the same dialog box." msgstr "" #: ../dashboard_manage_host_aggregates.rst:46 msgid "" "To assign a host to the aggregate, click **+** for the host. The host moves " "from the :guilabel:`All available hosts` list to the :guilabel:`Selected " "hosts` list." msgstr "" #: ../dashboard_manage_host_aggregates.rst:50 msgid "" "You can add one host to one or more aggregates. To add a host to an existing " "aggregate, edit the aggregate." msgstr "" #: ../dashboard_manage_host_aggregates.rst:54 msgid "To manage host aggregates" msgstr "" #: ../dashboard_manage_host_aggregates.rst:56 msgid "" "Select the :guilabel:`admin` project from the drop-down list at the top of " "the page." msgstr "" #: ../dashboard_manage_host_aggregates.rst:62 msgid "" "To edit host aggregates, select the host aggregate that you want to edit. " "Click :guilabel:`Edit Host Aggregate`." 
msgstr "" #: ../dashboard_manage_host_aggregates.rst:65 msgid "" "In the :guilabel:`Edit Host Aggregate` dialog box, you can change the name " "and availability zone for the aggregate." msgstr "" #: ../dashboard_manage_host_aggregates.rst:68 msgid "" "To manage hosts, locate the host aggregate that you want to edit in the " "table. Click :guilabel:`More` and select :guilabel:`Manage Hosts`." msgstr "" #: ../dashboard_manage_host_aggregates.rst:71 msgid "" "In the :guilabel:`Add/Remove Hosts to Aggregate` dialog box, click **+** to " "assign a host to an aggregate. Click **-** to remove a host that is assigned " "to an aggregate." msgstr "" #: ../dashboard_manage_host_aggregates.rst:75 msgid "" "To delete host aggregates, locate the host aggregate that you want to edit " "in the table. Click :guilabel:`More` and select :guilabel:`Delete Host " "Aggregate`." msgstr "" #: ../dashboard_manage_images.rst:3 msgid "Create and manage images" msgstr "" #: ../dashboard_manage_images.rst:5 msgid "" "As an administrative user, you can create and manage images for the projects " "to which you belong. You can also create and manage images for users in all " "projects to which you have access." msgstr "" #: ../dashboard_manage_images.rst:10 msgid "" "To create and manage images in specified projects as an end user, see the " "`upload and manage images with Dashboard in OpenStack End User Guide `_ and `manage " "images with CLI in OpenStack End User Guide `_ ." msgstr "" #: ../dashboard_manage_images.rst:17 msgid "" "To create and manage images as an administrator for other users, use the " "following procedures." msgstr "" #: ../dashboard_manage_images.rst:21 msgid "Create images" msgstr "" #: ../dashboard_manage_images.rst:23 msgid "" "For details about image creation, see the `Virtual Machine Image Guide " "`_." msgstr "" #: ../dashboard_manage_images.rst:28 msgid "" "On the :guilabel:`Admin` tab, open the :guilabel:`System` tab and click the :" "guilabel:`Images` category. 
The images that you can administer for cloud " "users appear on this page." msgstr "" #: ../dashboard_manage_images.rst:31 msgid "" "Click :guilabel:`Create Image`, which opens the :guilabel:`Create An Image` " "window." msgstr "" #: ../dashboard_manage_images.rst:36 msgid "**Figure Dashboard — Create Image**" msgstr "" #: ../dashboard_manage_images.rst:38 msgid "" "In the :guilabel:`Create An Image` window, enter or select the following " "values:" msgstr "" #: ../dashboard_manage_images.rst:42 msgid ":guilabel:`Name`" msgstr "" #: ../dashboard_manage_images.rst:42 msgid "Enter a name for the image." msgstr "" #: ../dashboard_manage_images.rst:44 msgid ":guilabel:`Description`" msgstr "" #: ../dashboard_manage_images.rst:44 msgid "Enter a brief description of the image." msgstr "" #: ../dashboard_manage_images.rst:47 msgid ":guilabel:`Image Source`" msgstr "" #: ../dashboard_manage_images.rst:47 msgid "" "Choose the image source from the dropdown list. Your choices are :guilabel:" "`Image Location` and :guilabel:`Image File`." msgstr "" #: ../dashboard_manage_images.rst:52 msgid ":guilabel:`Image File` or :guilabel:`Image Location`" msgstr "" #: ../dashboard_manage_images.rst:52 msgid "" "Based on your selection, there is an :guilabel:`Image File` or :guilabel:" "`Image Location` field. You can include the location URL or browse for the " "image file on your file system and add it." msgstr "" #: ../dashboard_manage_images.rst:60 msgid ":guilabel:`Kernel`" msgstr "" #: ../dashboard_manage_images.rst:60 msgid "Select the kernel to boot an AMI-style image." msgstr "" #: ../dashboard_manage_images.rst:63 msgid ":guilabel:`Ramdisk`" msgstr "" #: ../dashboard_manage_images.rst:63 msgid "Select the ramdisk to boot an AMI-style image." msgstr "" #: ../dashboard_manage_images.rst:66 msgid ":guilabel:`Format`" msgstr "" #: ../dashboard_manage_images.rst:66 msgid "Select the image format." 
msgstr "" #: ../dashboard_manage_images.rst:68 msgid ":guilabel:`Architecture`" msgstr "" #: ../dashboard_manage_images.rst:68 msgid "" "Specify the architecture. For example, ``i386`` for a 32-bit architecture or " "``x86_64`` for a 64-bit architecture." msgstr "" #: ../dashboard_manage_images.rst:73 msgid ":guilabel:`Minimum Disk (GB)`" msgstr "" #: ../dashboard_manage_images.rst:73 ../dashboard_manage_images.rst:75 msgid "Leave this field empty." msgstr "" #: ../dashboard_manage_images.rst:75 msgid ":guilabel:`Minimum RAM (MB)`" msgstr "" #: ../dashboard_manage_images.rst:77 msgid ":guilabel:`Copy Data`" msgstr "" #: ../dashboard_manage_images.rst:77 msgid "Specify this option to copy image data to the Image service." msgstr "" #: ../dashboard_manage_images.rst:80 msgid ":guilabel:`Public`" msgstr "" #: ../dashboard_manage_images.rst:80 msgid "Select this option to make the image public to all users." msgstr "" #: ../dashboard_manage_images.rst:83 msgid ":guilabel:`Protected`" msgstr "" #: ../dashboard_manage_images.rst:83 msgid "" "Select this option to ensure that only users with permissions can delete it." msgstr "" #: ../dashboard_manage_images.rst:88 msgid "Click :guilabel:`Create Image`." msgstr "" #: ../dashboard_manage_images.rst:90 msgid "" "The image is queued to be uploaded. It might take several minutes before the " "status changes from ``Queued`` to ``Active``." msgstr "" #: ../dashboard_manage_images.rst:94 msgid "Update images" msgstr "" #: ../dashboard_manage_images.rst:98 msgid "" "On the :guilabel:`Admin` tab, open the :guilabel:`System` tab and click the :" "guilabel:`Images` category." msgstr "" #: ../dashboard_manage_images.rst:100 msgid "Select the images that you want to edit. Click :guilabel:`Edit Image`." msgstr "" #: ../dashboard_manage_images.rst:101 msgid "In the :guilabel:`Update Image` window, you can change the image name." 
msgstr "" #: ../dashboard_manage_images.rst:103 msgid "" "Select the :guilabel:`Public` check box to make the image public. Clear this " "check box to make the image private. You cannot change the :guilabel:`Kernel " "ID`, :guilabel:`Ramdisk ID`, or :guilabel:`Architecture` attributes for an " "image." msgstr "" #: ../dashboard_manage_images.rst:107 msgid "Click :guilabel:`Update Image`." msgstr "" #: ../dashboard_manage_images.rst:110 msgid "Delete images" msgstr "" #: ../dashboard_manage_images.rst:114 msgid "" "On the :guilabel:`Admin` tab, open the :guilabel:`System` tab and click the :" "guilabel:`Images` category." msgstr "" #: ../dashboard_manage_images.rst:116 msgid "Select the images that you want to delete." msgstr "" #: ../dashboard_manage_images.rst:117 msgid "Click :guilabel:`Delete Images`." msgstr "" #: ../dashboard_manage_images.rst:118 msgid "" "In the :guilabel:`Confirm Delete Images` window, click :guilabel:`Delete " "Images` to confirm the deletion." msgstr "" #: ../dashboard_manage_instances.rst:3 msgid "Manage instances" msgstr "" #: ../dashboard_manage_instances.rst:5 msgid "" "As an administrative user, you can manage instances for users in various " "projects. You can view, terminate, edit, perform a soft or hard reboot, " "create a snapshot from, and migrate instances. You can also view the logs " "for instances or launch a VNC console for an instance." msgstr "" #: ../dashboard_manage_instances.rst:10 msgid "" "For information about using the Dashboard to launch instances as an end " "user, see the `OpenStack End User Guide `__." msgstr "" #: ../dashboard_manage_instances.rst:14 msgid "Create instance snapshots" msgstr "" #: ../dashboard_manage_instances.rst:19 ../dashboard_manage_instances.rst:41 msgid "" "On the :guilabel:`Admin` tab, open the :guilabel:`System` tab and click the :" "guilabel:`Instances` category." msgstr "" #: ../dashboard_manage_instances.rst:22 msgid "" "Select an instance to create a snapshot from it. 
From the :guilabel:" "`Actions` drop-down list, select :guilabel:`Create Snapshot`." msgstr "" #: ../dashboard_manage_instances.rst:25 msgid "" "In the :guilabel:`Create Snapshot` window, enter a name for the snapshot." msgstr "" #: ../dashboard_manage_instances.rst:27 msgid "" "Click :guilabel:`Create Snapshot`. The Dashboard shows the instance snapshot " "in the :guilabel:`Images` category." msgstr "" #: ../dashboard_manage_instances.rst:30 msgid "" "To launch an instance from the snapshot, select the snapshot and click :" "guilabel:`Launch Instance`. For information about launching instances, see " "the `OpenStack End User Guide `__." msgstr "" #: ../dashboard_manage_instances.rst:36 msgid "Control the state of an instance" msgstr "" #: ../dashboard_manage_instances.rst:44 msgid "Select the instance for which you want to change the state." msgstr "" #: ../dashboard_manage_instances.rst:46 msgid "" "From the drop-down list in the :guilabel:`Actions` column, select the state." msgstr "" #: ../dashboard_manage_instances.rst:49 msgid "" "Depending on the current state of the instance, you can perform various " "actions on the instance. For example, pause, un-pause, suspend, resume, soft " "or hard reboot, or terminate (actions in red are dangerous)." msgstr "" #: ../dashboard_manage_instances.rst:56 msgid "**Figure Dashboard — Instance Actions**" msgstr "" #: ../dashboard_manage_instances.rst:60 msgid "Track usage" msgstr "" #: ../dashboard_manage_instances.rst:62 msgid "" "Use the :guilabel:`Overview` category to track usage of instances for each " "project." msgstr "" #: ../dashboard_manage_instances.rst:65 msgid "" "You can track costs per month by showing meters like number of VCPUs, disks, " "RAM, and uptime of all your instances." msgstr "" #: ../dashboard_manage_instances.rst:71 msgid "" "On the :guilabel:`Admin` tab, open the :guilabel:`System` tab and click the :" "guilabel:`Overview` category." 
msgstr "" #: ../dashboard_manage_instances.rst:74 msgid "" "Select a month and click :guilabel:`Submit` to query the instance usage for " "that month." msgstr "" #: ../dashboard_manage_instances.rst:77 msgid "Click :guilabel:`Download CSV Summary` to download a CSV summary." msgstr "" #: ../dashboard_manage_resources.rst:3 msgid "View cloud resources" msgstr "" #: ../dashboard_manage_services.rst:3 msgid "View services information" msgstr "" #: ../dashboard_manage_services.rst:5 msgid "" "As an administrative user, you can view information for OpenStack services." msgstr "" #: ../dashboard_manage_services.rst:10 msgid "" "On the :guilabel:`Admin` tab, open the :guilabel:`System` tab and click the :" "guilabel:`System Information` category." msgstr "" #: ../dashboard_manage_services.rst:13 msgid "View the following information on these tabs:" msgstr "" #: ../dashboard_manage_services.rst:15 msgid "" ":guilabel:`Services`: Displays the internal name and the public OpenStack " "name for each service, the host on which the service runs, and whether or " "not the service is enabled." msgstr "" #: ../dashboard_manage_services.rst:20 msgid "" ":guilabel:`Compute Services`: Displays information specific to the Compute " "service. Both host and zone are listed for each service, as well as its " "activation status." msgstr "" #: ../dashboard_manage_services.rst:25 msgid "" ":guilabel:`Block Storage Services`: Displays information specific to the " "Block Storage service. Both host and zone are listed for each service, as " "well as its activation status." msgstr "" #: ../dashboard_manage_services.rst:30 msgid "" ":guilabel:`Network Agents`: Displays the network agents active within the " "cluster, such as L3 and DHCP agents, and the status of each agent." msgstr "" #: ../dashboard_manage_services.rst:34 msgid "" ":guilabel:`Orchestration Services`: Displays information specific to the " "Orchestration service. 
The name, engine ID, host, and topic are listed for each " "service, as well as its activation status." msgstr "" #: ../dashboard_manage_shares.rst:3 msgid "Manage shares and share types" msgstr "" #: ../dashboard_manage_shares.rst:5 msgid "" "Shares are file storage that instances can access. Users can allow or deny a " "running instance to have access to a share at any time. For information " "about using the Dashboard to create and manage shares as an end user, see " "the `OpenStack End User Guide `_." msgstr "" #: ../dashboard_manage_shares.rst:11 msgid "" "As an administrative user, you can manage shares and share types for users " "in various projects. You can create and delete share types, and view or " "delete shares." msgstr "" #: ../dashboard_manage_shares.rst:18 msgid "Create a share type" msgstr "" #: ../dashboard_manage_shares.rst:20 ../dashboard_manage_shares.rst:46 #: ../dashboard_manage_shares.rst:73 ../dashboard_manage_shares.rst:94 #: ../dashboard_manage_shares.rst:114 ../dashboard_manage_shares.rst:134 msgid "" "Log in to the Dashboard and choose the :guilabel:`admin` project from the " "drop-down list." msgstr "" #: ../dashboard_manage_shares.rst:23 ../dashboard_manage_shares.rst:49 #: ../dashboard_manage_shares.rst:76 ../dashboard_manage_shares.rst:97 msgid "" "On the :guilabel:`Admin` tab, open the :guilabel:`System` tab and click the :" "guilabel:`Shares` category." msgstr "" #: ../dashboard_manage_shares.rst:26 msgid "" "Click the :guilabel:`Share Types` tab, and click the :guilabel:`Create Share " "Type` button. In the :guilabel:`Create Share Type` window, enter or select " "the following values." msgstr "" #: ../dashboard_manage_shares.rst:31 msgid ":guilabel:`Name`: Enter a name for the share type." msgstr "" #: ../dashboard_manage_shares.rst:33 msgid ":guilabel:`Driver handles share servers`: Choose True or False." msgstr "" #: ../dashboard_manage_shares.rst:35 msgid ":guilabel:`Extra specs`: To add extra specs, use ``key=value``."
msgstr "" #: ../dashboard_manage_shares.rst:37 msgid "Click the :guilabel:`Create Share Type` button to confirm your changes." msgstr "" # #-#-#-#-# dashboard_manage_shares.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# dashboard_manage_volumes.pot (Administrator Guide 0.9) #-#-#-#-# #: ../dashboard_manage_shares.rst:41 ../dashboard_manage_shares.rst:66 #: ../dashboard_manage_shares.rst:89 ../dashboard_manage_shares.rst:109 #: ../dashboard_manage_shares.rst:129 ../dashboard_manage_shares.rst:149 #: ../dashboard_manage_volumes.rst:35 ../dashboard_manage_volumes.rst:138 #: ../dashboard_manage_volumes.rst:161 msgid "A message indicates whether the action succeeded." msgstr "" #: ../dashboard_manage_shares.rst:44 msgid "Update share type" msgstr "" #: ../dashboard_manage_shares.rst:52 msgid "" "Click the :guilabel:`Share Types` tab, and select the share type that you " "want to update." msgstr "" #: ../dashboard_manage_shares.rst:55 msgid "Select :guilabel:`Update Share Type` from the :guilabel:`Actions` column." msgstr "" #: ../dashboard_manage_shares.rst:57 msgid "In the :guilabel:`Update Share Type` window, update extra specs." msgstr "" #: ../dashboard_manage_shares.rst:59 msgid "" ":guilabel:`Extra specs`: To add extra specs, use ``key=value``. To unset " "extra specs, use ``key``." msgstr "" #: ../dashboard_manage_shares.rst:62 msgid "Click the :guilabel:`Update Share Type` button to confirm your changes." msgstr "" #: ../dashboard_manage_shares.rst:69 msgid "Delete share types" msgstr "" #: ../dashboard_manage_shares.rst:71 msgid "When you delete a share type, shares of that type are not deleted." msgstr "" #: ../dashboard_manage_shares.rst:79 msgid "" "Click the :guilabel:`Share Types` tab, and select the share type or types " "that you want to delete." msgstr "" #: ../dashboard_manage_shares.rst:82 msgid "Click the :guilabel:`Delete Share Types` button."
msgstr "" #: ../dashboard_manage_shares.rst:84 msgid "" "In the :guilabel:`Confirm Delete Share Types` window, click the :guilabel:" "`Delete Share Types` button to confirm the action." msgstr "" #: ../dashboard_manage_shares.rst:92 msgid "Delete shares" msgstr "" #: ../dashboard_manage_shares.rst:100 msgid "Select the share or shares that you want to delete." msgstr "" #: ../dashboard_manage_shares.rst:102 msgid "Click the :guilabel:`Delete Shares` button." msgstr "" #: ../dashboard_manage_shares.rst:104 msgid "" "In the :guilabel:`Confirm Delete Shares` window, click the :guilabel:`Delete " "Shares` button to confirm the action." msgstr "" #: ../dashboard_manage_shares.rst:112 msgid "Delete share server" msgstr "" #: ../dashboard_manage_shares.rst:117 msgid "" "On the :guilabel:`Admin` tab, open the :guilabel:`System` tab and click the :" "guilabel:`Share Servers` category." msgstr "" #: ../dashboard_manage_shares.rst:120 msgid "Select the share server that you want to delete." msgstr "" #: ../dashboard_manage_shares.rst:122 msgid "Click the :guilabel:`Delete Share Server` button." msgstr "" #: ../dashboard_manage_shares.rst:124 msgid "" "In the :guilabel:`Confirm Delete Share Server` window, click the :guilabel:" "`Delete Share Server` button to confirm the action." msgstr "" #: ../dashboard_manage_shares.rst:132 msgid "Delete share networks" msgstr "" #: ../dashboard_manage_shares.rst:137 msgid "" "On the :guilabel:`Admin` tab, open the :guilabel:`System` tab and click the :" "guilabel:`Share Networks` category." msgstr "" #: ../dashboard_manage_shares.rst:140 msgid "Select the share network or share networks that you want to delete." msgstr "" #: ../dashboard_manage_shares.rst:142 msgid "Click the :guilabel:`Delete Share Networks` button." msgstr "" #: ../dashboard_manage_shares.rst:144 msgid "" "In the :guilabel:`Confirm Delete Share Networks` window, click the :guilabel:" "`Delete Share Networks` button to confirm the action."
msgstr "" #: ../dashboard_manage_volumes.rst:3 msgid "Manage volumes and volume types" msgstr "" #: ../dashboard_manage_volumes.rst:5 msgid "" "Volumes are the Block Storage devices that you attach to instances to enable " "persistent storage. Users can attach a volume to a running instance or " "detach a volume and attach it to another instance at any time. For " "information about using the Dashboard to create and manage volumes as an end " "user, see the `OpenStack End User Guide `_." msgstr "" #: ../dashboard_manage_volumes.rst:11 msgid "" "As an administrative user, you can manage volumes and volume types for users " "in various projects. You can create and delete volume types, and you can " "view and delete volumes. Note that a volume can be encrypted by using the " "steps outlined below." msgstr "" #: ../dashboard_manage_volumes.rst:19 msgid "Create a volume type" msgstr "" #: ../dashboard_manage_volumes.rst:24 ../dashboard_manage_volumes.rst:125 #: ../dashboard_manage_volumes.rst:149 msgid "" "On the :guilabel:`Admin` tab, open the :guilabel:`System` tab and click the :" "guilabel:`Volumes` category." msgstr "" #: ../dashboard_manage_volumes.rst:27 msgid "" "Click the :guilabel:`Volume Types` tab, and click the :guilabel:`Create " "Volume Type` button. In the :guilabel:`Create Volume Type` window, enter a " "name for the volume type." msgstr "" #: ../dashboard_manage_volumes.rst:31 msgid "Click the :guilabel:`Create Volume Type` button to confirm your changes." msgstr "" #: ../dashboard_manage_volumes.rst:38 msgid "Create an encrypted volume type" msgstr "" #: ../dashboard_manage_volumes.rst:40 msgid "" "Create a volume type using the steps above for :ref:`create-a-volume-type`." msgstr "" #: ../dashboard_manage_volumes.rst:42 msgid "" "Click :guilabel:`Create Encryption` in the Actions column of the newly " "created volume type."
msgstr "" #: ../dashboard_manage_volumes.rst:45 msgid "" "Configure the encrypted volume by setting the parameters below from " "available options (see table):" msgstr "" #: ../dashboard_manage_volumes.rst:48 ../dashboard_manage_volumes.rst:71 msgid "Provider" msgstr "" #: ../dashboard_manage_volumes.rst:49 msgid "Specifies the class responsible for configuring the encryption." msgstr "" #: ../dashboard_manage_volumes.rst:51 ../dashboard_manage_volumes.rst:82 msgid "Control Location" msgstr "" #: ../dashboard_manage_volumes.rst:51 msgid "" "Specifies whether the encryption is from the front end (nova) or the back " "end (cinder)." msgstr "" #: ../dashboard_manage_volumes.rst:53 ../dashboard_manage_volumes.rst:98 msgid "Cipher" msgstr "" #: ../dashboard_manage_volumes.rst:54 msgid "Specifies the encryption algorithm." msgstr "" #: ../dashboard_manage_volumes.rst:56 ../dashboard_manage_volumes.rst:105 msgid "Key Size (bits)" msgstr "" #: ../dashboard_manage_volumes.rst:56 msgid "Specifies the encryption key size." msgstr "" #: ../dashboard_manage_volumes.rst:58 msgid "Click :guilabel:`Create Volume Type Encryption`." msgstr "" #: ../dashboard_manage_volumes.rst:62 msgid "**Encryption Options**" msgstr "" #: ../dashboard_manage_volumes.rst:64 msgid "" "The table below provides a few alternatives available for creating encrypted " "volumes." msgstr "" #: ../dashboard_manage_volumes.rst:68 msgid "Comments" msgstr "" #: ../dashboard_manage_volumes.rst:68 msgid "Encryption parameters" msgstr "" #: ../dashboard_manage_volumes.rst:68 msgid "Parameter options" msgstr "" #: ../dashboard_manage_volumes.rst:71 msgid "" "Allows easier import and migration of imported encrypted volumes, and allows " "access key to be changed without re-encrypting the volume" msgstr "" #: ../dashboard_manage_volumes.rst:71 msgid "nova.volume.encryptors. 
luks.LuksEncryptor (Recommended)" msgstr "" #: ../dashboard_manage_volumes.rst:78 msgid "Less disk overhead than LUKS" msgstr "" #: ../dashboard_manage_volumes.rst:78 msgid "nova.volume.encryptors. cryptsetup. CryptsetupEncryptor" msgstr "" #: ../dashboard_manage_volumes.rst:82 msgid "" "The encryption occurs within nova so that the data transmitted over the " "network is encrypted" msgstr "" #: ../dashboard_manage_volumes.rst:82 msgid "front-end (Recommended)" msgstr "" #: ../dashboard_manage_volumes.rst:88 msgid "" "This could be selected if a cinder plug-in supporting an encrypted back-end " "block storage device becomes available in the future. TLS or other network " "encryption would also be needed to protect data as it traverses the network" msgstr "" #: ../dashboard_manage_volumes.rst:88 msgid "back-end" msgstr "" #: ../dashboard_manage_volumes.rst:98 msgid "See NIST reference below to see advantages*" msgstr "" #: ../dashboard_manage_volumes.rst:98 msgid "aes-xts-plain64 (Recommended)" msgstr "" #: ../dashboard_manage_volumes.rst:101 msgid "" "Note: On the command line, type 'cryptsetup benchmark' for additional options" msgstr "" #: ../dashboard_manage_volumes.rst:101 msgid "aes-cbc-essiv" msgstr "" #: ../dashboard_manage_volumes.rst:105 msgid "" "512 (Recommended for aes-xts-plain64. 256 should be used for aes-cbc-essiv)" msgstr "" #: ../dashboard_manage_volumes.rst:105 msgid "" "Using this selection for aes-xts, the underlying key size would only be 256-" "bits*" msgstr "" #: ../dashboard_manage_volumes.rst:110 msgid "256" msgstr "" #: ../dashboard_manage_volumes.rst:110 msgid "" "Using this selection for aes-xts, the underlying key size would only be 128-" "bits*" msgstr "" #: ../dashboard_manage_volumes.rst:115 msgid "" "`*` Source `NIST SP 800-38E `_" msgstr "" #: ../dashboard_manage_volumes.rst:118 msgid "Delete volume types" msgstr "" #: ../dashboard_manage_volumes.rst:120 msgid "When you delete a volume type, volumes of that type are not deleted." 
msgstr "" #: ../dashboard_manage_volumes.rst:128 msgid "" "Click the :guilabel:`Volume Types` tab, and select the volume type or types " "that you want to delete." msgstr "" #: ../dashboard_manage_volumes.rst:131 msgid "Click the :guilabel:`Delete Volume Types` button." msgstr "" #: ../dashboard_manage_volumes.rst:133 msgid "" "In the :guilabel:`Confirm Delete Volume Types` window, click the :guilabel:" "`Delete Volume Types` button to confirm the action." msgstr "" #: ../dashboard_manage_volumes.rst:141 msgid "Delete volumes" msgstr "" #: ../dashboard_manage_volumes.rst:143 msgid "" "When you delete an instance, the data of its attached volumes is not " "destroyed." msgstr "" #: ../dashboard_manage_volumes.rst:152 msgid "Select the volume or volumes that you want to delete." msgstr "" #: ../dashboard_manage_volumes.rst:154 msgid "Click the :guilabel:`Delete Volumes` button." msgstr "" #: ../dashboard_manage_volumes.rst:156 msgid "" "In the :guilabel:`Confirm Delete Volumes` window, click the :guilabel:" "`Delete Volumes` button to confirm the action." msgstr "" #: ../dashboard_sessions.rst:3 msgid "Set up session storage for the Dashboard" msgstr "" #: ../dashboard_sessions.rst:5 msgid "" "The Dashboard uses the `Django sessions framework `__ to handle user session data. However, " "you can use any available session back end. You customize the session back " "end through the ``SESSION_ENGINE`` setting in your ``local_settings.py`` " "file." msgstr "" #: ../dashboard_sessions.rst:11 msgid "" "After you architect and implement the core OpenStack services and other " "required services, and complete the Dashboard configuration steps below, " "users and administrators can use the OpenStack Dashboard. Refer to the " "`OpenStack Dashboard `__ chapter " "of the User Guide for further instructions on logging in to the Dashboard." msgstr "" #: ../dashboard_sessions.rst:19 msgid "" "The following sections describe the pros and cons of each option as it " "pertains to deploying the Dashboard."
msgstr "" #: ../dashboard_sessions.rst:23 msgid "Local memory cache" msgstr "" #: ../dashboard_sessions.rst:25 msgid "" "Local memory storage is the quickest and easiest session back end to set up, " "as it has no external dependencies whatsoever. It has the following " "significant drawbacks:" msgstr "" #: ../dashboard_sessions.rst:29 msgid "No shared storage across processes or workers." msgstr "" #: ../dashboard_sessions.rst:30 msgid "No persistence after a process terminates." msgstr "" #: ../dashboard_sessions.rst:32 msgid "" "The local memory back end is enabled as the default for Horizon solely " "because it has no dependencies. It is not recommended for production use, or " "even for serious development work." msgstr "" #: ../dashboard_sessions.rst:45 msgid "" "You can use applications such as ``Memcached`` or ``Redis`` for external " "caching. These applications offer persistence and shared storage and are " "useful for small-scale deployments and development." msgstr "" #: ../dashboard_sessions.rst:50 msgid "Memcached" msgstr "" #: ../dashboard_sessions.rst:52 msgid "" "Memcached is a high-performance, distributed memory object caching system " "that provides an in-memory key-value store for small chunks of arbitrary " "data." msgstr "" #: ../dashboard_sessions.rst:56 ../dashboard_sessions.rst:77 msgid "Requirements:" msgstr "" #: ../dashboard_sessions.rst:58 msgid "Memcached service running and accessible." msgstr "" #: ../dashboard_sessions.rst:59 msgid "Python module ``python-memcached`` installed." msgstr "" #: ../dashboard_sessions.rst:72 msgid "Redis" msgstr "" #: ../dashboard_sessions.rst:74 msgid "" "Redis is an open source, BSD-licensed, advanced key-value store. It is often " "referred to as a data structure server." msgstr "" #: ../dashboard_sessions.rst:79 msgid "Redis service running and accessible." msgstr "" #: ../dashboard_sessions.rst:80 msgid "Python modules ``redis`` and ``django-redis`` installed."
msgstr "" #: ../dashboard_sessions.rst:96 msgid "Initialize and configure the database" msgstr "" #: ../dashboard_sessions.rst:98 msgid "" "Database-backed sessions are scalable, persistent, and can be made highly " "concurrent and highly available." msgstr "" #: ../dashboard_sessions.rst:101 msgid "" "However, database-backed sessions are one of the slower session back ends " "and incur a high overhead under heavy usage. Proper configuration of your " "database deployment can also be a substantial undertaking and is far beyond " "the scope of this documentation." msgstr "" #: ../dashboard_sessions.rst:106 msgid "Start the MySQL command-line client." msgstr "" #: ../dashboard_sessions.rst:112 msgid "Enter the MySQL root user's password when prompted." msgstr "" #: ../dashboard_sessions.rst:113 msgid "To configure the MySQL database, create the dash database." msgstr "" #: ../dashboard_sessions.rst:119 msgid "" "Create a MySQL user that has full control of the newly created dash " "database. Replace DASH\\_DBPASS with a password for the new " "user." msgstr "" #: ../dashboard_sessions.rst:128 msgid "Enter ``quit`` at the ``mysql>`` prompt to exit MySQL." msgstr "" #: ../dashboard_sessions.rst:130 msgid "In the ``local_settings.py`` file, change these options:" msgstr "" #: ../dashboard_sessions.rst:147 msgid "" "After configuring the ``local_settings.py`` file as shown, you can run the :" "command:`manage.py syncdb` command to populate this newly created database." msgstr "" #: ../dashboard_sessions.rst:155 msgid "The following output is returned:" msgstr "" #: ../dashboard_sessions.rst:164 msgid "" "To avoid a warning when you restart Apache on Ubuntu, create a ``blackhole`` " "directory in the Dashboard directory, as follows." msgstr "" #: ../dashboard_sessions.rst:171 msgid "Restart the Apache service."
msgstr "" #: ../dashboard_sessions.rst:173 msgid "" "On Ubuntu, restart the ``nova-api`` service to ensure that the API server " "can connect to the Dashboard without error." msgstr "" #: ../dashboard_sessions.rst:181 msgid "Cached database" msgstr "" #: ../dashboard_sessions.rst:183 msgid "" "To mitigate the performance issues of database queries, you can use the " "Django ``cached_db`` session back end, which utilizes both your database and " "caching infrastructure to perform write-through caching and efficient " "retrieval." msgstr "" #: ../dashboard_sessions.rst:188 msgid "" "Enable this hybrid setting by configuring both your database and cache, as " "discussed previously. Then, set the following value:" msgstr "" #: ../dashboard_sessions.rst:196 msgid "Cookies" msgstr "" #: ../dashboard_sessions.rst:198 msgid "" "If you use Django 1.4 or later, the ``signed_cookies`` back end avoids " "server load and scaling problems." msgstr "" #: ../dashboard_sessions.rst:201 msgid "" "This back end stores session data in a cookie, which is stored by the user's " "browser. The back end uses a cryptographic signing technique to ensure " "session data is not tampered with during transport. This is not the same as " "encryption; session data is still readable by an attacker." msgstr "" #: ../dashboard_sessions.rst:206 msgid "" "The pros of this engine are that it requires no additional dependencies or " "infrastructure overhead, and it scales indefinitely as long as the quantity " "of session data being stored fits into a normal cookie." msgstr "" #: ../dashboard_sessions.rst:210 msgid "" "The biggest downside is that it places session data into storage on the " "user's machine and transports it over the wire. It also limits the quantity " "of session data that can be stored." msgstr "" #: ../dashboard_sessions.rst:214 msgid "" "See the Django `cookie-based sessions `__ documentation." 
msgstr "" #: ../dashboard_set_quotas.rst:5 msgid "View and manage quotas" msgstr "" #: ../dashboard_set_quotas.rst:16 msgid "" "Typically, you change quotas when a project needs more than ten volumes or 1 " "|nbsp| TB on a compute node." msgstr "" #: ../dashboard_set_quotas.rst:19 msgid "" "Using the Dashboard, you can view default Compute and Block Storage quotas " "for new tenants, as well as update quotas for existing tenants." msgstr "" #: ../dashboard_set_quotas.rst:24 msgid "" "Using the command-line interface, you can manage quotas for the OpenStack " "Compute service, the OpenStack Block Storage service, and the OpenStack " "Networking service (see `OpenStack Administrator Guide `_). Additionally, you can " "update Compute service quotas for tenant users." msgstr "" #: ../dashboard_set_quotas.rst:31 msgid "" "The following table describes the Compute and Block Storage service quotas:" msgstr "" #: ../dashboard_set_quotas.rst:35 msgid "**Quota Descriptions**" msgstr "" #: ../dashboard_set_quotas.rst:38 msgid "Quota Name" msgstr "" # #-#-#-#-# dashboard_set_quotas.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# identity_concepts.pot (Administrator Guide 0.9) #-#-#-#-# #: ../dashboard_set_quotas.rst:38 ../identity_concepts.rst:72 msgid "Service" msgstr "" #: ../dashboard_set_quotas.rst:40 msgid "Gigabytes" msgstr "" #: ../dashboard_set_quotas.rst:43 msgid "Instances" msgstr "" #: ../dashboard_set_quotas.rst:43 msgid "Instances allowed for each project." msgstr "" #: ../dashboard_set_quotas.rst:46 msgid "Injected Files" msgstr "" #: ../dashboard_set_quotas.rst:46 msgid "Injected files allowed for each project." msgstr "" #: ../dashboard_set_quotas.rst:49 msgid "Content bytes allowed for each injected file." msgstr "" #: ../dashboard_set_quotas.rst:49 msgid "Injected File Content Bytes" msgstr "" #: ../dashboard_set_quotas.rst:52 msgid "Keypairs" msgstr "" #: ../dashboard_set_quotas.rst:52 msgid "Number of keypairs." 
msgstr "" #: ../dashboard_set_quotas.rst:54 msgid "Metadata Items" msgstr "" #: ../dashboard_set_quotas.rst:54 msgid "Metadata items allowed for each instance." msgstr "" #: ../dashboard_set_quotas.rst:57 msgid "RAM (MB)" msgstr "" #: ../dashboard_set_quotas.rst:57 msgid "RAM megabytes allowed for each instance." msgstr "" #: ../dashboard_set_quotas.rst:60 msgid "Security Groups" msgstr "" #: ../dashboard_set_quotas.rst:60 msgid "Security groups allowed for each project." msgstr "" #: ../dashboard_set_quotas.rst:63 msgid "Rules allowed for each security group." msgstr "" #: ../dashboard_set_quotas.rst:63 msgid "Security Group Rules" msgstr "" # #-#-#-#-# dashboard_set_quotas.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# shared_file_systems_share_replication.pot (Administrator Guide 0.9) #-#-#-#-# #: ../dashboard_set_quotas.rst:66 #: ../shared_file_systems_share_replication.rst:442 msgid "Snapshots" msgstr "" #: ../dashboard_set_quotas.rst:69 msgid "Instance cores allowed for each project." msgstr "" #: ../dashboard_set_quotas.rst:72 msgid "Volumes" msgstr "" #: ../dashboard_set_quotas.rst:79 msgid "View default project quotas" msgstr "" #: ../dashboard_set_quotas.rst:84 ../dashboard_set_quotas.rst:102 msgid "" "On the :guilabel:`Admin` tab, open the :guilabel:`System` tab and click the :" "guilabel:`Defaults` category." msgstr "" #: ../dashboard_set_quotas.rst:87 msgid "The default quota values are displayed." msgstr "" #: ../dashboard_set_quotas.rst:91 msgid "" "You can sort the table by clicking on either the :guilabel:`Quota Name` or :" "guilabel:`Limit` column headers." msgstr "" #: ../dashboard_set_quotas.rst:97 msgid "Update project quotas" msgstr "" #: ../dashboard_set_quotas.rst:105 ../dashboard_set_quotas.rst:110 msgid "Click the :guilabel:`Update Defaults` button." msgstr "" #: ../dashboard_set_quotas.rst:107 msgid "" "In the :guilabel:`Update Default Quotas` window, you can edit the default " "quota values." 
msgstr "" #: ../dashboard_set_quotas.rst:114 msgid "" "The dashboard does not show all possible project quotas. To view and update " "the quotas for a service, use its command-line client. See `OpenStack " "Administrator Guide `_." msgstr "" #: ../dashboard_view_cloud_resources.rst:3 msgid "View cloud usage statistics" msgstr "" #: ../dashboard_view_cloud_resources.rst:5 msgid "" "The Telemetry service provides user-level usage data for OpenStack-based " "clouds, which can be used for customer billing, system monitoring, or " "alerts. Data can be collected by notifications sent by existing OpenStack " "components (for example, usage events emitted from Compute) or by polling " "the infrastructure (for example, libvirt)." msgstr "" #: ../dashboard_view_cloud_resources.rst:13 msgid "" "You can only view metering statistics on the dashboard (available only to " "administrators). The Telemetry service must be set up and administered " "through the :command:`ceilometer` command-line interface (CLI)." msgstr "" #: ../dashboard_view_cloud_resources.rst:18 msgid "" "For basic administration information, refer to the \"Measure Cloud Resources" "\" chapter in the `OpenStack End User Guide `_." msgstr "" #: ../dashboard_view_cloud_resources.rst:25 msgid "View resource statistics" msgstr "" #: ../dashboard_view_cloud_resources.rst:30 msgid "" "On the :guilabel:`Admin` tab, click the :guilabel:`Resource Usage` category." msgstr "" #: ../dashboard_view_cloud_resources.rst:32 msgid "Click the:" msgstr "" #: ../dashboard_view_cloud_resources.rst:34 msgid "" ":guilabel:`Usage Report` tab to view a usage report per tenant (project) by " "specifying the time period (or even use a calendar to define a date range)." msgstr "" #: ../dashboard_view_cloud_resources.rst:38 msgid "" ":guilabel:`Stats` tab to view a multi-series line chart with user-defined " "meters. 
You group by project, define the value type (min, max, avg, or sum), " "and specify the time period (or even use a calendar to define a date range)." msgstr "" # #-#-#-#-# database.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# #: ../database.rst:5 ../telemetry-data-collection.rst:1114 msgid "Database" msgstr "" #: ../database.rst:7 msgid "The Database service provides database management features." msgstr "" #: ../database.rst:12 msgid "" "The Database service provides scalable and reliable cloud provisioning " "functionality for both relational and non-relational database engines. Users " "can quickly and easily use database features without the burden of handling " "complex administrative tasks. Cloud users and database administrators can " "provision and manage multiple database instances as needed." msgstr "" #: ../database.rst:19 msgid "" "The Database service provides resource isolation at high performance levels, " "and automates complex administrative tasks such as deployment, " "configuration, patching, backups, restores, and monitoring." msgstr "" #: ../database.rst:23 msgid "" "You can modify various cluster characteristics by editing the ``/etc/trove/" "trove.conf`` file. A comprehensive list of the Database service " "configuration options is described in the `Database service `_ chapter in " "the *Configuration Reference*." msgstr "" #: ../database.rst:30 msgid "Create a data store" msgstr "" #: ../database.rst:32 msgid "" "An administrative user can create data stores for a variety of databases." msgstr "" #: ../database.rst:35 msgid "" "This section assumes you do not yet have a MySQL data store, and shows you " "how to create a MySQL data store and populate it with a MySQL 5.5 data store " "version." 
msgstr "" #: ../database.rst:40 msgid "**To create a data store**" msgstr "" #: ../database.rst:42 msgid "**Create a trove image**" msgstr "" #: ../database.rst:44 msgid "" "Create an image for the type of database you want to use, for example, " "MySQL, MongoDB, Cassandra." msgstr "" #: ../database.rst:47 msgid "" "This image must have the trove guest agent installed, and it must have the " "``trove-guestagent.conf`` file configured to connect to your OpenStack " "environment. To configure ``trove-guestagent.conf``, add the following lines " "to ``trove-guestagent.conf`` on the guest instance you are using to build " "your image:" msgstr "" #: ../database.rst:62 msgid "" "This example assumes you have created a MySQL 5.5 image called ``mysql-5.5." "qcow2``." msgstr "" #: ../database.rst:67 msgid "" "If you have a guest image that was created with an OpenStack version before " "Kilo, modify the guest agent init script for the guest image to read the " "configuration files from the directory ``/etc/trove/conf.d``." msgstr "" #: ../database.rst:71 msgid "" "For backwards compatibility with pre-Kilo guest instances, set the " "database service configuration options ``injected_config_location`` to ``/" "etc/trove`` and ``guest_info`` to ``/etc/guest_info``." msgstr "" #: ../database.rst:75 msgid "**Register image with Image service**" msgstr "" #: ../database.rst:77 msgid "You need to register your guest image with the Image service." msgstr "" #: ../database.rst:79 msgid "" "In this example, you use the glance :command:`image-create` command to " "register a ``mysql-5.5.qcow2`` image." msgstr "" #: ../database.rst:107 msgid "**Create the data store**" msgstr "" #: ../database.rst:109 msgid "" "Create the data store that will house the new image. To do this, use the :" "command:`trove-manage` :command:`datastore_update` command."
msgstr "" #: ../database.rst:112 ../database.rst:152 msgid "This example uses the following arguments:" msgstr "" #: ../database.rst:118 ../database.rst:158 msgid "Argument" msgstr "" #: ../database.rst:121 ../database.rst:162 msgid "config file" msgstr "" #: ../database.rst:122 ../database.rst:163 msgid "The configuration file to use." msgstr "" #: ../database.rst:123 ../database.rst:164 msgid ":option:`--config-file=/etc/trove/trove.conf`" msgstr "" #: ../database.rst:125 msgid "Name you want to use for this data store." msgstr "" #: ../database.rst:126 ../database.rst:169 ../database.rst:192 msgid "``mysql``" msgstr "" #: ../database.rst:127 msgid "default version" msgstr "" #: ../database.rst:128 msgid "" "You can attach multiple versions/images to a data store. For example, you " "might have a MySQL 5.5 version and a MySQL 5.6 version. You can designate " "one version as the default, which the system uses if a user does not " "explicitly request a specific version." msgstr "" #: ../database.rst:133 ../database.rst:204 msgid "``\"\"``" msgstr "" #: ../database.rst:135 msgid "" "At this point, you do not yet have a default version, so pass in an empty " "string." msgstr "" #: ../database.rst:140 ../database.rst:217 msgid "Example:" msgstr "" #: ../database.rst:146 msgid "**Add a version to the new data store**" msgstr "" #: ../database.rst:148 msgid "" "Now that you have a MySQL data store, you can add a version to it, using " "the :command:`trove-manage` :command:`datastore_version_update` command. The " "version indicates which guest image to use." msgstr "" #: ../database.rst:166 msgid "data store" msgstr "" #: ../database.rst:167 msgid "" "The name of the data store you just created via ``trove-manage`` :command:" "`datastore_update`." msgstr "" #: ../database.rst:171 msgid "version name" msgstr "" #: ../database.rst:172 msgid "The name of the version you are adding to the data store." 
msgstr "" #: ../database.rst:173 msgid "``mysql-5.5``" msgstr "" #: ../database.rst:175 msgid "data store manager" msgstr "" #: ../database.rst:176 msgid "" "Which data store manager to use for this version. Typically, the data store " "manager is identified by one of the following strings, depending on the " "database:" msgstr "" #: ../database.rst:180 msgid "cassandra" msgstr "" #: ../database.rst:181 msgid "couchbase" msgstr "" #: ../database.rst:182 msgid "couchdb" msgstr "" #: ../database.rst:183 msgid "db2" msgstr "" #: ../database.rst:184 msgid "mariadb" msgstr "" #: ../database.rst:185 msgid "mongodb" msgstr "" #: ../database.rst:186 msgid "mysql" msgstr "" #: ../database.rst:187 msgid "percona" msgstr "" #: ../database.rst:188 msgid "postgresql" msgstr "" #: ../database.rst:189 msgid "pxc" msgstr "" #: ../database.rst:190 msgid "redis" msgstr "" #: ../database.rst:191 msgid "vertica" msgstr "" #: ../database.rst:194 msgid "glance ID" msgstr "" #: ../database.rst:195 msgid "" "The ID of the guest image you just added to the Image service. You can get " "this ID by using the glance :command:`image-show` IMAGE_NAME command." msgstr "" #: ../database.rst:198 msgid "bb75f870-0c33-4907-8467-1367f8cb15b6" msgstr "" #: ../database.rst:200 msgid "packages" msgstr "" #: ../database.rst:201 msgid "" "If you want to put additional packages on each guest that you create with " "this data store version, you can list the package names here." msgstr "" #: ../database.rst:206 msgid "" "In this example, the guest image already contains all the required packages, " "so leave this argument empty." 
msgstr "" # #-#-#-#-# database.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# ts-eql-volume-size.pot (Administrator Guide 0.9) #-#-#-#-# #: ../database.rst:209 ../ts-eql-volume-size.rst:112 msgid "active" msgstr "" #: ../database.rst:211 msgid "Set this to either 1 or 0:" msgstr "" #: ../database.rst:211 msgid "``1`` = active" msgstr "" #: ../database.rst:212 msgid "``0`` = disabled" msgstr "" #: ../database.rst:223 msgid "" "**Optional.** Set your new version as the default version. To do this, use " "the :command:`trove-manage` :command:`datastore_update` command again, this " "time specifying the version you just created." msgstr "" #: ../database.rst:231 msgid "**Load validation rules for configuration groups**" msgstr "" #: ../database.rst:235 msgid "**Applies only to MySQL and Percona data stores**" msgstr "" #: ../database.rst:237 msgid "" "If you just created a MySQL or Percona data store, then you need to load the " "appropriate validation rules, as described in this step." msgstr "" #: ../database.rst:240 msgid "If you just created a different data store, skip this step." msgstr "" #: ../database.rst:242 msgid "" "**Background.** You can manage database configuration tasks by using " "configuration groups. Configuration groups let you set configuration " "parameters, in bulk, on one or more databases." msgstr "" #: ../database.rst:246 msgid "" "When you set up a configuration group using the trove :command:" "`configuration-create` command, this command compares the configuration " "values you are setting against a list of valid configuration values that are " "stored in the ``validation-rules.json`` file." 
msgstr "" #: ../database.rst:255 msgid "Operating System" msgstr "" #: ../database.rst:256 msgid "Location of :file:`validation-rules.json`" msgstr "" # #-#-#-#-# database.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_adv-features.pot (Administrator Guide 0.9) #-#-#-#-# #: ../database.rst:257 ../networking_adv-features.rst:630 msgid "Notes" msgstr "" #: ../database.rst:259 msgid "Ubuntu 14.04" msgstr "" #: ../database.rst:260 msgid ":file:`/usr/lib/python2.7/dist-packages/trove/templates/DATASTORE_NAME`" msgstr "" #: ../database.rst:261 ../database.rst:267 msgid "" "DATASTORE_NAME is the name of either the MySQL data store or the Percona " "data store. This is typically either ``mysql`` or ``percona``." msgstr "" #: ../database.rst:265 msgid "RHEL 7, CentOS 7, Fedora 20, and Fedora 21" msgstr "" #: ../database.rst:266 msgid ":file:`/usr/lib/python2.7/site-packages/trove/templates/DATASTORE_NAME`" msgstr "" #: ../database.rst:272 msgid "" "Therefore, as part of creating a data store, you need to load the " "``validation-rules.json`` file, using the :command:`trove-manage` :command:" "`db_load_datastore_config_parameters` command. 
This command takes the " "following arguments:" msgstr "" #: ../database.rst:277 msgid "Data store name" msgstr "" #: ../database.rst:278 msgid "Data store version" msgstr "" #: ../database.rst:279 msgid "Full path to the ``validation-rules.json`` file" msgstr "" #: ../database.rst:283 msgid "" "This example loads the ``validation-rules.json`` file for a MySQL database " "on Ubuntu 14.04:" msgstr "" #: ../database.rst:290 msgid "**Validate data store**" msgstr "" #: ../database.rst:292 msgid "" "To validate your new data store and version, start by listing the data " "stores on your system:" msgstr "" #: ../database.rst:305 msgid "" "Take the ID of the MySQL data store and pass it in with the :command:" "`datastore-version-list` command:" msgstr "" #: ../database.rst:318 msgid "Data store classifications" msgstr "" #: ../database.rst:320 msgid "" "The Database service supports a variety of both relational and non-" "relational database engines, but to a varying degree of support for each :" "term:`data store`. The Database service project has defined several " "classifications that indicate the quality of support for each data store. " "Data stores also implement different extensions. An extension is called a :" "term:`strategy` and is classified similarly to data stores." msgstr "" #: ../database.rst:328 msgid "Valid classifications for a data store and a strategy are:" msgstr "" #: ../database.rst:330 ../database.rst:343 ../database.rst:411 msgid "Experimental" msgstr "" #: ../database.rst:332 ../database.rst:362 msgid "Technical preview" msgstr "" #: ../database.rst:334 ../database.rst:387 ../database.rst:407 msgid "Stable" msgstr "" #: ../database.rst:336 msgid "" "Each classification builds on the previous one. This means that a data store " "that meets the ``technical preview`` requirements must also meet all the " "requirements for ``experimental``, and a data store that meets the " "``stable`` requirements must also meet all the requirements for ``technical " "preview``." msgstr "" #: ../database.rst:341 msgid "**Requirements**" msgstr "" #: ../database.rst:345 msgid "" "A data store is considered to be ``experimental`` if it meets these criteria:" msgstr "" #: ../database.rst:347 msgid "" "It implements a basic subset of the Database service API including " "``create`` and ``delete``." msgstr "" #: ../database.rst:350 msgid "It has guest agent elements that allow guest agent creation." msgstr "" #: ../database.rst:352 msgid "It has a definition of supported operating systems." msgstr "" #: ../database.rst:354 ../database.rst:379 ../database.rst:393 msgid "" "It meets the other `Documented Technical Requirements `_." msgstr "" #: ../database.rst:357 msgid "A strategy is considered ``experimental`` if:" msgstr "" #: ../database.rst:359 msgid "" "It meets the `Documented Technical Requirements `_." msgstr "" #: ../database.rst:364 msgid "" "A data store is considered to be a ``technical preview`` if it meets the " "requirements of ``experimental`` and further:" msgstr "" #: ../database.rst:367 msgid "" "It implements APIs required to plant and start the capabilities of the data " "store as defined in the `Datastore Compatibility Matrix `_." msgstr "" #: ../database.rst:373 msgid "" "It is not required that the data store implements all features like resize, " "backup, replication, or clustering to meet this classification." msgstr "" #: ../database.rst:376 msgid "" "It provides a mechanism for building a guest image that allows you to " "exercise its capabilities." msgstr "" #: ../database.rst:384 msgid "A strategy is not normally considered to be ``technical preview``."
msgstr "" #: ../database.rst:389 msgid "A data store or a strategy is considered ``stable`` if:" msgstr "" #: ../database.rst:391 msgid "It meets the requirements of ``technical preview``." msgstr "" #: ../database.rst:396 msgid "**Initial Classifications**" msgstr "" #: ../database.rst:398 msgid "" "The following table shows the current classification assignments for the " "different data stores." msgstr "" #: ../database.rst:405 msgid "Classification" msgstr "" #: ../database.rst:406 msgid "Data store" msgstr "" #: ../database.rst:408 msgid "MySQL" msgstr "" #: ../database.rst:409 msgid "Technical Preview" msgstr "" #: ../database.rst:410 msgid "Cassandra, MongoDB" msgstr "" #: ../database.rst:412 msgid "All others" msgstr "" #: ../database.rst:415 msgid "Configure a cluster" msgstr "" #: ../database.rst:417 msgid "" "An administrative user can configure various characteristics of a MongoDB " "cluster." msgstr "" #: ../database.rst:420 msgid "**Query routers and config servers**" msgstr "" #: ../database.rst:422 msgid "" "**Background.** Each cluster includes at least one query router and one " "config server. Query routers and config servers count against your quota. " "When you delete a cluster, the system deletes the associated query router(s) " "and config server(s)." msgstr "" #: ../database.rst:427 msgid "" "**Configuration.** By default, the system creates one query router and one " "config server per cluster. You can change this by editing the ``/etc/trove/" "trove.conf`` file. 
These settings are in the ``mongodb`` section of the file:" msgstr "" #: ../database.rst:436 msgid "Setting" msgstr "" #: ../database.rst:437 msgid "Valid values are:" msgstr "" #: ../database.rst:439 msgid "num_config_servers_per_cluster" msgstr "" #: ../database.rst:440 ../database.rst:443 msgid "1 or 3" msgstr "" #: ../database.rst:442 msgid "num_query_routers_per_cluster" msgstr "" #: ../identity_auth_token_middleware.rst:2 msgid "Authentication middleware with user name and password" msgstr "" #: ../identity_auth_token_middleware.rst:4 msgid "" "You can also configure Identity authentication middleware using the " "``admin_user`` and ``admin_password`` options." msgstr "" #: ../identity_auth_token_middleware.rst:9 msgid "" "The ``admin_token`` option is deprecated and no longer used for configuring " "auth_token middleware." msgstr "" #: ../identity_auth_token_middleware.rst:12 msgid "" "For services that have a separate paste-deploy ``.ini`` file, you can " "configure the authentication middleware in the ``[keystone_authtoken]`` " "section of the main configuration file, such as ``nova.conf``. In Compute, " "for example, you can remove the middleware parameters from ``api-paste." "ini``, as follows:" msgstr "" #: ../identity_auth_token_middleware.rst:24 msgid "And set the following values in ``nova.conf`` as follows:" msgstr "" #: ../identity_auth_token_middleware.rst:41 msgid "" "The middleware parameters in the paste config take priority. You must remove " "them to use the values in the ``[keystone_authtoken]`` section." msgstr "" #: ../identity_auth_token_middleware.rst:47 #: ../identity_auth_token_middleware.rst:71 msgid "" "Comment out any ``auth_host``, ``auth_port``, and ``auth_protocol`` options " "because the ``identity_uri`` option replaces them." 
msgstr "" #: ../identity_auth_token_middleware.rst:51 msgid "" "This sample paste config filter makes use of the ``admin_user`` and " "``admin_password`` options:" msgstr "" #: ../identity_auth_token_middleware.rst:66 msgid "" "Using this option requires an admin tenant/role relationship. The admin user " "is granted access to the admin role on the admin tenant." msgstr "" #: ../identity_concepts.rst:3 msgid "Identity concepts" msgstr "" #: ../identity_concepts.rst:6 msgid "" "The process of confirming the identity of a user. To confirm an incoming " "request, OpenStack Identity validates a set of credentials users supply. " "Initially, these credentials are a user name and password, or a user name " "and API key. When OpenStack Identity validates user credentials, it issues " "an authentication token. Users provide the token in subsequent requests." msgstr "" #: ../identity_concepts.rst:11 msgid "Authentication" msgstr "" #: ../identity_concepts.rst:14 msgid "" "Data that confirms the identity of the user. For example, user name and " "password, user name and API key, or an authentication token that the " "Identity service provides." msgstr "" #: ../identity_concepts.rst:16 msgid "Credentials" msgstr "" #: ../identity_concepts.rst:19 msgid "" "An Identity service API v3 entity. Domains are a collection of projects and " "users that define administrative boundaries for managing Identity entities. " "Domains can represent an individual, company, or operator-owned space. They " "expose administrative activities directly to system users. Users can be " "granted the administrator role for a domain. A domain administrator can " "create projects, users, and groups in a domain and assign roles to users and " "groups in a domain." msgstr "" #: ../identity_concepts.rst:26 msgid "Domain" msgstr "" #: ../identity_concepts.rst:29 msgid "" "A network-accessible address, usually a URL, through which you can access a " "service. 
If you are using an extension for templates, you can create an " "endpoint template that represents the templates of all consumable services " "that are available across the regions." msgstr "" #: ../identity_concepts.rst:32 msgid "Endpoint" msgstr "" #: ../identity_concepts.rst:35 msgid "" "An Identity service API v3 entity. Groups are a collection of users owned by " "a domain. A group role, granted to a domain or project, applies to all users " "in the group. Adding or removing users to or from a group grants or revokes " "their role and authentication to the associated domain or project." msgstr "" # #-#-#-#-# identity_concepts.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_adv-features.pot (Administrator Guide 0.9) #-#-#-#-# #: ../identity_concepts.rst:39 ../networking_adv-features.rst:627 msgid "Group" msgstr "" #: ../identity_concepts.rst:42 msgid "" "A command-line interface for several OpenStack services including the " "Identity API. For example, a user can run the :command:`openstack service " "create` and :command:`openstack endpoint create` commands to register " "services in their OpenStack installation." msgstr "" #: ../identity_concepts.rst:46 msgid "OpenStackClient" msgstr "" #: ../identity_concepts.rst:49 msgid "" "A container that groups or isolates resources or identity objects. Depending " "on the service operator, a project might map to a customer, account, " "organization, or tenant." msgstr "" #: ../identity_concepts.rst:51 msgid "Project" msgstr "" #: ../identity_concepts.rst:54 msgid "" "An Identity service API v3 entity. Represents a general division in an " "OpenStack deployment. You can associate zero or more sub-regions with a " "region to make a tree-like structured hierarchy. Although a region does not " "have a geographical connotation, a deployment can use a geographical name " "for a region, such as ``us-east``." 
msgstr "" #: ../identity_concepts.rst:58 msgid "Region" msgstr "" #: ../identity_concepts.rst:61 msgid "" "A personality with a defined set of user rights and privileges to perform a " "specific set of operations. The Identity service issues a token to a user " "that includes a list of roles. When a user calls a service, that service " "interprets the user role set, and determines to which operations or " "resources each role grants access." msgstr "" #: ../identity_concepts.rst:66 msgid "Role" msgstr "" #: ../identity_concepts.rst:69 msgid "" "An OpenStack service, such as Compute (nova), Object Storage (swift), or " "Image service (glance), that provides one or more endpoints through which " "users can access resources and perform operations." msgstr "" #: ../identity_concepts.rst:75 msgid "" "An alpha-numeric text string that enables access to OpenStack APIs and " "resources. A token may be revoked at any time and is valid for a finite " "duration. While OpenStack Identity supports token-based authentication in " "this release, it intends to support additional protocols in the future. " "OpenStack Identity is an integration service that does not aspire to be a " "full-fledged identity store and management solution." msgstr "" #: ../identity_concepts.rst:81 msgid "Token" msgstr "" #: ../identity_concepts.rst:84 msgid "" "A digital representation of a person, system, or service that uses OpenStack " "cloud services. The Identity service validates that incoming requests are " "made by the user who claims to be making the call. Users have a login and " "can access resources by using assigned tokens. Users can be directly " "assigned to a particular project and behave as if they are contained in that " "project." 
msgstr "" #: ../identity_concepts.rst:89 msgid "User" msgstr "" #: ../identity_concepts.rst:92 msgid "User management" msgstr "" #: ../identity_concepts.rst:94 msgid "Identity user management examples:" msgstr "" #: ../identity_concepts.rst:96 msgid "Create a user named ``alice``:" msgstr "" #: ../identity_concepts.rst:102 msgid "Create a project named ``acme``:" msgstr "" #: ../identity_concepts.rst:108 msgid "Create a domain named ``emea``:" msgstr "" #: ../identity_concepts.rst:114 msgid "Create a role named ``compute-user``:" msgstr "" #: ../identity_concepts.rst:122 msgid "" "Individual services assign meaning to roles, typically through limiting or " "granting access to users with the role to the operations that the service " "supports. Role access is typically configured in the service's ``policy." "json`` file. For example, to limit Compute access to the ``compute-user`` " "role, edit the Compute service's ``policy.json`` file to require this role " "for Compute operations." msgstr "" #: ../identity_concepts.rst:130 msgid "" "The Identity service assigns a tenant and a role to a user. You might assign " "the ``compute-user`` role to the ``alice`` user in the ``acme`` tenant:" msgstr "" #: ../identity_concepts.rst:138 msgid "" "A user can have different roles in different tenants. For example, Alice " "might also have the ``admin`` role in the ``Cyberdyne`` tenant. A user can " "also have multiple roles in the same tenant." msgstr "" #: ../identity_concepts.rst:142 msgid "" "The ``/etc/[SERVICE_CODENAME]/policy.json`` file controls the tasks that " "users can perform for a given service. For example, the ``/etc/nova/policy." "json`` file specifies the access policy for the Compute service, the ``/etc/" "glance/policy.json`` file specifies the access policy for the Image service, " "and the ``/etc/keystone/policy.json`` file specifies the access policy for " "the Identity service." 
msgstr "" #: ../identity_concepts.rst:150 msgid "" "The default ``policy.json`` files in the Compute, Identity, and Image " "services recognize only the ``admin`` role. Any user with any role in a " "tenant can access all operations that do not require the ``admin`` role." msgstr "" #: ../identity_concepts.rst:155 msgid "" "To restrict users from performing operations in, for example, the Compute " "service, you must create a role in the Identity service and then modify the " "``/etc/nova/policy.json`` file so that this role is required for Compute " "operations." msgstr "" #: ../identity_concepts.rst:160 msgid "" "For example, the following line in the ``/etc/nova/policy.json`` file does " "not restrict which users can create volumes:" msgstr "" #: ../identity_concepts.rst:167 msgid "" "If the user has any role in a tenant, they can create volumes in that tenant." msgstr "" #: ../identity_concepts.rst:170 msgid "" "To restrict the creation of volumes to users who have the ``compute-user`` " "role in a particular tenant, you add ``\"role:compute-user\"``:" msgstr "" #: ../identity_concepts.rst:177 msgid "" "To restrict all Compute service requests to require this role, the resulting " "file looks like:" msgstr "" #: ../identity_concepts.rst:281 msgid "Service management" msgstr "" #: ../identity_concepts.rst:283 msgid "" "The Identity service provides identity, token, catalog, and policy services. " "It consists of:" msgstr "" #: ../identity_concepts.rst:287 msgid "" "Can be run in a WSGI-capable web server such as Apache httpd to provide the " "Identity service. The service and administrative APIs are run as separate " "instances of the WSGI service." msgstr "" #: ../identity_concepts.rst:289 msgid "keystone Web Server Gateway Interface (WSGI) service" msgstr "" #: ../identity_concepts.rst:292 msgid "" "Each has a pluggable back end that allows different ways to use the " "particular service. Most support standard back ends like LDAP or SQL."
msgstr "" #: ../identity_concepts.rst:293 msgid "Identity service functions" msgstr "" #: ../identity_concepts.rst:296 msgid "" "Starts both the service and administrative APIs in a single process. Using " "federation with keystone-all is not supported. keystone-all is deprecated in " "favor of the WSGI service." msgstr "" #: ../identity_concepts.rst:298 msgid "keystone-all" msgstr "" #: ../identity_concepts.rst:300 msgid "" "The Identity service also maintains a user that corresponds to each service, " "such as a user named ``nova`` for the Compute service, and a special " "service tenant called ``service``." msgstr "" #: ../identity_concepts.rst:304 msgid "" "For information about how to create services and endpoints, see the " "`OpenStack Administrator Guide `__." msgstr "" #: ../identity_concepts.rst:309 msgid "Groups" msgstr "" #: ../identity_concepts.rst:311 msgid "" "A group is a collection of users in a domain. Administrators can create " "groups and add users to them. A role can then be assigned to the group, " "rather than to individual users. Groups were introduced with the Identity API " "v3."
msgstr "" #: ../identity_concepts.rst:316 msgid "Identity API V3 provides the following group-related operations:" msgstr "" #: ../identity_concepts.rst:318 msgid "Create a group" msgstr "" #: ../identity_concepts.rst:320 msgid "Delete a group" msgstr "" #: ../identity_concepts.rst:322 msgid "Update a group (change its name or description)" msgstr "" #: ../identity_concepts.rst:324 msgid "Add a user to a group" msgstr "" #: ../identity_concepts.rst:326 msgid "Remove a user from a group" msgstr "" #: ../identity_concepts.rst:328 msgid "List group members" msgstr "" #: ../identity_concepts.rst:330 msgid "List groups for a user" msgstr "" #: ../identity_concepts.rst:332 msgid "Assign a role on a tenant to a group" msgstr "" #: ../identity_concepts.rst:334 msgid "Assign a role on a domain to a group" msgstr "" #: ../identity_concepts.rst:336 msgid "Query role assignments to groups" msgstr "" #: ../identity_concepts.rst:340 msgid "" "The Identity service server might not allow all operations. For example, if " "you use the Identity server with the LDAP Identity back end and group " "updates are disabled, a request to create, delete, or update a group fails." msgstr "" #: ../identity_concepts.rst:345 msgid "Here are a couple of examples:" msgstr "" #: ../identity_concepts.rst:347 msgid "" "Group A is granted Role A on Tenant A. If User A is a member of Group A, " "when User A gets a token scoped to Tenant A, the token also includes Role A." msgstr "" #: ../identity_concepts.rst:351 msgid "" "Group B is granted Role B on Domain B. If User B is a member of Group B, " "when User B gets a token scoped to Domain B, the token also includes Role B." 
msgstr "" #: ../identity_keystone_usage_and_features.rst:3 msgid "Example usage and Identity features" msgstr "" #: ../identity_keystone_usage_and_features.rst:5 msgid "" "The ``keystone`` client is set up to expect commands in the general form of " "``keystone command argument``, followed by flag-like keyword arguments to " "provide additional (often optional) information. For example, the :command:" "`user-list` and :command:`tenant-create` commands can be invoked as follows:" msgstr "" #: ../identity_keystone_usage_and_features.rst:38 msgid "" "You configure logging externally to the rest of Identity. The name of the " "file specifying the logging configuration is set using the ``log_config`` " "option in the ``[DEFAULT]`` section of the ``keystone.conf`` file. To route " "logging through syslog, set ``use_syslog=true`` in the ``[DEFAULT]`` section." msgstr "" #: ../identity_keystone_usage_and_features.rst:44 msgid "" "A sample logging configuration file is available with the project in ``etc/" "logging.conf.sample``. Like other OpenStack projects, Identity uses the " "Python logging module, which provides extensive configuration options that " "let you define the output levels and formats." msgstr "" #: ../identity_keystone_usage_and_features.rst:51 msgid "User CRUD" msgstr "" #: ../identity_keystone_usage_and_features.rst:53 msgid "" "Identity provides a user CRUD (Create, Read, Update, and Delete) filter that " "Administrators can add to the ``public_api`` pipeline. The user CRUD filter " "enables users to use an HTTP PATCH to change their own password. To enable " "this extension you should define a ``user_crud_extension`` filter, insert it " "after the ``*_body`` middleware and before the ``public_service`` " "application in the ``public_api`` WSGI pipeline in ``keystone-paste.ini``. " "For example:" msgstr "" #: ../identity_keystone_usage_and_features.rst:69 msgid "Each user can then change their own password with an HTTP PATCH."
msgstr "" #: ../identity_keystone_usage_and_features.rst:76 msgid "" "In addition to changing their password, all current tokens for the user are " "invalidated." msgstr "" #: ../identity_keystone_usage_and_features.rst:81 msgid "Only use a KVS back end for tokens when testing." msgstr "" #: ../identity_management.rst:5 msgid "Identity management" msgstr "" #: ../identity_management.rst:7 msgid "" "OpenStack Identity, code-named keystone, is the default Identity management " "system for OpenStack. After you install Identity, you configure it through " "the ``/etc/keystone/keystone.conf`` configuration file and, possibly, a " "separate logging configuration file. You initialize data into Identity by " "using the ``keystone`` command-line client." msgstr "" #: ../identity_service_api_protection.rst:3 msgid "Identity API protection with role-based access control (RBAC)" msgstr "" #: ../identity_service_api_protection.rst:5 msgid "" "Like most OpenStack projects, Identity supports the protection of its APIs " "by defining policy rules based on an RBAC approach. Identity stores a " "reference to a policy JSON file in the main Identity configuration file, " "``keystone.conf``. Typically this file is named ``policy.json``, and " "contains the rules for which roles have access to certain actions in defined " "services." msgstr "" #: ../identity_service_api_protection.rst:12 msgid "" "Each Identity API v3 call has a line in the policy file that dictates which " "level of governance of access applies." msgstr "" #: ../identity_service_api_protection.rst:21 msgid "" "``RULE_STATEMENT`` can contain ``RULE_STATEMENT`` or ``MATCH_STATEMENT``." msgstr "" #: ../identity_service_api_protection.rst:24 msgid "" "``MATCH_STATEMENT`` is a set of identifiers that must match between the " "token provided by the caller of the API and the parameters or target " "entities of the API call in question. 
For example:" msgstr "" #: ../identity_service_api_protection.rst:32 msgid "" "Indicates that to create a user, you must have the admin role in your token. " "The ``domain_id`` in your token must match the ``domain_id`` in the user " "object that you are trying to create, which implies this must be a domain-" "scoped token. In other words, you must have the admin role on the domain in " "which you are creating the user, and the token that you use must be scoped " "to that domain." msgstr "" #: ../identity_service_api_protection.rst:40 msgid "Each component of a match statement uses this format:" msgstr "" #: ../identity_service_api_protection.rst:46 msgid "The Identity service expects these attributes:" msgstr "" #: ../identity_service_api_protection.rst:48 msgid "Attributes from token:" msgstr "" # #-#-#-#-# identity_service_api_protection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-data-retrieval.pot (Administrator Guide 0.9) #-#-#-#-# #: ../identity_service_api_protection.rst:50 #: ../telemetry-data-retrieval.rst:157 msgid "``user_id``" msgstr "" #: ../identity_service_api_protection.rst:51 msgid "``domain_id``" msgstr "" # #-#-#-#-# identity_service_api_protection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-data-retrieval.pot (Administrator Guide 0.9) #-#-#-#-# #: ../identity_service_api_protection.rst:52 #: ../telemetry-data-retrieval.rst:153 msgid "``project_id``" msgstr "" #: ../identity_service_api_protection.rst:54 msgid "" "The ``project_id`` attribute requirement depends on the scope, and the list " "of roles you have within that scope." 
msgstr "" #: ../identity_service_api_protection.rst:57 msgid "Attributes related to API call:" msgstr "" #: ../identity_service_api_protection.rst:59 msgid "``user.domain_id``" msgstr "" #: ../identity_service_api_protection.rst:60 msgid "Any parameters passed into the API call" msgstr "" #: ../identity_service_api_protection.rst:61 msgid "Any filters specified in the query string" msgstr "" #: ../identity_service_api_protection.rst:63 msgid "" "You reference attributes of objects passed with an object.attribute syntax " "(such as ``user.domain_id``). The target objects of an API are also " "available using a target.object.attribute syntax. For instance:" msgstr "" #: ../identity_service_api_protection.rst:71 msgid "" "would ensure that Identity only deletes the user object in the same domain " "as the provided token." msgstr "" #: ../identity_service_api_protection.rst:74 msgid "" "Every target object has an ``id`` and a ``name`` available as ``target." "OBJECT.id`` and ``target.OBJECT.name``. Identity retrieves other attributes " "from the database, and the attributes vary between object types. The " "Identity service filters out some database fields, such as user passwords." msgstr "" #: ../identity_service_api_protection.rst:80 msgid "List of object attributes:" msgstr "" #: ../identity_service_api_protection.rst:114 msgid "" "The default ``policy.json`` file supplied provides a somewhat basic example " "of API protection, and does not assume any particular use of domains. Refer " "to ``policy.v3cloudsample.json`` as an example of multi-domain configuration " "installations where a cloud provider wants to delegate administration of the " "contents of a domain to a particular ``admin domain``. This example policy " "file also shows the use of an ``admin_domain`` to allow a cloud provider to " "enable administrators to have wider access across the APIs."
msgstr "" #: ../identity_service_api_protection.rst:123 msgid "" "A clean installation could start with the standard policy file, to allow " "creation of the ``admin_domain`` with the first users within it. You could " "then obtain the ``domain_id`` of the admin domain, paste the ID into a " "modified version of ``policy.v3cloudsample.json``, and then enable it as the " "main ``policy file``." msgstr "" #: ../identity_start.rst:3 msgid "Start the Identity service" msgstr "" #: ../identity_start.rst:5 msgid "" "In Kilo and newer releases, the Identity service should use the Apache HTTP " "Server with the ``mod_wsgi`` module instead of the Eventlet library. Using " "the proper WSGI configuration, the Apache HTTP Server binds to ports 5000 " "and 35357 rather than the keystone process." msgstr "" #: ../identity_start.rst:10 msgid "" "For more information, see http://docs.openstack.org/developer/keystone/" "apache-httpd.html and https://git.openstack.org/cgit/openstack/keystone/tree/" "httpd." msgstr "" #: ../identity_troubleshoot.rst:3 msgid "Troubleshoot the Identity service" msgstr "" #: ../identity_troubleshoot.rst:5 msgid "" "To troubleshoot the Identity service, review the logs in the ``/var/log/" "keystone/keystone.log`` file." msgstr "" #: ../identity_troubleshoot.rst:10 msgid "" "Use the ``/etc/keystone/logging.conf`` file to configure the location of log " "files." msgstr "" #: ../identity_troubleshoot.rst:13 msgid "" "The logs show the components that have come in to the WSGI request, and " "ideally show an error that explains why an authorization request failed. If " "you do not see the request in the logs, run keystone with the :option:`--" "debug` parameter. Pass the :option:`--debug` parameter before the command " "parameters." 
msgstr "" #: ../identity_troubleshoot.rst:20 msgid "Debug PKI middleware" msgstr "" #: ../identity_troubleshoot.rst:25 msgid "" "If you receive an ``Invalid OpenStack Identity Credentials`` message when " "you access an OpenStack service, it might be caused by the " "changeover from UUID tokens to PKI tokens in the Grizzly release." msgstr "" #: ../identity_troubleshoot.rst:29 msgid "" "The PKI-based token validation scheme relies on certificates from Identity " "that are fetched through HTTP and stored in a local directory. The location " "for this directory is specified by the ``signing_dir`` configuration option." msgstr "" #: ../identity_troubleshoot.rst:37 msgid "In your service's configuration file, look for a section like this:" msgstr "" #: ../identity_troubleshoot.rst:48 msgid "" "The first thing to check is that the ``signing_dir`` does, in fact, exist. " "If it does, check for certificate files:" msgstr "" #: ../identity_troubleshoot.rst:62 msgid "" "This directory contains two certificates and the token revocation list. If " "these files are not present, your service cannot fetch them from Identity. " "To troubleshoot, try to talk to Identity to make sure it correctly serves " "files, as follows:" msgstr "" #: ../identity_troubleshoot.rst:71 msgid "This command fetches the signing certificate:" msgstr "" #: ../identity_troubleshoot.rst:86 msgid "Note the expiration dates of the certificate:" msgstr "" #: ../identity_troubleshoot.rst:93 msgid "" "The token revocation list is updated once a minute, but the certificates are " "not. One possible problem is that the certificates are the wrong files or " "garbage. You can remove these files and run another command against your " "server; they are fetched on demand." msgstr "" #: ../identity_troubleshoot.rst:98 msgid "" "The Identity service log should show the access of the certificate files. " "You might have to turn up your logging levels. 
Set ``debug = True`` in your " "Identity configuration file and restart the Identity server." msgstr "" #: ../identity_troubleshoot.rst:109 msgid "" "If the files do not appear in your directory after this, it is likely one of " "the following issues:" msgstr "" #: ../identity_troubleshoot.rst:112 msgid "" "Your service is configured incorrectly and cannot talk to Identity. Check " "the ``auth_port`` and ``auth_host`` values and make sure that you can talk " "to that service through cURL, as shown previously." msgstr "" #: ../identity_troubleshoot.rst:116 msgid "" "Your signing directory is not writable. Use the ``chmod`` command to change " "its permissions so that the service (POSIX) user can write to it. Verify the " "change through ``su`` and ``touch`` commands." msgstr "" #: ../identity_troubleshoot.rst:120 msgid "The SELinux policy is denying access to the directory." msgstr "" #: ../identity_troubleshoot.rst:122 msgid "" "SELinux troubles often occur when you use Fedora or RHEL-based packages and " "you choose configuration options that do not match the standard policy. Run " "the ``setenforce permissive`` command. If that makes a difference, you " "should relabel the directory. If you are using a sub-directory of the ``/var/" "cache/`` directory, run the following command:" msgstr "" #: ../identity_troubleshoot.rst:132 msgid "" "If you are not using a ``/var/cache`` sub-directory, you should. Modify the " "``signing_dir`` configuration option for your service and restart." msgstr "" #: ../identity_troubleshoot.rst:135 msgid "" "Set back to ``setenforce enforcing`` to confirm that your changes solve the " "problem." msgstr "" #: ../identity_troubleshoot.rst:138 msgid "" "If your certificates are fetched on demand, the PKI validation is working " "properly. Most likely, the token from Identity is not valid for the " "operation you are attempting to perform, and your user needs a different " "role for the operation." 
msgstr "" #: ../identity_troubleshoot.rst:144 msgid "Debug signing key file errors" msgstr "" #: ../identity_troubleshoot.rst:149 msgid "" "If an error occurs when the signing key file opens, it is possible that the " "person who ran the :command:`keystone-manage pki_setup` command to generate " "certificates and keys did not use the correct user." msgstr "" #: ../identity_troubleshoot.rst:156 msgid "" "When you run the :command:`keystone-manage pki_setup` command, Identity " "generates a set of certificates and keys in ``/etc/keystone/ssl*``, which is " "owned by ``root:root``. This can present a problem when you run the Identity " "daemon under the keystone user account (nologin) when you try to run PKI. " "Unless you run the :command:`chown` command against the files ``keystone:" "keystone``, or run the :command:`keystone-manage pki_setup` command with " "the :option:`--keystone-user` and :option:`--keystone-group` parameters, you " "will get an error. For example:" msgstr "" #: ../identity_troubleshoot.rst:176 msgid "Flush expired tokens from the token database table" msgstr "" #: ../identity_troubleshoot.rst:181 msgid "" "As you generate tokens, the token database table on the Identity server " "grows." msgstr "" #: ../identity_troubleshoot.rst:187 msgid "" "To clear the token table, an administrative user must run the :command:" "`keystone-manage token_flush` command to flush the tokens. When you flush " "tokens, expired tokens are deleted and traceability is eliminated." msgstr "" #: ../identity_troubleshoot.rst:191 msgid "" "Use ``cron`` to schedule this command to run frequently based on your " "workload. For large workloads, running it every minute is recommended." msgstr "" #: ../index.rst:3 msgid "OpenStack Administrator Guide" msgstr "" #: ../index.rst:6 msgid "Abstract" msgstr "" #: ../index.rst:8 msgid "" "OpenStack offers open source software for OpenStack administrators to manage " "and troubleshoot an OpenStack cloud." 
msgstr "" #: ../index.rst:11 msgid "" "This guide documents OpenStack Mitaka, OpenStack Liberty, and OpenStack Kilo " "releases." msgstr "" #: ../index.rst:15 msgid "Contents" msgstr "" #: ../index.rst:39 msgid "Search in this guide" msgstr "" #: ../index.rst:41 msgid ":ref:`search`" msgstr "" #: ../keystone_caching_layer.rst:4 msgid "Caching layer" msgstr "" #: ../keystone_caching_layer.rst:6 msgid "" "OpenStack Identity supports a caching layer that is above the configurable " "subsystems (for example, token, assignment). OpenStack Identity uses the " "`dogpile.cache `__ library " "which allows flexible cache back ends. The majority of the caching " "configuration options are set in the ``[cache]`` section of the ``keystone." "conf`` file. However, each section that has the capability to be cached " "usually has a caching boolean value that toggles caching." msgstr "" #: ../keystone_caching_layer.rst:15 msgid "" "So to enable only the token back end caching, set the values as follows:" msgstr "" #: ../keystone_caching_layer.rst:30 msgid "" "Since the Juno release, the default setting is enabled for subsystem " "caching, but the global toggle is disabled. As a result, no caching is " "available unless the global toggle for ``[cache]`` is enabled by setting the " "value to ``true``." msgstr "" #: ../keystone_caching_layer.rst:36 msgid "Caching for tokens and token validation" msgstr "" #: ../keystone_caching_layer.rst:38 msgid "" "The token system has a separate ``cache_time`` configuration option, which " "can be set to a value above or below the global ``expiration_time`` default, " "allowing for different caching behavior from the other systems in OpenStack " "Identity. This option is set in the ``[token]`` section of the configuration " "file." msgstr "" #: ../keystone_caching_layer.rst:44 msgid "" "The token revocation list cache time is handled by the configuration option " "``revocation_cache_time`` in the ``[token]`` section. 
The revocation list is " "refreshed whenever a token is revoked. It typically sees significantly more " "requests than specific token retrievals or token validation calls." msgstr "" #: ../keystone_caching_layer.rst:50 msgid "" "Here is a list of actions that are affected by the cached time: getting a " "new token, revoking tokens, validating tokens, checking v2 tokens, and " "checking v3 tokens." msgstr "" #: ../keystone_caching_layer.rst:54 msgid "" "The delete token API calls invalidate the cache for the tokens being acted " "upon, as well as invalidating the cache for the revoked token list and the " "validate/check token calls." msgstr "" #: ../keystone_caching_layer.rst:58 msgid "" "Token caching is configurable independently of the ``revocation_list`` " "caching. Expiration checks have been lifted from the token drivers to the " "token manager. This ensures that cached tokens will still raise a " "``TokenNotFound`` flag when expired." msgstr "" #: ../keystone_caching_layer.rst:63 msgid "" "For cache consistency, all token IDs are transformed into the short token " "hash at the provider and token driver level. Some methods have access to the " "full ID (PKI Tokens), and some methods do not. Cache invalidation is " "inconsistent without token ID normalization." msgstr "" #: ../keystone_caching_layer.rst:69 msgid "Caching around assignment CRUD" msgstr "" #: ../keystone_caching_layer.rst:71 msgid "" "The assignment system has a separate ``cache_time`` configuration option, " "which can be set to a value above or below the global ``expiration_time`` " "default, allowing for different caching behavior from the other systems in " "the Identity service. This option is set in the ``[assignment]`` section of " "the configuration file." msgstr "" #: ../keystone_caching_layer.rst:77 msgid "" "Currently ``assignment`` has caching for ``project``, ``domain``, and " "``role`` specific requests (primarily around the CRUD actions). Caching is " "currently not implemented on grants. 
The ``list`` methods are not subject to " "caching." msgstr "" #: ../keystone_caching_layer.rst:82 msgid "" "Here is a list of actions that are affected by the assignment: assign domain " "API, assign project API, and assign role API." msgstr "" #: ../keystone_caching_layer.rst:85 msgid "" "The create, update, and delete actions for domains, projects and roles will " "perform proper invalidations of the cached methods listed above." msgstr "" #: ../keystone_caching_layer.rst:90 msgid "" "If a read-only ``assignment`` back end is in use, the cache will not " "immediately reflect changes on the back end. Any given change may take up to " "the ``cache_time`` (if set in the ``[assignment]`` section of the " "configuration file) or the global ``expiration_time`` (set in the " "``[cache]`` section of the configuration file) before it is reflected. If " "this type of delay (when using a read-only ``assignment`` back end) is an " "issue, it is recommended that caching be disabled on ``assignment``. To " "disable caching specifically on ``assignment``, in the ``[assignment]`` " "section of the configuration set ``caching`` to ``False``." msgstr "" #: ../keystone_caching_layer.rst:101 msgid "" "For more information about the different back ends (and configuration " "options), see:" msgstr "" #: ../keystone_caching_layer.rst:104 msgid "" "`dogpile.cache.backends.memory `__" msgstr "" #: ../keystone_caching_layer.rst:106 msgid "" "`dogpile.cache.backends.memcached `__" msgstr "" #: ../keystone_caching_layer.rst:110 msgid "" "The memory back end is not suitable for use in a production environment." 
msgstr "" #: ../keystone_caching_layer.rst:113 msgid "" "`dogpile.cache.backends.redis `__" msgstr "" #: ../keystone_caching_layer.rst:115 msgid "" "`dogpile.cache.backends.file `__" msgstr "" #: ../keystone_caching_layer.rst:117 msgid "``keystone.common.cache.backends.mongo``" msgstr "" #: ../keystone_caching_layer.rst:120 msgid "Configure the Memcached back end example" msgstr "" #: ../keystone_caching_layer.rst:122 msgid "The following example shows how to configure the memcached back end:" msgstr "" #: ../keystone_caching_layer.rst:132 msgid "" "You need to specify the URL to reach the ``memcached`` instance with the " "``backend_argument`` parameter." msgstr "" #: ../keystone_certificates_for_pki.rst:3 msgid "Certificates for PKI" msgstr "" #: ../keystone_certificates_for_pki.rst:5 msgid "" "PKI stands for Public Key Infrastructure. Tokens are documents, " "cryptographically signed using the X509 standard. In order to work correctly, " "token generation requires a public/private key pair. The public key must be " "signed in an X509 certificate, and the certificate used to sign it must be " "available as a :term:`Certificate Authority (CA)` certificate. These " "files can be generated either with the :command:`keystone-manage` utility " "or externally. The files need to be in the locations specified by " "the top level Identity service configuration file ``keystone.conf`` as " "specified in the above section. Additionally, the private key should only be " "readable by the system user that will run the Identity service." msgstr "" #: ../keystone_certificates_for_pki.rst:20 msgid "" "The certificates can be world readable, but the private key cannot be. The " "private key should only be readable by the account that is going to sign " "tokens. When generating files with the :command:`keystone-manage pki_setup` " "command, your best option is to run as the pki user. 
If you run :command:" "`keystone-manage` as root, you can append :option:`--keystone-user` and :" "option:`--keystone-group` parameters to set the user name and group keystone " "is going to run under." msgstr "" #: ../keystone_certificates_for_pki.rst:28 msgid "" "The values that specify where to read the certificates are under the " "``[signing]`` section of the configuration file. The configuration values " "are:" msgstr "" #: ../keystone_certificates_for_pki.rst:33 msgid "" "Location of certificate used to verify tokens. Default is ``/etc/keystone/" "ssl/certs/signing_cert.pem``." msgstr "" # #-#-#-#-# keystone_certificates_for_pki.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# keystone_configure_with_SSL.pot (Administrator Guide 0.9) #-#-#-#-# #: ../keystone_certificates_for_pki.rst:34 #: ../keystone_configure_with_SSL.rst:58 msgid "``certfile``" msgstr "" #: ../keystone_certificates_for_pki.rst:37 msgid "" "Location of private key used to sign tokens. Default is ``/etc/keystone/ssl/" "private/signing_key.pem``." msgstr "" # #-#-#-#-# keystone_certificates_for_pki.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# keystone_configure_with_SSL.pot (Administrator Guide 0.9) #-#-#-#-# #: ../keystone_certificates_for_pki.rst:38 #: ../keystone_configure_with_SSL.rst:63 msgid "``keyfile``" msgstr "" #: ../keystone_certificates_for_pki.rst:41 msgid "" "Location of certificate for the authority that issued the above certificate. " "Default is ``/etc/keystone/ssl/certs/ca.pem``." msgstr "" # #-#-#-#-# keystone_certificates_for_pki.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# keystone_configure_with_SSL.pot (Administrator Guide 0.9) #-#-#-#-# #: ../keystone_certificates_for_pki.rst:43 #: ../keystone_configure_with_SSL.rst:66 msgid "``ca_certs``" msgstr "" #: ../keystone_certificates_for_pki.rst:46 msgid "" "Location of the private key used by the CA. Default is ``/etc/keystone/ssl/" "private/cakey.pem``." 
msgstr "" #: ../keystone_certificates_for_pki.rst:47 msgid "``ca_key``" msgstr "" #: ../keystone_certificates_for_pki.rst:50 msgid "Default is ``2048``." msgstr "" #: ../keystone_certificates_for_pki.rst:50 msgid "``key_size``" msgstr "" #: ../keystone_certificates_for_pki.rst:53 msgid "Default is ``3650``." msgstr "" #: ../keystone_certificates_for_pki.rst:53 msgid "``valid_days``" msgstr "" #: ../keystone_certificates_for_pki.rst:56 msgid "" "Certificate subject (auto generated certificate) for token signing. Default " "is ``/C=US/ST=Unset/L=Unset/O=Unset/CN=www.example.com``." msgstr "" #: ../keystone_certificates_for_pki.rst:57 msgid "``cert_subject``" msgstr "" #: ../keystone_certificates_for_pki.rst:59 msgid "" "When generating certificates with the :command:`keystone-manage pki_setup` " "command, the ``ca_key``, ``key_size``, and ``valid_days`` configuration " "options are used." msgstr "" #: ../keystone_certificates_for_pki.rst:63 msgid "" "If the :command:`keystone-manage pki_setup` command is not used to generate " "certificates, or you are providing your own certificates, these values do " "not need to be set." msgstr "" #: ../keystone_certificates_for_pki.rst:67 msgid "" "If ``provider=keystone.token.providers.uuid.Provider`` in the ``[token]`` " "section of the keystone configuration, a typical token looks like " "``53f7f6ef0cc344b5be706bcc8b1479e1``. If ``provider=keystone.token.providers." "pki.Provider``, a typical token is a much longer string, such as::" msgstr "" #: ../keystone_certificates_for_pki.rst:102 msgid "Sign certificate issued by external CA" msgstr "" #: ../keystone_certificates_for_pki.rst:104 msgid "" "You can use a signing certificate issued by an external CA instead of " "generated by :command:`keystone-manage`. 
However, a certificate issued by an " "external CA must satisfy the following conditions:" msgstr "" #: ../keystone_certificates_for_pki.rst:108 msgid "" "All certificate and key files must be in Privacy Enhanced Mail (PEM) format" msgstr "" #: ../keystone_certificates_for_pki.rst:111 msgid "Private key files must not be protected by a password" msgstr "" #: ../keystone_certificates_for_pki.rst:113 msgid "" "When using a signing certificate issued by an external CA, you do not need " "to specify ``key_size``, ``valid_days``, and ``ca_password`` as they will be " "ignored." msgstr "" #: ../keystone_certificates_for_pki.rst:117 msgid "" "The basic workflow for using a signing certificate issued by an external CA " "involves:" msgstr "" #: ../keystone_certificates_for_pki.rst:120 msgid "Request Signing Certificate from External CA" msgstr "" #: ../keystone_certificates_for_pki.rst:122 msgid "Convert certificate and private key to PEM if needed" msgstr "" #: ../keystone_certificates_for_pki.rst:124 msgid "Install External Signing Certificate" msgstr "" #: ../keystone_certificates_for_pki.rst:127 msgid "Request a signing certificate from an external CA" msgstr "" #: ../keystone_certificates_for_pki.rst:129 msgid "" "One way to request a signing certificate from an external CA is to first " "generate a PKCS #10 Certificate Request Syntax (CRS) using OpenSSL CLI." msgstr "" #: ../keystone_certificates_for_pki.rst:132 msgid "" "Create a certificate request configuration file. For example, create the " "``cert_req.conf`` file, as follows:" msgstr "" #: ../keystone_certificates_for_pki.rst:154 msgid "" "Then generate a CRS with OpenSSL CLI. **Do not encrypt the generated private " "key. You must use the -nodes option.**" msgstr "" #: ../keystone_certificates_for_pki.rst:164 msgid "" "If everything is successful, you should end up with ``signing_cert_req.pem`` " "and ``signing_key.pem``. 
Send ``signing_cert_req.pem`` to your CA to request " "a token signing certificate and make sure to ask the certificate to be in " "PEM format. Also, make sure your trusted CA certificate chain is also in PEM " "format." msgstr "" #: ../keystone_certificates_for_pki.rst:171 msgid "Install an external signing certificate" msgstr "" #: ../keystone_certificates_for_pki.rst:173 msgid "Assuming you have the following already:" msgstr "" #: ../keystone_certificates_for_pki.rst:176 msgid "(Keystone token) signing certificate in PEM format" msgstr "" #: ../keystone_certificates_for_pki.rst:176 msgid "``signing_cert.pem``" msgstr "" #: ../keystone_certificates_for_pki.rst:179 msgid "Corresponding (non-encrypted) private key in PEM format" msgstr "" #: ../keystone_certificates_for_pki.rst:179 msgid "``signing_key.pem``" msgstr "" #: ../keystone_certificates_for_pki.rst:182 msgid "Trust CA certificate chain in PEM format" msgstr "" #: ../keystone_certificates_for_pki.rst:182 msgid "``cacert.pem``" msgstr "" #: ../keystone_certificates_for_pki.rst:184 msgid "Copy the above to your certificate directory. For example:" msgstr "" #: ../keystone_certificates_for_pki.rst:196 msgid "Make sure the certificate directory is only accessible by root." msgstr "" #: ../keystone_certificates_for_pki.rst:200 msgid "" "The procedure of copying the key and cert files may be improved if done " "after first running :command:`keystone-manage pki_setup` since this command " "also creates other needed files, such as the ``index.txt`` and ``serial`` " "files." msgstr "" #: ../keystone_certificates_for_pki.rst:205 msgid "" "Also, when copying the necessary files to a different server for replicating " "the functionality, the entire directory of files is needed, not just the key " "and cert files." 
msgstr "" #: ../keystone_certificates_for_pki.rst:209 msgid "" "If your certificate directory path is different from the default ``/etc/" "keystone/ssl/certs``, make sure it is reflected in the ``[signing]`` section " "of the configuration file." msgstr "" #: ../keystone_certificates_for_pki.rst:214 msgid "Switching out expired signing certificates" msgstr "" #: ../keystone_certificates_for_pki.rst:216 msgid "" "The following procedure details how to switch out expired signing " "certificates with no cloud outages." msgstr "" #: ../keystone_certificates_for_pki.rst:219 msgid "Generate a new signing key." msgstr "" #: ../keystone_certificates_for_pki.rst:221 msgid "Generate a new certificate request." msgstr "" #: ../keystone_certificates_for_pki.rst:223 msgid "" "Sign the new certificate with the existing CA to generate a new " "``signing_cert``." msgstr "" #: ../keystone_certificates_for_pki.rst:226 msgid "" "Append the new ``signing_cert`` to the old ``signing_cert``. Ensure the old " "certificate is in the file first." msgstr "" #: ../keystone_certificates_for_pki.rst:229 msgid "" "Remove all signing certificates from all your hosts to force OpenStack " "Compute to download the new ``signing_cert``." msgstr "" #: ../keystone_certificates_for_pki.rst:232 msgid "" "Replace the old signing key with the new signing key. Move the new signing " "certificate above the old certificate in the ``signing_cert`` file." msgstr "" #: ../keystone_certificates_for_pki.rst:236 msgid "" "After the old certificate reads as expired, you can safely remove the old " "signing certificate from the file." msgstr "" #: ../keystone_configure_with_SSL.rst:3 msgid "Configure the Identity service with SSL" msgstr "" #: ../keystone_configure_with_SSL.rst:5 msgid "You can configure the Identity service to support two-way SSL." msgstr "" #: ../keystone_configure_with_SSL.rst:7 msgid "You must obtain the x509 certificates externally and configure them." 
msgstr "" #: ../keystone_configure_with_SSL.rst:9 msgid "" "The Identity service provides a set of sample certificates in the ``examples/" "pki/certs`` and ``examples/pki/private`` directories:" msgstr "" #: ../keystone_configure_with_SSL.rst:13 msgid "Certificate Authority chain to validate against." msgstr "" #: ../keystone_configure_with_SSL.rst:13 msgid "cacert.pem" msgstr "" #: ../keystone_configure_with_SSL.rst:16 msgid "Public certificate for Identity service server." msgstr "" #: ../keystone_configure_with_SSL.rst:16 msgid "ssl\\_cert.pem" msgstr "" #: ../keystone_configure_with_SSL.rst:19 msgid "Public and private certificate for Identity service middleware/client." msgstr "" #: ../keystone_configure_with_SSL.rst:20 msgid "middleware.pem" msgstr "" #: ../keystone_configure_with_SSL.rst:23 msgid "Private key for the CA." msgstr "" #: ../keystone_configure_with_SSL.rst:23 msgid "cakey.pem" msgstr "" #: ../keystone_configure_with_SSL.rst:26 msgid "Private key for the Identity service server." msgstr "" #: ../keystone_configure_with_SSL.rst:26 msgid "ssl\\_key.pem" msgstr "" #: ../keystone_configure_with_SSL.rst:30 msgid "" "You can choose names for these certificates. You can also combine public/" "private keys in the same file, if you wish. These certificates are provided " "as an example." msgstr "" #: ../keystone_configure_with_SSL.rst:35 msgid "Client authentication with keystone-all" msgstr "" #: ../keystone_configure_with_SSL.rst:37 msgid "" "When running ``keystone-all``, the server can be configured to enable SSL " "with client authentication using the following instructions. Modify the " "``[eventlet_server_ssl]`` section in the ``/etc/keystone/keystone.conf`` " "file. The following SSL configuration example uses the included sample " "certificates:" msgstr "" #: ../keystone_configure_with_SSL.rst:52 msgid "**Options**" msgstr "" #: ../keystone_configure_with_SSL.rst:55 msgid "``True`` enables SSL. Default is ``False``." 
msgstr "" #: ../keystone_configure_with_SSL.rst:55 msgid "``enable``" msgstr "" #: ../keystone_configure_with_SSL.rst:58 msgid "Path to the Identity service public certificate file." msgstr "" #: ../keystone_configure_with_SSL.rst:61 msgid "" "Path to the Identity service private certificate file. If you include the " "private key in the certfile, you can omit the keyfile." msgstr "" #: ../keystone_configure_with_SSL.rst:66 msgid "Path to the CA trust chain." msgstr "" #: ../keystone_configure_with_SSL.rst:69 msgid "Requires client certificate. Default is ``False``." msgstr "" #: ../keystone_configure_with_SSL.rst:69 msgid "``cert_required``" msgstr "" #: ../keystone_configure_with_SSL.rst:71 msgid "" "When running the Identity service as a WSGI service in a web server such as " "Apache httpd, this configuration is done in the web server instead. In this " "case the options in the ``[eventlet_server_ssl]`` section are ignored." msgstr "" #: ../keystone_external_authentication.rst:3 msgid "External authentication with Identity" msgstr "" #: ../keystone_external_authentication.rst:5 msgid "" "When Identity runs in ``apache-httpd``, you can use external authentication " "methods that differ from the authentication provided by the identity store " "back end. For example, you can use an SQL identity back end together with " "X.509 authentication and Kerberos, instead of using the user name and " "password combination." msgstr "" #: ../keystone_external_authentication.rst:12 msgid "Use HTTPD authentication" msgstr "" #: ../keystone_external_authentication.rst:14 msgid "" "Web servers, like Apache HTTP, support many methods of authentication. " "Identity can allow the web server to perform the authentication. The web " "server then passes the authenticated user to Identity by using the " "``REMOTE_USER`` environment variable. This user must already exist in the " "Identity back end to get a token from the controller. 
To use this method, " "Identity should run on ``apache-httpd``." msgstr "" #: ../keystone_external_authentication.rst:22 msgid "Use X.509" msgstr "" #: ../keystone_external_authentication.rst:24 msgid "" "The following Apache configuration snippet authenticates the user based on a " "valid X.509 certificate from a known CA:" msgstr "" #: ../keystone_fernet_token_faq.rst:3 msgid "Fernet - Frequently Asked Questions" msgstr "" #: ../keystone_fernet_token_faq.rst:5 msgid "" "The following questions have been asked periodically since the initial " "release of the fernet token format in Kilo." msgstr "" #: ../keystone_fernet_token_faq.rst:9 msgid "What are the different types of keys?" msgstr "" #: ../keystone_fernet_token_faq.rst:11 msgid "" "A key repository is required by keystone in order to create fernet tokens. " "These keys are used to encrypt and decrypt the information that makes up the " "payload of the token. Each key in the repository can have one of three " "states. The state of the key determines how keystone uses a key with fernet " "tokens. The different types are as follows:" msgstr "" #: ../keystone_fernet_token_faq.rst:18 msgid "" "There is only ever one primary key in a key repository. The primary key is " "allowed to encrypt and decrypt tokens. This key is always named as the " "highest index in the repository." msgstr "" #: ../keystone_fernet_token_faq.rst:19 msgid "Primary key:" msgstr "" #: ../keystone_fernet_token_faq.rst:22 msgid "" "A secondary key was at one point a primary key, but has been demoted in " "place of another primary key. It is only allowed to decrypt tokens. Since it " "was the primary at some point in time, its existence in the key repository " "is justified. Keystone needs to be able to decrypt tokens that were created " "with old primary keys." 
msgstr "" #: ../keystone_fernet_token_faq.rst:25 msgid "Secondary key:" msgstr "" #: ../keystone_fernet_token_faq.rst:28 msgid "" "The staged key is a special key that shares some similarities with secondary " "keys. There can only ever be one staged key in a repository and it must " "exist. Just like secondary keys, staged keys have the ability to decrypt " "tokens. Unlike secondary keys, staged keys have never been a primary key. In " "fact, they are opposites since the staged key will always be the next " "primary key. This helps clarify the name because they are the next key " "staged to be the primary key. This key is always named as ``0`` in the key " "repository." msgstr "" #: ../keystone_fernet_token_faq.rst:34 msgid "Staged key:" msgstr "" #: ../keystone_fernet_token_faq.rst:37 msgid "So, how does a staged key help me and why do I care about it?" msgstr "" #: ../keystone_fernet_token_faq.rst:39 msgid "" "The fernet keys have a natural lifecycle. Each key starts as a staged key, " "is promoted to be the primary key, and then demoted to be a secondary key. " "New tokens can only be encrypted with a primary key. Secondary and staged " "keys are never used to encrypt token. The staged key is a special key given " "the order of events and the attributes of each type of key. The staged key " "is the only key in the repository that has not had a chance to encrypt any " "tokens yet, but it is still allowed to decrypt tokens. As an operator, this " "gives you the chance to perform a key rotation on one keystone node, and " "distribute the new key set over a span of time. This does not require the " "distribution to take place in an ultra short period of time. Tokens " "encrypted with a primary key can be decrypted, and validated, on other nodes " "where that key is still staged." msgstr "" #: ../keystone_fernet_token_faq.rst:52 msgid "Where do I put my key repository?" 
msgstr "" #: ../keystone_fernet_token_faq.rst:54 msgid "" "The key repository is specified using the ``key_repository`` option in the " "keystone configuration file. The keystone process should be able to read and " "write to this location but it should be kept secret otherwise. Currently, " "keystone only supports file-backed key repositories." msgstr "" #: ../keystone_fernet_token_faq.rst:65 msgid "What is the recommended way to rotate and distribute keys?" msgstr "" #: ../keystone_fernet_token_faq.rst:67 msgid "" "The :command:`keystone-manage` command line utility includes a key rotation " "mechanism. This mechanism will initialize and rotate keys but does not make " "an effort to distribute keys across keystone nodes. The distribution of keys " "across a keystone deployment is best handled through configuration " "management tooling. Use :command:`keystone-manage fernet_rotate` to rotate " "the key repository." msgstr "" #: ../keystone_fernet_token_faq.rst:75 msgid "Do fernet tokens still expire?" msgstr "" #: ../keystone_fernet_token_faq.rst:77 msgid "" "Yes, fernet tokens can expire just like any other keystone token formats." msgstr "" #: ../keystone_fernet_token_faq.rst:80 msgid "Why should I choose fernet tokens over UUID tokens?" msgstr "" #: ../keystone_fernet_token_faq.rst:82 msgid "" "Even though fernet tokens operate very similarly to UUID tokens, they do not " "require persistence. The keystone token database no longer suffers bloat as " "a side effect of authentication. Pruning expired tokens from the token " "database is no longer required when using fernet tokens. Because fernet " "tokens do not require persistence, they do not have to be replicated. As " "long as each keystone node shares the same key repository, fernet tokens can " "be created and validated instantly across nodes." msgstr "" #: ../keystone_fernet_token_faq.rst:91 msgid "Why should I choose fernet tokens over PKI or PKIZ tokens?" 
msgstr "" #: ../keystone_fernet_token_faq.rst:93 msgid "" "The arguments for using fernet over PKI and PKIZ remain the same as UUID, in " "addition to the fact that fernet tokens are much smaller than PKI and PKIZ " "tokens. PKI and PKIZ tokens still require persistent storage and can " "sometimes cause issues due to their size. This issue is mitigated when " "switching to fernet because fernet tokens are kept under a 250 byte limit. " "PKI and PKIZ tokens typically exceed 1600 bytes in length. The length of a " "PKI or PKIZ token is dependent on the size of the deployment. Bigger service " "catalogs will result in longer token lengths. This pattern does not exist " "with fernet tokens because the contents of the encrypted payload is kept to " "a minimum." msgstr "" #: ../keystone_fernet_token_faq.rst:104 msgid "" "Should I rotate and distribute keys from the same keystone node every " "rotation?" msgstr "" #: ../keystone_fernet_token_faq.rst:106 msgid "" "No, but the relationship between rotation and distribution should be lock-" "step. Once you rotate keys on one keystone node, the key repository from " "that node should be distributed to the rest of the cluster. Once you confirm " "that each node has the same key repository state, you could rotate and " "distribute from any other node in the cluster." msgstr "" #: ../keystone_fernet_token_faq.rst:112 msgid "" "If the rotation and distribution are not lock-step, a single keystone node " "in the deployment will create tokens with a primary key that no other node " "has as a staged key. This will cause tokens generated from one keystone node " "to fail validation on other keystone nodes." msgstr "" #: ../keystone_fernet_token_faq.rst:118 msgid "How do I add new keystone nodes to a deployment?" msgstr "" #: ../keystone_fernet_token_faq.rst:120 msgid "" "The keys used to create fernet tokens should be treated like super secret " "configuration files, similar to an SSL secret key. 
Before a node is allowed " "to join an existing cluster, issuing and validating tokens, it should have " "the same key repository as the rest of the nodes in the cluster." msgstr "" #: ../keystone_fernet_token_faq.rst:126 msgid "How should I approach key distribution?" msgstr "" #: ../keystone_fernet_token_faq.rst:128 msgid "" "Remember that key distribution is only required in multi-node keystone " "deployments. If you only have one keystone node serving requests in your " "deployment, key distribution is unnecessary." msgstr "" #: ../keystone_fernet_token_faq.rst:132 msgid "" "Key distribution is a problem best approached from the deployment's current " "configuration management system. Since not all deployments use the same " "configuration management systems, it makes sense to explore options around " "what is already available for managing keys, while keeping the secrecy of " "the keys in mind. Many configuration management tools can leverage something " "like ``rsync`` to manage key distribution." msgstr "" #: ../keystone_fernet_token_faq.rst:139 msgid "" "Key rotation is a single operation that promotes the current staged key to " "primary, creates a new staged key, and prunes old secondary keys. It is " "easiest to do this on a single node and verify the rotation took place " "properly before distributing the key repository to the rest of the cluster. " "The concept behind the staged key breaks the expectation that key rotation " "and key distribution have to be done in a single step. With the staged key, " "we have time to inspect the new key repository before syncing state with the " "rest of the cluster. Key distribution should be an operation that can run in " "succession until it succeeds. The following might help illustrate the " "isolation between key rotation and key distribution." msgstr "" #: ../keystone_fernet_token_faq.rst:150 msgid "" "Ensure all keystone nodes in the deployment have the same key repository." 
msgstr "" #: ../keystone_fernet_token_faq.rst:151 msgid "Pick a keystone node in the cluster to rotate from." msgstr "" #: ../keystone_fernet_token_faq.rst:152 msgid "Rotate keys." msgstr "" #: ../keystone_fernet_token_faq.rst:154 ../keystone_fernet_token_faq.rst:176 msgid "Was it successful?" msgstr "" #: ../keystone_fernet_token_faq.rst:156 msgid "" "If no, investigate issues with the particular keystone node you rotated keys " "on. Fernet keys are small and the operation for rotation is trivial. There " "should not be much room for error in key rotation. It is possible that the " "user does not have the ability to write new keys to the key repository. Log " "output from ``keystone-manage fernet_rotate`` should give more information " "into specific failures." msgstr "" #: ../keystone_fernet_token_faq.rst:164 msgid "" "If yes, you should see a new staged key. The old staged key should be the " "new primary. Depending on the ``max_active_keys`` limit you might have " "secondary keys that were pruned. At this point, the node that you rotated on " "will be creating fernet tokens with a primary key that all other nodes " "should have as the staged key. This is why we checked the state of all key " "repositories in Step one. All other nodes in the cluster should be able to " "decrypt tokens created with the new primary key. At this point, we are ready " "to distribute the new key set." msgstr "" #: ../keystone_fernet_token_faq.rst:174 msgid "Distribute the new key repository." msgstr "" #: ../keystone_fernet_token_faq.rst:178 msgid "" "If yes, you should be able to confirm that all nodes in the cluster have the " "same key repository that was introduced in Step 3. All nodes in the cluster " "will be creating tokens with the primary key that was promoted in Step 3. No " "further action is required until the next schedule key rotation." msgstr "" #: ../keystone_fernet_token_faq.rst:184 msgid "" "If no, try distributing again. 
Remember that we already rotated the " "repository and performing another rotation at this point will result in " "tokens that cannot be validated across certain hosts. Specifically, the " "hosts that did not get the latest key set. You should be able to distribute " "keys until it is successful. If certain nodes have issues syncing, it could " "be permission or network issues and those should be resolved before " "subsequent rotations." msgstr "" #: ../keystone_fernet_token_faq.rst:193 msgid "How long should I keep my keys around?" msgstr "" #: ../keystone_fernet_token_faq.rst:195 msgid "" "The fernet tokens that keystone creates are only as secure as the keys creating " "them. With staged keys the penalty of key rotation is low, allowing you to " "err on the side of security and rotate weekly, daily, or even hourly. " "Ultimately, this should be less time than it takes an attacker to break an " "``AES256`` key and a ``SHA256 HMAC``." msgstr "" #: ../keystone_fernet_token_faq.rst:202 msgid "Is a fernet token still a bearer token?" msgstr "" #: ../keystone_fernet_token_faq.rst:204 msgid "" "Yes, and they follow exactly the same validation path as UUID tokens, with " "the exception of being written to, and read from, a back end. If someone " "compromises your fernet token, they have the power to do all the operations " "you are allowed to do." msgstr "" #: ../keystone_fernet_token_faq.rst:210 msgid "What if I need to revoke all my tokens?" msgstr "" #: ../keystone_fernet_token_faq.rst:212 msgid "" "To invalidate every token issued from keystone and start fresh, remove the " "current key repository, create a new key set, and redistribute it to all " "nodes in the cluster. This will render every token issued from keystone as " "invalid regardless of whether the token has actually expired. When a client goes to " "re-authenticate, the new token will have been created with a new fernet key." 
msgstr "" #: ../keystone_fernet_token_faq.rst:219 msgid "" "What can an attacker do if they compromise a fernet key in my deployment?" msgstr "" #: ../keystone_fernet_token_faq.rst:221 msgid "" "If any key used in the key repository is compromised, an attacker will be " "able to build their own tokens. If they know the ID of an administrator on a " "project, they could generate administrator tokens for the project. They will " "be able to generate their own tokens until the compromised key has been " "removed from from the repository." msgstr "" #: ../keystone_fernet_token_faq.rst:228 msgid "I rotated keys and now tokens are invalidating early, what did I do?" msgstr "" #: ../keystone_fernet_token_faq.rst:230 msgid "" "Using fernet tokens requires some awareness around token expiration and the " "key lifecycle. You do not want to rotate so often that secondary keys are " "removed that might still be needed to decrypt unexpired tokens. If this " "happens, you will not be able to decrypt the token because the key the was " "used to encrypt it is now gone. Only remove keys that you know are not being " "used to encrypt or decrypt tokens." msgstr "" #: ../keystone_fernet_token_faq.rst:237 msgid "" "For example, your token is valid for 24 hours and we want to rotate keys " "every six hours. We will need to make sure tokens that were created at 08:00 " "AM on Monday are still valid at 07:00 AM on Tuesday, assuming they were not " "prematurely revoked. To accomplish this, we will want to make sure we set " "``max_active_keys=6`` in our keystone configuration file. This will allow us " "to hold all keys that might still be required to validate a previous token, " "but keeps the key repository limited to only the keys that are needed." msgstr "" #: ../keystone_fernet_token_faq.rst:245 msgid "" "The number of ``max_active_keys`` for a deployment can be determined by " "dividing the token lifetime, in hours, by the frequency of rotation in hours " "and adding two. 
Better illustrated as::" msgstr "" #: ../keystone_fernet_token_faq.rst:253 msgid "" "The reason for adding two additional keys to the count is to include the " "staged key and a buffer key. This can be shown based on the previous " "example. We initially set up the key repository at 6:00 AM on Monday, and the " "initial state looks like:" msgstr "" #: ../keystone_fernet_token_faq.rst:266 msgid "" "All tokens created after 6:00 AM are encrypted with key ``1``. At 12:00 PM " "we will rotate keys again, resulting in:" msgstr "" #: ../keystone_fernet_token_faq.rst:278 msgid "" "We are still able to validate tokens created between 6:00 - 11:59 AM because " "the ``1`` key still exists as a secondary key. All tokens issued after 12:00 " "PM will be encrypted with key ``2``. At 6:00 PM we do our next rotation, " "resulting in:" msgstr "" #: ../keystone_fernet_token_faq.rst:293 msgid "" "It is still possible to validate tokens issued from 6:00 AM - 5:59 PM " "because keys ``1`` and ``2`` exist as secondary keys. Every token issued " "until 11:59 PM will be encrypted with key ``3``, and at 12:00 AM we do our " "next rotation:" msgstr "" #: ../keystone_fernet_token_faq.rst:308 msgid "" "Just like before, we can still validate tokens issued from 6:00 AM the " "previous day until 5:59 AM today because keys ``1`` - ``4`` are present. At " "6:00 AM, tokens issued from the previous day will start to expire and we do " "our next scheduled rotation:" msgstr "" #: ../keystone_fernet_token_faq.rst:325 msgid "" "Tokens will naturally expire after 6:00 AM, but we will not be able to " "remove key ``1`` until the next rotation because it encrypted all tokens " "from 6:00 AM to 12:00 PM the day before. 
Once we do our next rotation, which " "is at 12:00 PM, the ``1`` key will be pruned from the repository:" msgstr "" #: ../keystone_fernet_token_faq.rst:342 msgid "" "If keystone were to receive a token that was created between 6:00 AM and " "12:00 PM the day before, encrypted with the ``1`` key, it would not be valid " "because it was already expired. This makes it possible for us to remove the " "``1`` key from the repository without negative validation side-effects." msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:5 msgid "Integrate assignment back end with LDAP" msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:7 msgid "" "When you configure the OpenStack Identity service to use LDAP servers, you " "can split authentication and authorization using the *assignment* feature. " "Integrating the *assignment* back end with LDAP allows administrators to use " "projects (tenants), roles, domains, and role assignments in LDAP." msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:15 msgid "" "Be aware of domain-specific back end limitations when configuring OpenStack " "Identity. The OpenStack Identity service does not support domain-specific " "assignment back ends. Using LDAP as an assignment back end is not " "recommended." msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:23 msgid "" "For OpenStack Identity assignments to access LDAP servers, you must define " "the destination LDAP server in the ``keystone.conf`` file. For more " "information, see :ref:`integrate-identity-with-ldap`." msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:27 msgid "**To integrate assignment back ends with LDAP**" msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:29 msgid "" "Enable the assignment driver. In the ``[assignment]`` section, set the " "``driver`` configuration key to ``keystone.assignment.backends.sql." 
"Assignment``:" msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:39 msgid "" "Create the organizational units (OU) in the LDAP directory, and define their " "corresponding location in the ``keystone.conf`` file:" msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:53 msgid "" "These schema attributes are extensible for compatibility with various " "schemas. For example, this entry maps to the groupOfNames attribute in " "Active Directory:" msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:61 msgid "" "A read-only implementation is recommended for LDAP integration. These " "permissions are applied to object types in the ``keystone.conf`` file:" msgstr "" # #-#-#-#-# keystone_integrate_assignment_backend_ldap.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Administrator Guide 0.9) #-#-#-#-# #: ../keystone_integrate_assignment_backend_ldap.rst:75 #: ../keystone_integrate_identity_backend_ldap.rst:64 #: ../keystone_integrate_identity_backend_ldap.rst:92 #: ../keystone_integrate_identity_backend_ldap.rst:168 msgid "Restart the OpenStack Identity service." msgstr "" # #-#-#-#-# keystone_integrate_assignment_backend_ldap.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Administrator Guide 0.9) #-#-#-#-# #: ../keystone_integrate_assignment_backend_ldap.rst:79 #: ../keystone_integrate_identity_backend_ldap.rst:68 #: ../keystone_integrate_identity_backend_ldap.rst:96 #: ../keystone_integrate_identity_backend_ldap.rst:172 #: ../keystone_integrate_identity_backend_ldap.rst:242 msgid "" "During service restart, authentication and authorization are unavailable." 
msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:82 msgid "**Additional LDAP integration settings.**" msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:84 msgid "" "Set these options in the ``/etc/keystone/keystone.conf`` file for a single " "LDAP server, or ``/etc/keystone/domains/keystone.DOMAIN_NAME.conf`` files " "for multiple back ends." msgstr "" # #-#-#-#-# keystone_integrate_assignment_backend_ldap.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Administrator Guide 0.9) #-#-#-#-# #: ../keystone_integrate_assignment_backend_ldap.rst:89 #: ../keystone_integrate_identity_backend_ldap.rst:183 msgid "Use filters to control the scope of data presented through LDAP." msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:100 msgid "Filtering method" msgstr "" # #-#-#-#-# keystone_integrate_assignment_backend_ldap.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Administrator Guide 0.9) #-#-#-#-# #: ../keystone_integrate_assignment_backend_ldap.rst:100 #: ../keystone_integrate_identity_backend_ldap.rst:189 msgid "Filters" msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:103 msgid "" "Mask account status values (include any additional attribute mappings) for " "compatibility with various directory services. Superfluous accounts are " "filtered with user\\_filter." msgstr "" # #-#-#-#-# keystone_integrate_assignment_backend_ldap.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Administrator Guide 0.9) #-#-#-#-# #: ../keystone_integrate_assignment_backend_ldap.rst:107 #: ../keystone_integrate_identity_backend_ldap.rst:196 msgid "Setting attribute ignore to list of attributes stripped off on update." 
msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:126 msgid "Assignment attribute mapping" msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:129 msgid "" "An alternative method to determine if a project is enabled or not is to " "check if that project is a member of the emulation group." msgstr "" #: ../keystone_integrate_assignment_backend_ldap.rst:132 msgid "" "Use DN of the group entry to hold enabled projects when using enabled " "emulation." msgstr "" # #-#-#-#-# keystone_integrate_assignment_backend_ldap.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Administrator Guide 0.9) #-#-#-#-# #: ../keystone_integrate_assignment_backend_ldap.rst:138 #: ../keystone_integrate_identity_backend_ldap.rst:235 msgid "Enabled emulation" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:5 msgid "Integrate Identity back end with LDAP" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:7 msgid "" "The Identity back end contains information for users, groups, and group " "member lists. Integrating the Identity back end with LDAP allows " "administrators to use users and groups in LDAP." msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:13 msgid "" "For OpenStack Identity service to access LDAP servers, you must define the " "destination LDAP server in the ``keystone.conf`` file. For more information, " "see :ref:`integrate-identity-with-ldap`." msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:17 msgid "**To integrate one Identity back end with LDAP**" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:19 msgid "" "Enable the LDAP Identity driver in the ``keystone.conf`` file. 
This allows " "LDAP as an identity back end:" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:28 msgid "" "Create the organizational units (OU) in the LDAP directory, and define the " "corresponding location in the ``keystone.conf`` file:" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:42 #: ../keystone_integrate_identity_backend_ldap.rst:145 msgid "" "These schema attributes are extensible for compatibility with various " "schemas. For example, this entry maps to the person attribute in Active " "Directory:" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:50 msgid "" "A read-only implementation is recommended for LDAP integration. These " "permissions are applied to object types in the ``keystone.conf``:" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:71 msgid "**To integrate multiple Identity back ends with LDAP**" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:73 msgid "Set the following options in the ``/etc/keystone/keystone.conf`` file:" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:76 msgid "Enable the LDAP driver:" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:84 msgid "Enable domain-specific drivers:" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:99 msgid "" "List the domains using the dashboard, or the OpenStackClient CLI. Refer to " "the `Command List `__ for a list of OpenStackClient commands." msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:104 msgid "Create domains using OpenStack dashboard, or the OpenStackClient CLI." msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:106 msgid "" "For each domain, create a domain-specific configuration file in the ``/etc/" "keystone/domains`` directory. Use the file naming convention ``keystone." "DOMAIN_NAME.conf``, where DOMAIN\\_NAME is the domain name assigned in the " "previous step." 
msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:113 msgid "" "The options set in the ``/etc/keystone/domains/keystone.DOMAIN_NAME.conf`` " "file will override options in the ``/etc/keystone/keystone.conf`` file." msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:117 msgid "" "Define the destination LDAP server in the ``/etc/keystone/domains/keystone." "DOMAIN_NAME.conf`` file. For example:" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:130 msgid "" "Create the organizational units (OU) in the LDAP directories, and define " "their corresponding locations in the ``/etc/keystone/domains/keystone." "DOMAIN_NAME.conf`` file. For example:" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:153 msgid "" "A read-only implementation is recommended for LDAP integration. These " "permissions are applied to object types in the ``/etc/keystone/domains/" "keystone.DOMAIN_NAME.conf`` file:" msgstr "" # #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# keystone_integrate_with_ldap.pot (Administrator Guide 0.9) #-#-#-#-# #: ../keystone_integrate_identity_backend_ldap.rst:175 #: ../keystone_integrate_with_ldap.rst:79 msgid "**Additional LDAP integration settings**" msgstr "" # #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# keystone_integrate_with_ldap.pot (Administrator Guide 0.9) #-#-#-#-# #: ../keystone_integrate_identity_backend_ldap.rst:177 #: ../keystone_integrate_with_ldap.rst:81 msgid "" "Set these options in the ``/etc/keystone/keystone.conf`` file for a single " "LDAP server, or ``/etc/keystone/domains/keystone.DOMAIN_NAME.conf`` files " "for multiple back ends. Example configurations appear below each setting " "summary:" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:192 msgid "" "Mask account status values (include any additional attribute mappings) for " "compatibility with various directory services. 
Superfluous accounts are " "filtered with ``user_filter``." msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:199 msgid "" "For example, you can mask Active Directory account status attributes in the " "``keystone.conf`` file:" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:222 msgid "Identity attribute mapping" msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:225 msgid "" "An alternative method to determine whether a user is enabled is to check " "whether that user is a member of the emulation group." msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:228 msgid "" "Use the DN of the group entry that holds enabled users when using enabled " "emulation." msgstr "" #: ../keystone_integrate_identity_backend_ldap.rst:237 msgid "" "When you have finished the configuration, restart the OpenStack Identity " "service." msgstr "" #: ../keystone_integrate_with_ldap.rst:5 msgid "Integrate Identity with LDAP" msgstr "" #: ../keystone_integrate_with_ldap.rst:14 msgid "" "The OpenStack Identity service supports integration with existing LDAP " "directories for authentication and authorization services." msgstr "" #: ../keystone_integrate_with_ldap.rst:17 msgid "" "When the OpenStack Identity service is configured to use LDAP back ends, you " "can split authentication (using the *identity* feature) and authorization " "(using the *assignment* feature)." msgstr "" #: ../keystone_integrate_with_ldap.rst:21 msgid "" "The *identity* feature enables administrators to manage users and groups per " "domain or for the OpenStack Identity service as a whole." msgstr "" #: ../keystone_integrate_with_ldap.rst:24 msgid "" "The *assignment* feature enables administrators to manage project role " "authorization using the OpenStack Identity service SQL database, while " "providing user authentication through the LDAP directory." 
msgstr "" #: ../keystone_integrate_with_ldap.rst:30 msgid "" "For the OpenStack Identity service to access LDAP servers, you must enable " "the ``authlogin_nsswitch_use_ldap`` boolean value for SELinux on the server " "running the OpenStack Identity service. To enable and make the option " "persistent across reboots, set the following boolean value as the root user:" msgstr "" #: ../keystone_integrate_with_ldap.rst:40 msgid "" "The Identity configuration is split into two separate back ends: identity " "(back end for users and groups) and assignments (back end for domains, " "projects, roles, and role assignments). To configure Identity, set options " "in the ``/etc/keystone/keystone.conf`` file. See :ref:`integrate-identity-" "backend-ldap` for Identity back end configuration examples and :ref:" "`integrate-assignment-backend-ldap` for assignment back end configuration " "examples. Modify these examples as needed." msgstr "" #: ../keystone_integrate_with_ldap.rst:50 msgid "" "Multiple back ends are supported. You can integrate the OpenStack Identity " "service with a single LDAP server (configure both identity and assignments " "to use LDAP, or use SQL for one back end and LDAP for the other), or with " "multiple back ends using domain-specific configuration files." msgstr "" #: ../keystone_integrate_with_ldap.rst:57 msgid "**To define the destination LDAP server**" msgstr "" #: ../keystone_integrate_with_ldap.rst:59 msgid "Define the destination LDAP server in the ``keystone.conf`` file:" msgstr "" #: ../keystone_integrate_with_ldap.rst:71 msgid "" "If your environment requires a dumb member entry, set ``use_dumb_member`` to " "true and configure the ``dumb_member`` variable." msgstr "" #: ../keystone_integrate_with_ldap.rst:86 msgid "**Query option**" msgstr "" #: ../keystone_integrate_with_ldap.rst:91 msgid "" "Use ``query_scope`` to control the scope of data presented through LDAP " "(search only the first level or an entire sub-tree)." 
msgstr "" #: ../keystone_integrate_with_ldap.rst:94 msgid "" "Use ``page_size`` to control the maximum results per page. A value of zero " "disables paging." msgstr "" #: ../keystone_integrate_with_ldap.rst:96 msgid "" "Use ``alias_dereferencing`` to control the LDAP dereferencing option for " "queries." msgstr "" #: ../keystone_integrate_with_ldap.rst:98 msgid "" "Use ``chase_referrals`` to override the system's default referral chasing " "behavior for queries." msgstr "" #: ../keystone_integrate_with_ldap.rst:109 msgid "**Debug**" msgstr "" #: ../keystone_integrate_with_ldap.rst:111 msgid "" "Use ``debug_level`` to set the LDAP debugging level for LDAP calls. A value " "of zero means that debugging is not enabled." msgstr "" #: ../keystone_integrate_with_ldap.rst:121 msgid "" "This value is a bitmask; consult your LDAP documentation for possible values." msgstr "" #: ../keystone_integrate_with_ldap.rst:124 msgid "**Connection pooling**" msgstr "" #: ../keystone_integrate_with_ldap.rst:126 msgid "" "Use ``use_pool`` to enable LDAP connection pooling. Configure the connection " "pool size, maximum number of retries, reconnect trials, timeout (-1 " "indicates an indefinite wait), and lifetime in seconds." msgstr "" #: ../keystone_integrate_with_ldap.rst:140 msgid "**Connection pooling for end user authentication**" msgstr "" #: ../keystone_integrate_with_ldap.rst:142 msgid "" "Use ``use_auth_pool`` to enable LDAP connection pooling for end user " "authentication. Configure the connection pool size and lifetime in seconds." msgstr "" #: ../keystone_integrate_with_ldap.rst:153 msgid "" "When you have finished the configuration, restart the OpenStack Identity " "service." msgstr "" #: ../keystone_integrate_with_ldap.rst:158 msgid "" "During the service restart, authentication and authorization are unavailable." 
msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:2 msgid "Secure the OpenStack Identity service connection to an LDAP back end" msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:4 msgid "" "The Identity service supports the use of TLS to encrypt LDAP traffic. Before " "configuring this, you must first verify where your certificate authority " "file is located. For more information, see the `OpenStack Security Guide SSL " "introduction `_." msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:10 msgid "Once you verify the location of your certificate authority file:" msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:12 msgid "**To configure TLS encryption on LDAP traffic**" msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:14 msgid "Open the ``/etc/keystone/keystone.conf`` configuration file." msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:16 msgid "Find the ``[ldap]`` section." msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:18 msgid "" "In the ``[ldap]`` section, set the ``use_tls`` configuration key to " "``True``. Doing so will enable TLS." msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:21 msgid "" "Configure the Identity service to use your certificate authorities file. To " "do so, set the ``tls_cacertfile`` configuration key in the ``ldap`` section " "to the certificate authorities file's path." msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:27 msgid "" "You can also set the ``tls_cacertdir`` (also in the ``ldap`` section) to the " "directory where all certificate authorities files are kept. If both " "``tls_cacertfile`` and ``tls_cacertdir`` are set, then the latter will be " "ignored." msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:32 msgid "" "Specify what client certificate checks to perform on incoming TLS sessions " "from the LDAP server. 
To do so, set the ``tls_req_cert`` configuration key " "in the ``[ldap]`` section to ``demand``, ``allow``, or ``never``:" msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:40 msgid "" "``demand`` - The LDAP server always receives certificate requests. The " "session terminates if no certificate is provided, or if the certificate " "provided cannot be verified against the existing certificate authorities " "file." msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:44 msgid "" "``allow`` - The LDAP server always receives certificate requests. The " "session will proceed as normal even if a certificate is not provided. If a " "certificate is provided but it cannot be verified against the existing " "certificate authorities file, the certificate will be ignored and the " "session will proceed as normal." msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:50 msgid "``never`` - A certificate will never be requested." msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:52 msgid "" "On distributions that include openstack-config, you can configure TLS " "encryption on LDAP traffic by running the following commands instead." msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:66 msgid "" "``CA_FILE`` is the absolute path to the certificate authorities file that " "should be used to encrypt LDAP traffic." msgstr "" #: ../keystone_secure_identity_to_ldap_backend.rst:69 msgid "" "``CERT_BEHAVIOR`` specifies what client certificate checks to perform on an " "incoming TLS session from the LDAP server (``demand``, ``allow``, or " "``never``)." msgstr "" #: ../keystone_token-binding.rst:3 msgid "Configure Identity service for token binding" msgstr "" #: ../keystone_token-binding.rst:5 msgid "" "Token binding embeds information from an external authentication mechanism, " "such as a Kerberos server or X.509 certificate, inside a token. 
By using " "token binding, a client can enforce the use of a specified external " "authentication mechanism with the token. This additional security mechanism " "ensures that if a token is stolen, for example, it is not usable without " "external authentication." msgstr "" #: ../keystone_token-binding.rst:12 msgid "" "You configure the authentication types for a token binding in the ``keystone." "conf`` file:" msgstr "" #: ../keystone_token-binding.rst:20 msgid "or" msgstr "" #: ../keystone_token-binding.rst:27 msgid "Currently ``kerberos`` and ``x509`` are supported." msgstr "" #: ../keystone_token-binding.rst:29 msgid "" "To enforce checking of token binding, set the ``enforce_token_bind`` option " "to one of these modes:" msgstr "" #: ../keystone_token-binding.rst:33 msgid "Disables token bind checking." msgstr "" #: ../keystone_token-binding.rst:33 msgid "``disabled``" msgstr "" #: ../keystone_token-binding.rst:36 msgid "" "Enables bind checking. If a token is bound to an unknown authentication " "mechanism, the server ignores it. This mode is the default." msgstr "" #: ../keystone_token-binding.rst:38 msgid "``permissive``" msgstr "" #: ../keystone_token-binding.rst:41 msgid "" "Enables bind checking. If a token is bound to an unknown authentication " "mechanism, the server rejects it." msgstr "" #: ../keystone_token-binding.rst:42 msgid "``strict``" msgstr "" #: ../keystone_token-binding.rst:45 msgid "" "Enables bind checking. Requires use of at least one authentication mechanism " "for tokens." msgstr "" #: ../keystone_token-binding.rst:46 msgid "``required``" msgstr "" #: ../keystone_token-binding.rst:49 msgid "" "Enables bind checking. Requires use of kerberos as the authentication " "mechanism for tokens:" msgstr "" #: ../keystone_token-binding.rst:55 msgid "``kerberos``" msgstr "" #: ../keystone_token-binding.rst:58 msgid "" "Enables bind checking. 
Requires use of X.509 as the authentication mechanism " "for tokens:" msgstr "" #: ../keystone_token-binding.rst:63 msgid "``x509``" msgstr "" #: ../keystone_tokens.rst:3 msgid "Keystone tokens" msgstr "" #: ../keystone_tokens.rst:5 msgid "" "Tokens are used to authenticate and authorize your interactions with the " "various OpenStack APIs. Tokens come in many flavors, representing various " "authorization scopes and sources of identity. There are also several " "different \"token providers\", each with their own user experience, " "performance, and deployment characteristics." msgstr "" #: ../keystone_tokens.rst:12 msgid "Authorization scopes" msgstr "" #: ../keystone_tokens.rst:14 msgid "" "Tokens can express your authorization in different scopes. You likely have " "different sets of roles, in different projects, and in different domains. " "While tokens always express your identity, they may only ever express one " "set of roles in one authorization scope at a time." msgstr "" #: ../keystone_tokens.rst:19 msgid "" "Each level of authorization scope is useful for certain types of operations " "in certain OpenStack services, and the scopes are not interchangeable." msgstr "" #: ../keystone_tokens.rst:23 msgid "Unscoped tokens" msgstr "" #: ../keystone_tokens.rst:25 msgid "" "An unscoped token contains no service catalog, roles, project scope, or " "domain scope. Its primary use case is simply to prove your identity to " "keystone at a later time (usually to generate scoped tokens), without " "repeatedly presenting your original credentials." 
msgstr "" #: ../keystone_tokens.rst:30 msgid "The following conditions must be met to receive an unscoped token:" msgstr "" #: ../keystone_tokens.rst:32 msgid "" "You must not specify an authorization scope in your authentication request " "(for example, on the command line with arguments such as ``--os-project-" "name`` or ``--os-domain-id``)," msgstr "" #: ../keystone_tokens.rst:36 msgid "" "Your identity must not have a \"default project\" associated with it upon " "which you also have role assignments, and thus authorization." msgstr "" #: ../keystone_tokens.rst:40 msgid "Project-scoped tokens" msgstr "" #: ../keystone_tokens.rst:42 msgid "" "Project-scoped tokens are the bread and butter of OpenStack. They express " "your authorization to operate in a specific tenancy of the cloud and are " "useful to authenticate yourself when working with most other services." msgstr "" #: ../keystone_tokens.rst:46 msgid "" "They contain a service catalog, a set of roles, and details of the project " "upon which you have authorization." msgstr "" #: ../keystone_tokens.rst:50 msgid "Domain-scoped tokens" msgstr "" #: ../keystone_tokens.rst:52 msgid "" "Domain-scoped tokens have limited use cases in OpenStack. They express your " "authorization to operate at a domain level, above that of the users and " "projects contained therein (typically as a domain-level administrator). " "Depending on Keystone's configuration, they are useful for working with a " "single domain in Keystone." msgstr "" #: ../keystone_tokens.rst:58 msgid "" "They contain a limited service catalog (only those services which do not " "explicitly require per-project endpoints), a set of roles, and details of " "the domain upon which you have authorization." msgstr "" #: ../keystone_tokens.rst:62 msgid "" "They can also be used to work with domain-level concerns in other services, " "such as to configure domain-wide quotas that apply to all users or projects " "in a specific domain." 
msgstr "" #: ../keystone_tokens.rst:67 msgid "Token providers" msgstr "" #: ../keystone_tokens.rst:69 msgid "" "The token type issued by keystone is configurable through the ``/etc/" "keystone/keystone.conf`` file. Currently, there are four supported token " "types: ``UUID``, ``fernet``, ``PKI``, and ``PKIZ``." msgstr "" #: ../keystone_tokens.rst:74 msgid "UUID tokens" msgstr "" #: ../keystone_tokens.rst:76 msgid "" "UUID was the first token type supported and is currently the default token " "provider. UUID tokens are 32 bytes in length and must be persisted in a back " "end. Clients must pass their UUID token to the Identity service in order to " "validate it." msgstr "" #: ../keystone_tokens.rst:82 msgid "Fernet tokens" msgstr "" #: ../keystone_tokens.rst:84 msgid "" "The fernet token format was introduced in the OpenStack Kilo release. Unlike " "the other token types mentioned in this document, fernet tokens do not need " "to be persisted in a back end. ``AES256`` encryption is used to protect the " "information stored in the token and integrity is verified with a ``SHA256 " "HMAC`` signature. Only the Identity service should have access to the keys " "used to encrypt and decrypt fernet tokens. Like UUID tokens, fernet tokens " "must be passed back to the Identity service in order to validate them. For " "more information on the fernet token type, see the :doc:" "`keystone_fernet_token_faq`." msgstr "" #: ../keystone_tokens.rst:94 msgid "PKI and PKIZ tokens" msgstr "" #: ../keystone_tokens.rst:96 msgid "" "PKI tokens are signed documents that contain the authentication context, as " "well as the service catalog. Depending on the size of the OpenStack " "deployment, these tokens can be very long. The Identity service uses public/" "private key pairs and certificates in order to create and validate PKI " "tokens." msgstr "" #: ../keystone_tokens.rst:101 msgid "" "The same concepts from PKI tokens apply to PKIZ tokens. 
The only difference " "between the two is that PKIZ tokens are compressed to help mitigate the size " "issues of PKI. For more information on the certificate setup for PKI and " "PKIZ tokens, see the :doc:`keystone_certificates_for_pki`." msgstr "" #: ../keystone_use_trusts.rst:3 msgid "Use trusts" msgstr "" #: ../keystone_use_trusts.rst:5 msgid "" "OpenStack Identity manages authentication and authorization. A trust is an " "OpenStack Identity extension that enables delegation and, optionally, " "impersonation through ``keystone``. A trust extension defines a relationship " "between:" msgstr "" #: ../keystone_use_trusts.rst:11 msgid "**Trustor**" msgstr "" #: ../keystone_use_trusts.rst:11 msgid "The user delegating a limited set of their own rights to another user." msgstr "" #: ../keystone_use_trusts.rst:14 msgid "The user to whom the trust is delegated, for a limited time." msgstr "" #: ../keystone_use_trusts.rst:16 msgid "" "The trust can optionally allow the trustee to impersonate the trustor. For " "security reasons, some safeguards are in place. For example, if a trustor " "loses a given role, any trusts the user issued with that role, and the " "related tokens, are automatically revoked." msgstr "" #: ../keystone_use_trusts.rst:19 msgid "**Trustee**" msgstr "" #: ../keystone_use_trusts.rst:21 msgid "The delegation parameters are:" msgstr "" #: ../keystone_use_trusts.rst:24 msgid "**User ID**" msgstr "" #: ../keystone_use_trusts.rst:24 msgid "The user IDs for the trustor and trustee." msgstr "" #: ../keystone_use_trusts.rst:27 msgid "" "The delegated privileges are a combination of a tenant ID and a number of " "roles that must be a subset of the roles assigned to the trustor." msgstr "" #: ../keystone_use_trusts.rst:31 msgid "" "If you omit all privileges, nothing is delegated. You cannot delegate " "everything." 
msgstr "" #: ../keystone_use_trusts.rst:32 msgid "**Privileges**" msgstr "" #: ../keystone_use_trusts.rst:35 msgid "" "Defines whether or not the delegation is recursive. If it is recursive, " "defines the delegation chain length." msgstr "" #: ../keystone_use_trusts.rst:38 msgid "Specify one of the following values:" msgstr "" #: ../keystone_use_trusts.rst:40 msgid "``0``. The delegate cannot delegate these permissions further." msgstr "" #: ../keystone_use_trusts.rst:42 msgid "" "``1``. The delegate can delegate the permissions to any set of delegates but " "the latter cannot delegate further." msgstr "" #: ../keystone_use_trusts.rst:45 msgid "**Delegation depth**" msgstr "" #: ../keystone_use_trusts.rst:45 msgid "``inf``. The delegation is infinitely recursive." msgstr "" #: ../keystone_use_trusts.rst:48 msgid "A list of endpoints associated with the delegation." msgstr "" #: ../keystone_use_trusts.rst:50 msgid "" "This parameter further restricts the delegation to the specified endpoints " "only. If you omit the endpoints, the delegation is useless. A special value " "of ``all_endpoints`` allows the trust to be used by all endpoints associated " "with the delegated tenant." msgstr "" #: ../keystone_use_trusts.rst:53 msgid "**Endpoints**" msgstr "" #: ../keystone_use_trusts.rst:55 msgid "**Duration**" msgstr "" #: ../keystone_use_trusts.rst:56 msgid "(Optional) Comprised of the start time and end time for the trust." msgstr "" # #-#-#-#-# networking.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# shared_file_systems_networking.pot (Administrator Guide 0.9) #-#-#-#-# #: ../networking.rst:5 ../shared_file_systems_networking.rst:5 msgid "Networking" msgstr "" #: ../networking.rst:7 msgid "" "Learn OpenStack Networking concepts, architecture, and basic and advanced " "``neutron`` and ``nova`` command-line interface (CLI) commands." 
msgstr "" #: ../networking_adv-config.rst:3 msgid "Advanced configuration options" msgstr "" #: ../networking_adv-config.rst:5 msgid "" "This section describes advanced configuration options for various system " "components. For example, options whose default values work but that you " "might want to customize. After installing from packages, " "``$NEUTRON_CONF_DIR`` is ``/etc/neutron``." msgstr "" #: ../networking_adv-config.rst:11 msgid "L3 metering agent" msgstr "" #: ../networking_adv-config.rst:13 msgid "" "You can run an L3 metering agent that enables layer-3 traffic metering. In " "general, you should launch the metering agent on all nodes that run the L3 " "agent:" msgstr "" #: ../networking_adv-config.rst:22 msgid "" "You must configure a driver that matches the plug-in that runs on the " "service. The driver adds metering to the routing interface." msgstr "" #: ../networking_adv-config.rst:26 msgid "Option" msgstr "" #: ../networking_adv-config.rst:28 msgid "**Open vSwitch**" msgstr "" #: ../networking_adv-config.rst:30 ../networking_adv-config.rst:36 msgid "interface\_driver ($NEUTRON\_CONF\_DIR/metering\_agent.ini)" msgstr "" #: ../networking_adv-config.rst:31 msgid "neutron.agent.linux.interface.OVSInterfaceDriver" msgstr "" #: ../networking_adv-config.rst:34 msgid "**Linux Bridge**" msgstr "" #: ../networking_adv-config.rst:37 msgid "neutron.agent.linux.interface.BridgeInterfaceDriver" msgstr "" #: ../networking_adv-config.rst:42 msgid "L3 metering driver" msgstr "" #: ../networking_adv-config.rst:44 msgid "" "You must configure any driver that implements the metering abstraction. " "Currently, the only available implementation uses iptables for metering." msgstr "" #: ../networking_adv-config.rst:53 msgid "L3 metering service driver" msgstr "" #: ../networking_adv-config.rst:55 msgid "" "To enable L3 metering, you must set the following option in the ``neutron.
"conf`` file on the host that runs ``neutron-server``:" msgstr "" #: ../networking_adv-features.rst:0 msgid "**Basic L3 Operations**" msgstr "" #: ../networking_adv-features.rst:0 msgid "**Basic L3 operations**" msgstr "" #: ../networking_adv-features.rst:0 msgid "**Basic VMware NSX QoS operations**" msgstr "" #: ../networking_adv-features.rst:0 msgid "**Basic security group operations**" msgstr "" #: ../networking_adv-features.rst:0 msgid "**Big Switch Router rule attributes**" msgstr "" #: ../networking_adv-features.rst:0 msgid "" "**Configuration options for tuning operational status synchronization in the " "NSX plug-in**" msgstr "" #: ../networking_adv-features.rst:0 msgid "**Provider network attributes**" msgstr "" #: ../networking_adv-features.rst:5 msgid "Advanced features through API extensions" msgstr "" #: ../networking_adv-features.rst:7 msgid "" "Several plug-ins implement API extensions that provide capabilities similar " "to what was available in ``nova-network``. These plug-ins are likely to be " "of interest to the OpenStack community." msgstr "" #: ../networking_adv-features.rst:12 msgid "Provider networks" msgstr "" #: ../networking_adv-features.rst:14 msgid "" "Networks can be categorized as either tenant networks or provider networks. " "Tenant networks are created by normal users and details about how they are " "physically realized are hidden from those users. Provider networks are " "created with administrative credentials, specifying the details of how the " "network is physically realized, usually to match some existing network in " "the data center." msgstr "" #: ../networking_adv-features.rst:21 msgid "" "Provider networks enable administrators to create networks that map directly " "to the physical networks in the data center. This is commonly used to give " "tenants direct access to a public network that can be used to reach the " "Internet. 
It might also be used to integrate with VLANs in the network that " "already have a defined meaning (for example, enable a VM from the marketing " "department to be placed on the same VLAN as bare-metal marketing hosts in " "the same data center)." msgstr "" #: ../networking_adv-features.rst:29 msgid "" "The provider extension allows administrators to explicitly manage the " "relationship between Networking virtual networks and underlying physical " "mechanisms such as VLANs and tunnels. When this extension is supported, " "Networking client users with administrative privileges see additional " "provider attributes on all virtual networks and are able to specify these " "attributes in order to create provider networks." msgstr "" #: ../networking_adv-features.rst:36 msgid "" "The provider extension is supported by the Open vSwitch and Linux Bridge " "plug-ins. Configuration of these plug-ins requires familiarity with this " "extension." msgstr "" #: ../networking_adv-features.rst:41 msgid "Terminology" msgstr "" #: ../networking_adv-features.rst:43 msgid "" "A number of terms are used in the provider extension and in the " "configuration of plug-ins supporting the provider extension:" msgstr "" #: ../networking_adv-features.rst:46 msgid "**Provider extension terminology**" msgstr "" #: ../networking_adv-features.rst:49 msgid "Term" msgstr "" #: ../networking_adv-features.rst:51 msgid "**virtual network**" msgstr "" #: ../networking_adv-features.rst:51 msgid "" "A Networking L2 network (identified by a UUID and optional name) whose ports " "can be attached as vNICs to Compute instances and to various Networking " "agents. The Open vSwitch and Linux Bridge plug-ins each support several " "different mechanisms to realize virtual networks." 
msgstr "" #: ../networking_adv-features.rst:58 msgid "**physical network**" msgstr "" #: ../networking_adv-features.rst:58 msgid "" "A network connecting virtualization hosts (such as compute nodes) with each " "other and with other network resources. Each physical network might support " "multiple virtual networks. The provider extension and the plug-in " "configurations identify physical networks using simple string names." msgstr "" #: ../networking_adv-features.rst:65 msgid "**tenant network**" msgstr "" #: ../networking_adv-features.rst:65 msgid "" "A virtual network that a tenant or an administrator creates. The physical " "details of the network are not exposed to the tenant." msgstr "" #: ../networking_adv-features.rst:69 msgid "**provider network**" msgstr "" #: ../networking_adv-features.rst:69 msgid "" "A virtual network administratively created to map to a specific network in " "the data center, typically to enable direct access to non-OpenStack " "resources on that network. Tenants can be given access to provider networks." msgstr "" #: ../networking_adv-features.rst:75 msgid "**VLAN network**" msgstr "" #: ../networking_adv-features.rst:75 msgid "" "A virtual network implemented as packets on a specific physical network " "containing IEEE 802.1Q headers with a specific VID field value. VLAN " "networks sharing the same physical network are isolated from each other at " "L2 and can even have overlapping IP address spaces. Each distinct physical " "network supporting VLAN networks is treated as a separate VLAN trunk, with a " "distinct space of VID values. Valid VID values are 1 through 4094." msgstr "" #: ../networking_adv-features.rst:86 msgid "**flat network**" msgstr "" #: ../networking_adv-features.rst:86 msgid "" "A virtual network implemented as packets on a specific physical network " "containing no IEEE 802.1Q header. Each physical network can realize at most " "one flat network." 
msgstr "" #: ../networking_adv-features.rst:91 msgid "**local network**" msgstr "" #: ../networking_adv-features.rst:91 msgid "" "A virtual network that allows communication within each host, but not across " "a network. Local networks are intended mainly for single-node test " "scenarios, but can have other uses." msgstr "" #: ../networking_adv-features.rst:96 msgid "**GRE network**" msgstr "" #: ../networking_adv-features.rst:96 msgid "" "A virtual network implemented as network packets encapsulated using GRE. GRE " "networks are also referred to as *tunnels*. GRE tunnel packets are routed by " "the IP routing table for the host, so GRE networks are not associated by " "Networking with specific physical networks." msgstr "" #: ../networking_adv-features.rst:103 msgid "**Virtual Extensible LAN (VXLAN) network**" msgstr "" #: ../networking_adv-features.rst:104 msgid "" "VXLAN is a proposed encapsulation protocol for running an overlay network on " "existing Layer 3 infrastructure. An overlay network is a virtual network " "that is built on top of existing Layer 2 and Layer 3 network technologies to " "support elastic compute architectures." msgstr "" #: ../networking_adv-features.rst:112 msgid "" "The ML2, Open vSwitch, and Linux Bridge plug-ins support VLAN networks, flat " "networks, and local networks. Only the ML2 and Open vSwitch plug-ins " "currently support GRE and VXLAN networks, provided that the required " "features exist in the host's Linux kernel, Open vSwitch, and iproute2 " "packages." 
msgstr "" #: ../networking_adv-features.rst:119 msgid "Provider attributes" msgstr "" #: ../networking_adv-features.rst:121 msgid "" "The provider extension extends the Networking network resource with these " "attributes:" msgstr "" # #-#-#-#-# networking_adv-features.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_config-identity.pot (Administrator Guide 0.9) #-#-#-#-# #: ../networking_adv-features.rst:129 ../networking_adv-features.rst:711 #: ../networking_config-identity.rst:167 msgid "Attribute name" msgstr "" #: ../networking_adv-features.rst:131 msgid "Default Value" msgstr "" #: ../networking_adv-features.rst:133 msgid "provider: network\\_type" msgstr "" #: ../networking_adv-features.rst:134 ../networking_adv-features.rst:145 msgid "String" msgstr "" #: ../networking_adv-features.rst:135 ../networking_adv-features.rst:154 msgid "N/A" msgstr "" #: ../networking_adv-features.rst:136 msgid "" "The physical mechanism by which the virtual network is implemented. Possible " "values are ``flat``, ``vlan``, ``local``, ``gre``, and ``vxlan``, " "corresponding to flat networks, VLAN networks, local networks, GRE networks, " "and VXLAN networks as defined above. All types of provider networks can be " "created by administrators, while tenant networks can be implemented as " "``vlan``, ``gre``, ``vxlan``, or ``local`` network types depending on plug-" "in configuration." msgstr "" #: ../networking_adv-features.rst:144 msgid "provider: physical_network" msgstr "" #: ../networking_adv-features.rst:146 msgid "" "If a physical network named \"default\" has been configured and if provider:" "network_type is ``flat`` or ``vlan``, then \"default\" is used." msgstr "" #: ../networking_adv-features.rst:149 msgid "" "The name of the physical network over which the virtual network is " "implemented for flat and VLAN networks. Not applicable to the ``local`` or " "``gre`` network types." 
msgstr "" #: ../networking_adv-features.rst:152 msgid "provider:segmentation_id" msgstr "" #: ../networking_adv-features.rst:153 msgid "Integer" msgstr "" #: ../networking_adv-features.rst:155 msgid "" "For VLAN networks, the VLAN VID on the physical network that realizes the " "virtual network. Valid VLAN VIDs are 1 through 4094. For GRE networks, the " "tunnel ID. Valid tunnel IDs are any 32 bit unsigned integer. Not applicable " "to the ``flat`` or ``local`` network types." msgstr "" #: ../networking_adv-features.rst:161 msgid "" "To view or set provider extended attributes, a client must be authorized for " "the ``extension:provider_network:view`` and ``extension:provider_network:" "set`` actions in the Networking policy configuration. The default Networking " "configuration authorizes both actions for users with the admin role. An " "authorized client or an administrative user can view and set the provider " "extended attributes through Networking API calls. See the section called :" "ref:`Authentication and authorization` for details on policy configuration." msgstr "" #: ../networking_adv-features.rst:173 msgid "L3 routing and NAT" msgstr "" #: ../networking_adv-features.rst:175 msgid "" "The Networking API provides abstract L2 network segments that are decoupled " "from the technology used to implement the L2 network. Networking includes an " "API extension that provides abstract L3 routers that API users can " "dynamically provision and configure. These Networking routers can connect " "multiple L2 Networking networks and can also provide a gateway that connects " "one or more private L2 networks to a shared external network. For example, a " "public network for access to the Internet. See the `OpenStack Configuration " "Reference `_ for " "details on common models of deploying Networking L3 routers." 
msgstr "" #: ../networking_adv-features.rst:186 msgid "" "The L3 router provides basic NAT capabilities on gateway ports that uplink " "the router to external networks. This router SNATs all traffic by default " "and supports floating IPs, which creates a static one-to-one mapping from a " "public IP on the external network to a private IP on one of the other " "subnets attached to the router. This allows a tenant to selectively expose " "VMs on private networks to other hosts on the external network (and often to " "all hosts on the Internet). You can allocate and map floating IPs from one " "port to another, as needed." msgstr "" #: ../networking_adv-features.rst:196 msgid "Basic L3 operations" msgstr "" #: ../networking_adv-features.rst:198 msgid "" "External networks are visible to all users. However, the default policy " "settings enable only administrative users to create, update, and delete " "external networks." msgstr "" #: ../networking_adv-features.rst:202 msgid "" "This table shows example neutron commands that enable you to complete basic " "L3 operations:" msgstr "" # #-#-#-#-# networking_adv-features.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_config-agents.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_use.pot (Administrator Guide 0.9) #-#-#-#-# #: ../networking_adv-features.rst:209 ../networking_adv-features.rst:377 #: ../networking_adv-features.rst:512 ../networking_adv-features.rst:804 #: ../networking_config-agents.rst:490 ../networking_use.rst:47 #: ../networking_use.rst:123 ../networking_use.rst:241 msgid "Operation" msgstr "" #: ../networking_adv-features.rst:211 msgid "Creates external networks." msgstr "" #: ../networking_adv-features.rst:216 msgid "Lists external networks." msgstr "" #: ../networking_adv-features.rst:220 msgid "" "Creates an internal-only router that connects to multiple L2 networks " "privately." 
msgstr "" #: ../networking_adv-features.rst:231 msgid "" "An internal router port can have only one IPv4 subnet and multiple IPv6 " "subnets that belong to the same network ID. When you call ``router-interface-" "add`` with an IPv6 subnet, this operation adds the interface to an existing " "internal port with the same network ID. If a port with the same network ID " "does not exist, a new port is created." msgstr "" #: ../networking_adv-features.rst:236 msgid "" "Connects a router to an external network, which enables that router to act " "as a NAT gateway for external connectivity." msgstr "" #: ../networking_adv-features.rst:242 msgid "" "The router obtains an interface with the gateway_ip address of the subnet " "and this interface is attached to a port on the L2 Networking network " "associated with the subnet. The router also gets a gateway interface to the " "specified external network. This provides SNAT connectivity to the external " "network as well as support for floating IPs allocated on that external " "network. Commonly, an external network maps to a network in the provider." msgstr "" #: ../networking_adv-features.rst:250 msgid "Lists routers." msgstr "" #: ../networking_adv-features.rst:254 msgid "Shows information for a specified router." msgstr "" #: ../networking_adv-features.rst:258 msgid "Shows all internal interfaces for a router." msgstr "" #: ../networking_adv-features.rst:263 msgid "" "Identifies the PORT_ID that represents the VM NIC to which the floating IP " "should map." msgstr "" #: ../networking_adv-features.rst:269 msgid "" "This port must be on a Networking subnet that is attached to a router " "uplinked to the external network used to create the floating IP. " "Conceptually, this is because the router must be able to perform the " "Destination NAT (DNAT) rewriting of packets from the floating IP address " "(chosen from a subnet on the external network) to the internal fixed IP " "(chosen from a private subnet that is behind the router)." msgstr "" #: ../networking_adv-features.rst:276 msgid "Creates a floating IP address and associates it with a port." msgstr "" #: ../networking_adv-features.rst:282 msgid "Creates a floating IP on a specific subnet in the external network." msgstr "" #: ../networking_adv-features.rst:287 msgid "" "If there are multiple subnets in the external network, you can choose a " "specific subnet based on quality and costs." msgstr "" #: ../networking_adv-features.rst:290 msgid "" "Creates a floating IP address and associates it with a port, in a single " "step." msgstr "" #: ../networking_adv-features.rst:294 msgid "Lists floating IPs." msgstr "" #: ../networking_adv-features.rst:298 msgid "Finds the floating IP for a specified VM port." msgstr "" #: ../networking_adv-features.rst:302 msgid "Disassociates a floating IP address." msgstr "" #: ../networking_adv-features.rst:306 msgid "Deletes the floating IP address." msgstr "" #: ../networking_adv-features.rst:310 msgid "Clears the gateway." msgstr "" #: ../networking_adv-features.rst:314 msgid "Removes the interfaces from the router." msgstr "" #: ../networking_adv-features.rst:319 msgid "" "If this subnet ID is the last subnet on the port, this operation deletes the " "port itself." msgstr "" #: ../networking_adv-features.rst:321 msgid "Deletes the router." msgstr "" #: ../networking_adv-features.rst:327 msgid "Security groups" msgstr "" #: ../networking_adv-features.rst:329 msgid "" "Security groups and security group rules allow administrators and tenants to " "specify the type of traffic and direction (ingress/egress) that is allowed " "to pass through a port. A security group is a container for security group " "rules."
msgstr "" #: ../networking_adv-features.rst:334 msgid "" "When a port is created in Networking, it is associated with a security group. " "If a security group is not specified, the port is associated with a 'default' " "security group. By default, this group drops all ingress traffic and allows " "all egress. Rules can be added to this group in order to change the behavior." msgstr "" #: ../networking_adv-features.rst:340 msgid "" "To use the Compute security group APIs or use Compute to orchestrate the " "creation of ports for instances on specific security groups, you must " "complete additional configuration. You must configure the ``/etc/nova/nova." "conf`` file and set the ``security_group_api=neutron`` option on every node " "that runs nova-compute and nova-api. After you make this change, restart " "nova-api and nova-compute to pick up this change. Then, you can use both the " "Compute and OpenStack Networking security group APIs at the same time." msgstr "" #: ../networking_adv-features.rst:351 msgid "" "To use the Compute security group API with Networking, the Networking plug-" "in must implement the security group API. The following plug-ins currently " "implement this: ML2, Open vSwitch, Linux Bridge, NEC, and VMware NSX." msgstr "" #: ../networking_adv-features.rst:356 msgid "" "You must configure the correct firewall driver in the ``securitygroup`` " "section of the plug-in/agent configuration file. Some plug-ins and agents, " "such as Linux Bridge Agent and Open vSwitch Agent, use the no-operation " "driver as the default, which results in non-working security groups." msgstr "" #: ../networking_adv-features.rst:362 msgid "" "When using the security group API through Compute, security groups are " "applied to all ports on an instance. The reason for this is that Compute " "security group APIs are instance-based and not port-based, as in Networking."
msgstr "" #: ../networking_adv-features.rst:368 msgid "Basic security group operations" msgstr "" #: ../networking_adv-features.rst:370 msgid "" "This table shows example neutron commands that enable you to complete basic " "security group operations:" msgstr "" #: ../networking_adv-features.rst:379 msgid "Creates a security group for our web servers." msgstr "" #: ../networking_adv-features.rst:383 msgid "Lists security groups." msgstr "" #: ../networking_adv-features.rst:387 msgid "Creates a security group rule to allow port 80 ingress." msgstr "" #: ../networking_adv-features.rst:392 msgid "Lists security group rules." msgstr "" #: ../networking_adv-features.rst:396 msgid "Deletes a security group rule." msgstr "" #: ../networking_adv-features.rst:400 msgid "Deletes a security group." msgstr "" #: ../networking_adv-features.rst:404 msgid "Creates a port and associates two security groups." msgstr "" #: ../networking_adv-features.rst:408 msgid "Removes security groups from a port." msgstr "" #: ../networking_adv-features.rst:414 msgid "Basic Load-Balancer-as-a-Service operations" msgstr "" #: ../networking_adv-features.rst:418 msgid "" "The Load-Balancer-as-a-Service (LBaaS) API provisions and configures load " "balancers. The reference implementation is based on the HAProxy software " "load balancer." msgstr "" #: ../networking_adv-features.rst:422 msgid "" "This list shows example neutron commands that enable you to complete basic " "LBaaS operations:" msgstr "" #: ../networking_adv-features.rst:425 msgid "Creates a load balancer pool by using a specific provider." msgstr "" #: ../networking_adv-features.rst:427 msgid "" ":option:`--provider` is an optional argument. If not used, the pool is " "created with the default provider for the LBaaS service. You should configure " "the default provider in the ``[service_providers]`` section of the ``neutron." "conf`` file. If no default provider is specified for LBaaS, the :option:`--" "provider` parameter is required for pool creation." msgstr "" #: ../networking_adv-features.rst:438 msgid "Associates two web servers with the pool." msgstr "" #: ../networking_adv-features.rst:445 msgid "" "Creates a health monitor that checks to make sure our instances are still " "running on the specified protocol-port." msgstr "" #: ../networking_adv-features.rst:452 msgid "Associates a health monitor with the pool." msgstr "" #: ../networking_adv-features.rst:458 msgid "" "Creates a virtual IP (VIP) address that, when accessed through the load " "balancer, directs the requests to one of the pool members." msgstr "" #: ../networking_adv-features.rst:467 msgid "Plug-in specific extensions" msgstr "" #: ../networking_adv-features.rst:469 msgid "" "Each vendor can choose to implement additional API extensions to the core " "API. This section describes the extensions for each plug-in." msgstr "" #: ../networking_adv-features.rst:473 msgid "VMware NSX extensions" msgstr "" #: ../networking_adv-features.rst:475 msgid "These sections explain NSX plug-in extensions." msgstr "" #: ../networking_adv-features.rst:478 msgid "VMware NSX QoS extension" msgstr "" #: ../networking_adv-features.rst:480 msgid "" "The VMware NSX QoS extension rate-limits network ports to guarantee a " "specific amount of bandwidth for each port. This extension, by default, is " "only accessible by a tenant with an admin role but is configurable through " "the ``policy.json`` file. To use this extension, create a queue and specify " "the min/max bandwidth rates (kbps) and optionally set the QoS Marking and " "DSCP value (if your network fabric uses these values to make forwarding " "decisions). Once created, you can associate a queue with a network. Then, " "when ports are created on that network, they are automatically associated " "with the specific queue size that was associated with the " "network. 
Because one queue size for every port on a network might not be " "optimal, a scaling factor from the nova flavor ``rxtx_factor`` is passed in " "from Compute when creating the port to scale the queue." msgstr "" #: ../networking_adv-features.rst:494 msgid "" "Lastly, if you want to set a specific baseline QoS policy for the amount of " "bandwidth a single port can use (unless a network queue is specified with " "the network a port is created on), a default queue can be created in " "Networking, which then causes newly created ports to be associated with a " "queue of that size times the rxtx scaling factor. Note that after a network " "or default queue is specified, queues are added to ports that are " "subsequently created but are not added to existing ports." msgstr "" #: ../networking_adv-features.rst:503 msgid "Basic VMware NSX QoS operations" msgstr "" #: ../networking_adv-features.rst:505 msgid "" "This table shows example neutron commands that enable you to complete basic " "queue operations:" msgstr "" #: ../networking_adv-features.rst:514 msgid "Creates QoS queue (admin-only)." msgstr "" #: ../networking_adv-features.rst:518 msgid "Associates a queue with a network." msgstr "" #: ../networking_adv-features.rst:522 msgid "Creates a default system queue." msgstr "" #: ../networking_adv-features.rst:526 msgid "Lists QoS queues." msgstr "" #: ../networking_adv-features.rst:530 msgid "Deletes a QoS queue." msgstr "" #: ../networking_adv-features.rst:536 msgid "VMware NSX provider networks extension" msgstr "" #: ../networking_adv-features.rst:538 msgid "" "Provider networks can be implemented in different ways by the underlying NSX " "platform." msgstr "" #: ../networking_adv-features.rst:541 msgid "" "The *FLAT* and *VLAN* network types use bridged transport connectors. These " "network types enable the attachment of a large number of ports. To handle " "the increased scale, the NSX plug-in can back a single OpenStack Network " "with a chain of NSX logical switches. You can specify the maximum number of " "ports on each logical switch in this chain with the " "``max_lp_per_bridged_ls`` parameter, which has a default value of 5,000." msgstr "" #: ../networking_adv-features.rst:548 msgid "" "The recommended value for this parameter varies with the NSX version running " "in the back-end, as shown in the following table." msgstr "" #: ../networking_adv-features.rst:551 msgid "**Recommended values for max_lp_per_bridged_ls**" msgstr "" #: ../networking_adv-features.rst:554 msgid "NSX version" msgstr "" #: ../networking_adv-features.rst:554 msgid "Recommended Value" msgstr "" #: ../networking_adv-features.rst:556 msgid "2.x" msgstr "" #: ../networking_adv-features.rst:556 msgid "64" msgstr "" #: ../networking_adv-features.rst:558 msgid "3.0.x" msgstr "" #: ../networking_adv-features.rst:558 ../networking_adv-features.rst:560 msgid "5,000" msgstr "" #: ../networking_adv-features.rst:560 msgid "3.1.x" msgstr "" #: ../networking_adv-features.rst:562 msgid "10,000" msgstr "" #: ../networking_adv-features.rst:562 msgid "3.2.x" msgstr "" #: ../networking_adv-features.rst:565 msgid "" "In addition to these network types, the NSX plug-in also supports a special " "*l3_ext* network type, which maps external networks to specific NSX gateway " "services as discussed in the next section." msgstr "" #: ../networking_adv-features.rst:570 msgid "VMware NSX L3 extension" msgstr "" #: ../networking_adv-features.rst:572 msgid "" "NSX exposes its L3 capabilities through gateway services, which are usually " "configured out of band from OpenStack. To use NSX with L3 capabilities, " "first create an L3 gateway service in the NSX Manager. Next, in ``/etc/" "neutron/plugins/vmware/nsx.ini`` set ``default_l3_gw_service_uuid`` to this " "value. By default, routers are mapped to this gateway service."
msgstr "" #: ../networking_adv-features.rst:580 msgid "VMware NSX L3 extension operations" msgstr "" #: ../networking_adv-features.rst:582 msgid "Create an external network and map it to a specific NSX gateway service:" msgstr "" #: ../networking_adv-features.rst:589 msgid "Terminate traffic on a specific VLAN from an NSX gateway service:" msgstr "" #: ../networking_adv-features.rst:597 msgid "Operational status synchronization in the VMware NSX plug-in" msgstr "" #: ../networking_adv-features.rst:599 msgid "" "Starting with the Havana release, the VMware NSX plug-in provides an " "asynchronous mechanism for retrieving the operational status for neutron " "resources from the NSX back-end; this applies to *network*, *port*, and " "*router* resources." msgstr "" #: ../networking_adv-features.rst:604 msgid "" "The back-end is polled periodically and the status for every resource is " "retrieved; then the status in the Networking database is updated only for " "the resources for which a status change occurred. As operational status is " "now retrieved asynchronously, performance for ``GET`` operations is " "consistently improved." msgstr "" #: ../networking_adv-features.rst:610 msgid "" "Data to retrieve from the back-end is divided into chunks in order to avoid " "expensive API requests; this is achieved by leveraging the NSX API's " "response paging capabilities. The minimum chunk size can be specified using " "a configuration option; the actual chunk size is then determined dynamically " "according to the total number of resources to retrieve, the interval between " "two synchronization task runs, and the minimum delay between two subsequent " "requests to the NSX back-end." msgstr "" #: ../networking_adv-features.rst:618 msgid "" "The operational status synchronization can be tuned or disabled using the " "configuration options reported in this table; it is, however, worth noting " "that the default values work fine in most cases."
msgstr "" #: ../networking_adv-features.rst:626 msgid "Option name" msgstr "" #: ../networking_adv-features.rst:628 msgid "Default value" msgstr "" #: ../networking_adv-features.rst:629 msgid "Type and constraints" msgstr "" #: ../networking_adv-features.rst:631 msgid "``state_sync_interval``" msgstr "" #: ../networking_adv-features.rst:632 ../networking_adv-features.rst:641 #: ../networking_adv-features.rst:648 ../networking_adv-features.rst:655 #: ../networking_adv-features.rst:666 msgid "``nsx_sync``" msgstr "" #: ../networking_adv-features.rst:633 msgid "10 seconds" msgstr "" #: ../networking_adv-features.rst:634 ../networking_adv-features.rst:657 msgid "Integer; no constraint." msgstr "" #: ../networking_adv-features.rst:635 msgid "" "Interval in seconds between two runs of the synchronization task. If the " "synchronization task takes more than ``state_sync_interval`` seconds to " "execute, a new instance of the task is started as soon as the other is " "completed. Setting the value for this option to 0 will disable the " "synchronization task." msgstr "" #: ../networking_adv-features.rst:640 msgid "``max_random_sync_delay``" msgstr "" #: ../networking_adv-features.rst:642 msgid "0 seconds" msgstr "" #: ../networking_adv-features.rst:643 msgid "Integer. Must not exceed ``min_sync_req_delay``." msgstr "" #: ../networking_adv-features.rst:644 msgid "" "When different from zero, a random delay between 0 and " "``max_random_sync_delay`` will be added before processing the next chunk." msgstr "" #: ../networking_adv-features.rst:647 msgid "``min_sync_req_delay``" msgstr "" #: ../networking_adv-features.rst:649 msgid "1 second" msgstr "" #: ../networking_adv-features.rst:650 msgid "Integer. Must not exceed ``state_sync_interval``." msgstr "" #: ../networking_adv-features.rst:651 msgid "" "The value of this option can be tuned according to the observed load on the " "NSX controllers. 
Lower values will result in faster synchronization, but " "might increase the load on the controller cluster." msgstr "" #: ../networking_adv-features.rst:654 msgid "``min_chunk_size``" msgstr "" #: ../networking_adv-features.rst:656 msgid "500 resources" msgstr "" #: ../networking_adv-features.rst:658 msgid "" "Minimum number of resources to retrieve from the back-end for each " "synchronization chunk. The expected number of synchronization chunks is " "given by the ratio between ``state_sync_interval`` and " "``min_sync_req_delay``. The size of a chunk might increase if the total " "number of resources is such that more than ``min_chunk_size`` resources must " "be fetched in one chunk with the current number of chunks." msgstr "" #: ../networking_adv-features.rst:665 msgid "``always_read_status``" msgstr "" #: ../networking_adv-features.rst:667 msgid "False" msgstr "" #: ../networking_adv-features.rst:668 msgid "Boolean; no constraint." msgstr "" #: ../networking_adv-features.rst:669 msgid "" "When this option is enabled, the operational status will always be retrieved " "from the NSX back-end at every ``GET`` request. In this case, it is advisable " "to disable the synchronization task." msgstr "" #: ../networking_adv-features.rst:673 msgid "" "When running multiple OpenStack Networking server instances, the status " "synchronization task should not run on every node; doing so sends " "unnecessary traffic to the NSX back-end and performs unnecessary DB " "operations. Set the ``state_sync_interval`` configuration option to a non-" "zero value exclusively on a node designated for back-end status " "synchronization." msgstr "" #: ../networking_adv-features.rst:680 msgid "" "The ``fields=status`` parameter in Networking API requests always triggers " "an explicit query to the NSX back-end, even when you enable asynchronous " "state synchronization. For example, ``GET /v2.0/networks/NET_ID?" "fields=status&fields=name``." 
msgstr "" #: ../networking_adv-features.rst:686 msgid "Big Switch plug-in extensions" msgstr "" #: ../networking_adv-features.rst:688 msgid "" "This section explains the Big Switch neutron plug-in-specific extension." msgstr "" #: ../networking_adv-features.rst:691 msgid "Big Switch router rules" msgstr "" #: ../networking_adv-features.rst:693 msgid "" "Big Switch allows router rules to be added to each tenant router. These " "rules can be used to enforce routing policies such as denying traffic " "between subnets or traffic to external networks. By enforcing these at the " "router level, network segmentation policies can be enforced across many VMs " "that have differing security groups." msgstr "" #: ../networking_adv-features.rst:700 msgid "Router rule attributes" msgstr "" #: ../networking_adv-features.rst:702 msgid "" "Each tenant router has a set of router rules associated with it. Each router " "rule has the attributes in this table. Router rules and their attributes can " "be set using the :command:`neutron router-update` command, through the " "horizon interface or the Networking API." 
msgstr "" # #-#-#-#-# networking_adv-features.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_config-identity.pot (Administrator Guide 0.9) #-#-#-#-# #: ../networking_adv-features.rst:712 ../networking_config-identity.rst:168 msgid "Required" msgstr "" #: ../networking_adv-features.rst:713 msgid "Input type" msgstr "" #: ../networking_adv-features.rst:715 msgid "source" msgstr "" # #-#-#-#-# networking_adv-features.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# #: ../networking_adv-features.rst:716 ../networking_adv-features.rst:721 #: ../networking_adv-features.rst:726 ../telemetry-data-collection.rst:1116 #: ../telemetry-data-collection.rst:1120 msgid "Yes" msgstr "" #: ../networking_adv-features.rst:717 ../networking_adv-features.rst:722 msgid "A valid CIDR or one of the keywords 'any' or 'external'" msgstr "" #: ../networking_adv-features.rst:718 msgid "" "The network that a packet's source IP must match for the rule to be applied." msgstr "" #: ../networking_adv-features.rst:720 msgid "destination" msgstr "" #: ../networking_adv-features.rst:723 msgid "" "The network that a packet's destination IP must match for the rule to be " "applied." msgstr "" #: ../networking_adv-features.rst:725 msgid "action" msgstr "" #: ../networking_adv-features.rst:727 msgid "'permit' or 'deny'" msgstr "" #: ../networking_adv-features.rst:728 msgid "" "Determines whether or not the matched packets will be allowed to cross the " "router." msgstr "" #: ../networking_adv-features.rst:730 msgid "nexthop" msgstr "" # #-#-#-#-# networking_adv-features.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# #: ../networking_adv-features.rst:731 ../telemetry-data-collection.rst:1124 #: ../telemetry-data-collection.rst:1128 msgid "No" msgstr "" #: ../networking_adv-features.rst:732 msgid "" "A plus-separated (+) list of next-hop IP addresses. 
For example, " "``1.1.1.1+1.1.1.2``." msgstr "" #: ../networking_adv-features.rst:734 msgid "" "Overrides the default virtual router used to handle traffic for packets that " "match the rule." msgstr "" #: ../networking_adv-features.rst:738 msgid "Order of rule processing" msgstr "" #: ../networking_adv-features.rst:740 msgid "" "The order of router rules has no effect. Overlapping rules are evaluated " "using longest prefix matching on the source and destination fields. The " "source field is matched first, so it always takes precedence over the " "destination field. In other words, longest prefix matching is used on the " "destination field only if there are multiple matching rules with the same " "source." msgstr "" #: ../networking_adv-features.rst:748 msgid "Big Switch router rules operations" msgstr "" #: ../networking_adv-features.rst:750 msgid "" "Router rules are configured with a router update operation in OpenStack " "Networking. The update overrides any previous rules, so all rules must be " "provided at the same time." msgstr "" #: ../networking_adv-features.rst:754 msgid "" "Update a router with rules to permit traffic by default but block traffic " "from external networks to the 10.10.10.0/24 subnet:" msgstr "" #: ../networking_adv-features.rst:763 msgid "Specify alternate next-hop addresses for a specific subnet:" msgstr "" #: ../networking_adv-features.rst:771 msgid "Block traffic between two subnets while allowing everything else:" msgstr "" #: ../networking_adv-features.rst:780 msgid "L3 metering" msgstr "" #: ../networking_adv-features.rst:782 msgid "" "The L3 metering API extension enables administrators to configure IP ranges " "and assign a specified label to them to be able to measure traffic that goes " "through a virtual router." msgstr "" #: ../networking_adv-features.rst:786 msgid "" "The L3 metering extension is decoupled from the technology that implements " "the measurement. 
Two abstractions have been added: one is the metering label " "that can contain metering rules. Because a metering label is associated with " "a tenant, all virtual routers in this tenant are associated with this label." msgstr "" #: ../networking_adv-features.rst:793 msgid "Basic L3 metering operations" msgstr "" #: ../networking_adv-features.rst:795 msgid "Only administrators can manage the L3 metering labels and rules." msgstr "" #: ../networking_adv-features.rst:797 msgid "" "This table shows example :command:`neutron` commands that enable you to " "complete basic L3 metering operations:" msgstr "" #: ../networking_adv-features.rst:806 msgid "Creates a metering label." msgstr "" #: ../networking_adv-features.rst:810 msgid "Lists metering labels." msgstr "" #: ../networking_adv-features.rst:814 msgid "Shows information for a specified label." msgstr "" #: ../networking_adv-features.rst:819 msgid "Deletes a metering label." msgstr "" #: ../networking_adv-features.rst:824 msgid "Creates a metering rule." msgstr "" #: ../networking_adv-features.rst:837 msgid "Lists all metering label rules." msgstr "" #: ../networking_adv-features.rst:841 msgid "Shows information for a specified label rule." msgstr "" #: ../networking_adv-features.rst:845 msgid "Deletes a metering label rule." msgstr "" #: ../networking_adv-features.rst:849 msgid "Lists the value of created metering label rules." msgstr "" #: ../networking_adv-operational-features.rst:3 msgid "Advanced operational features" msgstr "" #: ../networking_adv-operational-features.rst:6 msgid "Logging settings" msgstr "" #: ../networking_adv-operational-features.rst:8 msgid "" "Networking components use the Python logging module for logging. Logging " "configuration can be provided in ``neutron.conf`` or as command-line " "options. Command-line options override those in ``neutron.conf``." 
msgstr "" #: ../networking_adv-operational-features.rst:12 msgid "" "To configure logging for Networking components, use one of these methods:" msgstr "" #: ../networking_adv-operational-features.rst:15 msgid "Provide logging settings in a logging configuration file." msgstr "" #: ../networking_adv-operational-features.rst:17 msgid "" "See `Python logging how-to `__ to " "learn more about logging." msgstr "" #: ../networking_adv-operational-features.rst:21 msgid "Provide logging settings in ``neutron.conf``." msgstr "" #: ../networking_adv-operational-features.rst:44 msgid "" "Notifications can be sent when Networking resources such as networks, " "subnets, and ports are created, updated, or deleted." msgstr "" #: ../networking_adv-operational-features.rst:48 msgid "Notification options" msgstr "" #: ../networking_adv-operational-features.rst:50 msgid "" "To support the DHCP agent, the ``rpc_notifier`` driver must be set. To set " "up the notification, edit notification options in ``neutron.conf``:" msgstr "" #: ../networking_adv-operational-features.rst:64 msgid "Setting cases" msgstr "" #: ../networking_adv-operational-features.rst:67 msgid "Logging and RPC" msgstr "" #: ../networking_adv-operational-features.rst:69 msgid "" "These options configure the Networking server to send notifications through " "logging and RPC. The logging options are described in the OpenStack " "Configuration Reference. RPC notifications go to the ``notifications.info`` " "queue bound to a topic exchange defined by ``control_exchange`` in ``neutron." "conf``." msgstr "" #: ../networking_adv-operational-features.rst:75 msgid "**Notification System Options**" msgstr "" #: ../networking_adv-operational-features.rst:77 msgid "" "A notification can be sent when a network, subnet, or port is created, " "updated, or deleted. The notification system options are:" msgstr "" #: ../networking_adv-operational-features.rst:81 msgid "" "Defines the driver or drivers to handle the sending of a notification. 
The " "six available options are:" msgstr "" #: ../networking_adv-operational-features.rst:84 msgid "``messaging``" msgstr "" #: ../networking_adv-operational-features.rst:85 msgid "Send notifications using the 1.0 message format." msgstr "" #: ../networking_adv-operational-features.rst:87 msgid "" "Send notifications using the 2.0 message format (with a message envelope)." msgstr "" #: ../networking_adv-operational-features.rst:87 msgid "``messagingv2``" msgstr "" #: ../networking_adv-operational-features.rst:89 msgid "``routing``" msgstr "" #: ../networking_adv-operational-features.rst:90 msgid "Configurable routing notifier (by priority or event_type)." msgstr "" #: ../networking_adv-operational-features.rst:91 msgid "``log``" msgstr "" #: ../networking_adv-operational-features.rst:92 msgid "Publish notifications using Python logging infrastructure." msgstr "" #: ../networking_adv-operational-features.rst:93 msgid "``test``" msgstr "" #: ../networking_adv-operational-features.rst:94 msgid "Store notifications in memory for test verification." msgstr "" #: ../networking_adv-operational-features.rst:95 msgid "``noop``" msgstr "" #: ../networking_adv-operational-features.rst:95 msgid "``notification_driver``" msgstr "" #: ../networking_adv-operational-features.rst:96 msgid "Disable sending notifications entirely." msgstr "" #: ../networking_adv-operational-features.rst:97 msgid "``default_notification_level``" msgstr "" #: ../networking_adv-operational-features.rst:98 msgid "Is used to form topic names or to set a logging level." msgstr "" #: ../networking_adv-operational-features.rst:99 msgid "``default_publisher_id``" msgstr "" #: ../networking_adv-operational-features.rst:100 msgid "Is a part of the notification payload." msgstr "" #: ../networking_adv-operational-features.rst:102 msgid "" "AMQP topic used for OpenStack notifications. They can be comma-separated " "values. The actual topic names will be the values of " "``default_notification_level``." 
msgstr "" #: ../networking_adv-operational-features.rst:103 msgid "``notification_topics``" msgstr "" #: ../networking_adv-operational-features.rst:106 msgid "" "This is an option defined in oslo.messaging. It is the default exchange " "under which topics are scoped. May be overridden by an exchange name " "specified in the ``transport_url`` option. It is a string value." msgstr "" #: ../networking_adv-operational-features.rst:108 msgid "``control_exchange``" msgstr "" #: ../networking_adv-operational-features.rst:110 msgid "Below is a sample ``neutron.conf`` configuration file:" msgstr "" #: ../networking_arch.rst:3 msgid "Networking architecture" msgstr "" #: ../networking_arch.rst:5 msgid "" "Before you deploy Networking, it is useful to understand the Networking " "services and how they interact with the OpenStack components." msgstr "" #: ../networking_arch.rst:9 msgid "Overview" msgstr "" #: ../networking_arch.rst:11 msgid "" "Networking is a standalone component in the OpenStack modular architecture. " "It is positioned alongside OpenStack components such as Compute, Image " "service, Identity, or Dashboard. Like those components, a deployment of " "Networking often involves deploying several services to a variety of hosts." msgstr "" #: ../networking_arch.rst:17 msgid "" "The Networking server uses the neutron-server daemon to expose the " "Networking API and enable administration of the configured Networking plug-" "in. Typically, the plug-in requires access to a database for persistent " "storage (also similar to other OpenStack services)." msgstr "" #: ../networking_arch.rst:22 msgid "" "If your deployment uses a controller host to run centralized Compute " "components, you can deploy the Networking server to that same host. However, " "Networking is entirely standalone and can be deployed to a dedicated host. 
" "Depending on your configuration, Networking can also include the following " "agents:" msgstr "" #: ../networking_arch.rst:29 msgid "Agent" msgstr "" #: ../networking_arch.rst:31 msgid "**plug-in agent** (``neutron-*-agent``)" msgstr "" #: ../networking_arch.rst:32 msgid "" "Runs on each hypervisor to perform local vSwitch configuration. The agent " "that runs depends on the plug-in that you use. Certain plug-ins do not " "require an agent." msgstr "" #: ../networking_arch.rst:37 msgid "**dhcp agent** (``neutron-dhcp-agent``)" msgstr "" #: ../networking_arch.rst:38 msgid "" "Provides DHCP services to tenant networks. Required by certain plug-ins." msgstr "" #: ../networking_arch.rst:41 msgid "**l3 agent** (``neutron-l3-agent``)" msgstr "" #: ../networking_arch.rst:42 msgid "" "Provides L3/NAT forwarding to enable external network access for VMs on " "tenant networks. Required by certain plug-ins." msgstr "" #: ../networking_arch.rst:46 msgid "**metering agent** (``neutron-metering-agent``)" msgstr "" #: ../networking_arch.rst:47 msgid "Provides L3 traffic metering for tenant networks." msgstr "" #: ../networking_arch.rst:51 msgid "" "These agents interact with the main neutron process through RPC (for " "example, RabbitMQ or Qpid) or through the standard Networking API. In " "addition, Networking integrates with OpenStack components in a number of " "ways:" msgstr "" #: ../networking_arch.rst:56 msgid "" "Networking relies on the Identity service (keystone) for the authentication " "and authorization of all API requests." msgstr "" #: ../networking_arch.rst:59 msgid "" "Compute (nova) interacts with Networking through calls to its standard API. " "As part of creating a VM, the ``nova-compute`` service communicates with the " "Networking API to plug each virtual NIC on the VM into a particular network." 
msgstr "" #: ../networking_arch.rst:64 msgid "" "The dashboard (horizon) integrates with the Networking API, enabling " "administrators and tenant users to create and manage network services " "through a web-based GUI." msgstr "" #: ../networking_arch.rst:69 msgid "VMware NSX integration" msgstr "" #: ../networking_arch.rst:71 msgid "" "OpenStack Networking uses the NSX plug-in to integrate with an existing " "VMware vCenter deployment. When installed on the network nodes, the NSX plug-" "in enables an NSX controller to centrally manage configuration settings and " "push them to managed network nodes. Network nodes are considered managed " "when they are added as hypervisors to the NSX controller." msgstr "" #: ../networking_arch.rst:78 msgid "" "The diagrams below depict some VMware NSX deployment examples. The first " "diagram illustrates the traffic flow between VMs on separate Compute nodes, " "and the second diagram between two VMs on a single Compute node. Note the " "placement of the VMware NSX plug-in and the neutron-server service on the " "network node. The green arrow indicates the management relationship between " "the NSX controller and the network node." msgstr "" #: ../networking_auth.rst:5 msgid "Authentication and authorization" msgstr "" #: ../networking_auth.rst:7 msgid "" "Networking uses the Identity service as the default authentication service. " "When the Identity service is enabled, users who submit requests to the " "Networking service must provide an authentication token in the " "``X-Auth-Token`` request header. Users obtain this token by authenticating " "with the Identity service endpoint. For more information about " "authentication with the Identity service, see `OpenStack Identity service " "API v2.0 Reference `__. When the Identity " "service is enabled, it is not mandatory to specify the tenant ID for " "resources in create requests because the tenant ID is derived from the " "authentication token." 
msgstr "" #: ../networking_auth.rst:19 msgid "" "The default authorization settings only allow administrative users to create " "resources on behalf of a different tenant. Networking uses information " "received from Identity to authorize user requests. Networking handles two " "kinds of authorization policies:" msgstr "" #: ../networking_auth.rst:24 msgid "" "**Operation-based** policies specify access criteria for specific " "operations, possibly with fine-grained control over specific attributes." msgstr "" #: ../networking_auth.rst:28 msgid "" "**Resource-based** policies specify whether access to a specific resource is " "granted or not according to the permissions configured for the resource " "(currently available only for the network resource). The actual " "authorization policies enforced in Networking might vary from deployment to " "deployment." msgstr "" #: ../networking_auth.rst:34 msgid "" "The policy engine reads entries from the ``policy.json`` file. The actual " "location of this file might vary from distribution to distribution. Entries " "can be updated while the system is running, and no service restart is " "required. Every time the policy file is updated, the policies are " "automatically reloaded. Currently, the only way of updating such policies is " "to edit the policy file. In this section, the terms *policy* and *rule* " "refer to objects that are specified in the same way in the policy file. " "There are no syntax differences between a rule and a policy. A policy is " "something that is matched directly by the Networking policy engine. A rule " "is an element in a policy, which is evaluated. For instance, in " "``create_subnet: [[\"admin_or_network_owner\"]]``, *create_subnet* is a " "policy, and *admin_or_network_owner* is a rule." 
msgstr "" #: ../networking_auth.rst:48 msgid "" "Policies are triggered by the Networking policy engine whenever one of them " "matches a Networking API operation or a specific attribute being used in a " "given operation. For instance, the ``create_subnet`` policy is triggered " "every time a ``POST /v2.0/subnets`` request is sent to the Networking " "server; on the other hand, ``create_network:shared`` is triggered every time " "the *shared* attribute is explicitly specified (and set to a value different " "from its default) in a ``POST /v2.0/networks`` request. It is also worth " "mentioning that policies can also be related to specific API extensions; for " "instance, ``extension:provider_network:set`` is triggered if the attributes " "defined by the Provider Network extensions are specified in an API request." msgstr "" #: ../networking_auth.rst:61 msgid "" "An authorization policy can be composed of one or more rules. If multiple " "rules are specified, the policy succeeds if any of the rules " "evaluates successfully; if an API operation matches multiple policies, then " "all the policies must evaluate successfully. Also, authorization rules are " "recursive. Once a rule is matched, the rule(s) can be resolved to another " "rule, until a terminal rule is reached." msgstr "" #: ../networking_auth.rst:68 msgid "" "The Networking policy engine currently defines the following kinds of " "terminal rules:" msgstr "" #: ../networking_auth.rst:71 msgid "" "**Role-based rules** evaluate successfully if the user who submits the " "request has the specified role. For instance, ``\"role:admin\"`` is " "successful if the user who submits the request is an administrator." msgstr "" #: ../networking_auth.rst:75 msgid "" "**Field-based rules** evaluate successfully if a field of the resource " "specified in the current request matches a specific value. 
For instance, ``" "\"field:networks:shared=True\"`` is successful if the ``shared`` attribute " "of the ``network`` resource is set to true." msgstr "" #: ../networking_auth.rst:80 msgid "" "**Generic rules** compare an attribute in the resource with an attribute " "extracted from the user's security credentials and evaluate successfully if " "the comparison is successful. For instance, ``\"tenant_id:%(tenant_id)s\"`` " "is successful if the tenant identifier in the resource is equal to the " "tenant identifier of the user submitting the request." msgstr "" #: ../networking_auth.rst:87 msgid "This extract is from the default ``policy.json`` file:" msgstr "" #: ../networking_auth.rst:89 msgid "" "A rule that evaluates successfully if the current user is an administrator " "or the owner of the resource specified in the request (tenant identifier is " "equal)." msgstr "" #: ../networking_auth.rst:126 msgid "" "The default policy that is always evaluated if an API operation does not " "match any of the policies in ``policy.json``." msgstr "" #: ../networking_auth.rst:163 msgid "" "This policy evaluates successfully if either *admin\\_or\\_owner* or " "*shared* evaluates successfully." msgstr "" #: ../networking_auth.rst:177 msgid "" "This policy restricts the ability to manipulate the *shared* attribute for a " "network to administrators only." msgstr "" #: ../networking_auth.rst:201 msgid "" "This policy restricts the ability to manipulate the *mac\\_address* " "attribute for a port only to administrators and the owner of the network " "where the port is attached." msgstr "" #: ../networking_auth.rst:228 msgid "" "Some operations are restricted to administrators only. 
This " "example shows you how to modify a policy file to permit tenants to define " "networks, see their resources, and permit administrative users to perform " "all other operations:" msgstr "" #: ../networking_config-agents.rst:3 msgid "Configure neutron agents" msgstr "" #: ../networking_config-agents.rst:5 msgid "" "Plug-ins typically have requirements for particular software that must be " "run on each node that handles data packets. This includes any node that runs " "``nova-compute`` and nodes that run dedicated OpenStack Networking service " "agents such as ``neutron-dhcp-agent``, ``neutron-l3-agent``, " "``neutron-metering-agent``, or ``neutron-lbaas-agent``." msgstr "" #: ../networking_config-agents.rst:11 msgid "" "A data-forwarding node typically has a network interface with an IP address " "on the management network and another interface on the data network." msgstr "" #: ../networking_config-agents.rst:15 msgid "" "This section shows you how to install and configure a subset of the " "available plug-ins, which might include the installation of switching " "software (for example, ``Open vSwitch``) as well as agents used to " "communicate with the ``neutron-server`` process running elsewhere in the " "data center." msgstr "" #: ../networking_config-agents.rst:21 msgid "Configure data-forwarding nodes" msgstr "" #: ../networking_config-agents.rst:24 msgid "Node setup: NSX plug-in" msgstr "" #: ../networking_config-agents.rst:26 msgid "" "If you use the NSX plug-in, you must also install Open vSwitch on each data-" "forwarding node. However, you do not need to install an additional agent on " "each node." msgstr "" #: ../networking_config-agents.rst:32 msgid "" "It is critical that you run an Open vSwitch version that is compatible with " "the current version of the NSX Controller software. Do not use the Open " "vSwitch version that is installed by default on Ubuntu. 
Instead, use the " "Open vSwitch version that is provided on the VMware support portal for your " "NSX Controller version." msgstr "" #: ../networking_config-agents.rst:38 msgid "**To set up each node for the NSX plug-in**" msgstr "" #: ../networking_config-agents.rst:40 msgid "" "Ensure that each data-forwarding node has an IP address on the management " "network, and an IP address on the data network that is used for tunneling " "data traffic. For full details on configuring your forwarding node, see the " "``NSX Administrator Guide``." msgstr "" #: ../networking_config-agents.rst:45 msgid "" "Use the ``NSX Administrator Guide`` to add the node as a Hypervisor by using " "the NSX Manager GUI. Even if your forwarding node has no VMs and is only " "used for services agents like ``neutron-dhcp-agent`` or ``neutron-lbaas-" "agent``, it should still be added to NSX as a Hypervisor." msgstr "" #: ../networking_config-agents.rst:50 msgid "" "After following the NSX Administrator Guide, use the page for this " "Hypervisor in the NSX Manager GUI to confirm that the node is properly " "connected to the NSX Controller Cluster and that the NSX Controller Cluster " "can see the ``br-int`` integration bridge." msgstr "" #: ../networking_config-agents.rst:56 msgid "Configure DHCP agent" msgstr "" #: ../networking_config-agents.rst:58 msgid "" "The DHCP service agent is compatible with all existing plug-ins and is " "required for all deployments where VMs should automatically receive IP " "addresses through DHCP." msgstr "" #: ../networking_config-agents.rst:62 msgid "**To install and configure the DHCP agent**" msgstr "" #: ../networking_config-agents.rst:64 msgid "" "You must configure the host running the neutron-dhcp-agent as a data " "forwarding node according to the requirements for your plug-in." 
msgstr "" #: ../networking_config-agents.rst:67 msgid "Install the DHCP agent:" msgstr "" #: ../networking_config-agents.rst:73 msgid "" "Update any options in the ``/etc/neutron/dhcp_agent.ini`` file that depend " "on the plug-in in use. See the sub-sections." msgstr "" #: ../networking_config-agents.rst:78 msgid "" "If you reboot a node that runs the DHCP agent, you must run the :command:" "`neutron-ovs-cleanup` command before the ``neutron-dhcp-agent`` service " "starts." msgstr "" #: ../networking_config-agents.rst:82 msgid "" "On Red Hat, SUSE, and Ubuntu based systems, the ``neutron-ovs-cleanup`` " "service runs the :command:`neutron-ovs-cleanup` command automatically. " "However, on Debian-based systems, you must manually run this command or " "write your own system script that runs on boot before the ``neutron-dhcp-" "agent`` service starts." msgstr "" #: ../networking_config-agents.rst:88 msgid "" "The Networking DHCP agent can use the `dnsmasq `__ driver, which supports stateful and stateless DHCPv6 for subnets " "created with ``--ipv6_address_mode`` set to ``dhcpv6-stateful`` or ``dhcpv6-" "stateless``." msgstr "" #: ../networking_config-agents.rst:106 msgid "" "If no dnsmasq process for the subnet's network is launched, Networking " "launches a new one on the subnet's DHCP port in the ``qdhcp-XXX`` namespace. " "If a dnsmasq process is already running, it is restarted with a new " "configuration." msgstr "" #: ../networking_config-agents.rst:111 msgid "" "Networking updates the dnsmasq configuration and restarts the process when " "the subnet is updated." msgstr "" #: ../networking_config-agents.rst:116 msgid "For the DHCP agent to operate in IPv6 mode, use at least dnsmasq v2.63." msgstr "" #: ../networking_config-agents.rst:118 msgid "" "After a configured timeframe, networks uncouple from DHCP agents " "when the agents are no longer in use. You can configure the DHCP agent to " "automatically detach from a network when the agent is out of service, or no " "longer needed." 
msgstr "" #: ../networking_config-agents.rst:123 msgid "" "This feature applies to all plug-ins that support DHCP scaling. For more " "information, see the `DHCP agent configuration options `__ listed in the OpenStack " "Configuration Reference." msgstr "" #: ../networking_config-agents.rst:129 msgid "DHCP agent setup: OVS plug-in" msgstr "" #: ../networking_config-agents.rst:131 msgid "" "These DHCP agent options are required in the ``/etc/neutron/dhcp_agent.ini`` " "file for the OVS plug-in:" msgstr "" #: ../networking_config-agents.rst:141 msgid "DHCP agent setup: NSX plug-in" msgstr "" #: ../networking_config-agents.rst:143 msgid "" "These DHCP agent options are required in the ``/etc/neutron/dhcp_agent.ini`` " "file for the NSX plug-in:" msgstr "" #: ../networking_config-agents.rst:154 msgid "Configure L3 agent" msgstr "" #: ../networking_config-agents.rst:156 msgid "" "The OpenStack Networking service has a widely used API extension to allow " "administrators and tenants to create routers to interconnect L2 networks, " "and floating IPs to make ports on private networks publicly accessible." msgstr "" #: ../networking_config-agents.rst:161 msgid "" "Many plug-ins rely on the L3 service agent to implement the L3 " "functionality. However, the following plug-ins already have built-in L3 " "capabilities:" msgstr "" #: ../networking_config-agents.rst:165 msgid "" "Big Switch/Floodlight plug-in, which supports both the open source " "`Floodlight `__ controller and " "the proprietary Big Switch controller." msgstr "" #: ../networking_config-agents.rst:171 msgid "" "Only the proprietary BigSwitch controller implements L3 functionality. When " "using Floodlight as your OpenFlow controller, L3 functionality is not " "available." 
msgstr "" #: ../networking_config-agents.rst:175 msgid "IBM SDN-VE plug-in" msgstr "" #: ../networking_config-agents.rst:177 msgid "MidoNet plug-in" msgstr "" #: ../networking_config-agents.rst:179 msgid "NSX plug-in" msgstr "" #: ../networking_config-agents.rst:181 msgid "PLUMgrid plug-in" msgstr "" #: ../networking_config-agents.rst:185 msgid "" "Do not configure or use ``neutron-l3-agent`` if you use one of these plug-" "ins." msgstr "" #: ../networking_config-agents.rst:188 msgid "**To install the L3 agent for all other plug-ins**" msgstr "" #: ../networking_config-agents.rst:190 msgid "Install the ``neutron-l3-agent`` binary on the network node:" msgstr "" #: ../networking_config-agents.rst:196 msgid "" "To uplink the node that runs ``neutron-l3-agent`` to the external network, " "create a bridge named ``br-ex`` and attach the NIC for the external network " "to this bridge." msgstr "" #: ../networking_config-agents.rst:200 msgid "" "For example, with Open vSwitch and NIC eth1 connected to the external " "network, run:" msgstr "" #: ../networking_config-agents.rst:208 msgid "" "When the ``br-ex`` port is added to the ``eth1`` interface, external " "communication is interrupted. To avoid this, edit the ``/etc/network/" "interfaces`` file to contain the following information:" msgstr "" #: ../networking_config-agents.rst:232 msgid "" "The external bridge configuration address is the external IP address. This " "address and gateway should be configured in ``/etc/network/interfaces``." msgstr "" #: ../networking_config-agents.rst:236 msgid "After editing the configuration, restart ``br-ex``:" msgstr "" #: ../networking_config-agents.rst:242 msgid "" "Do not manually configure an IP address on the NIC connected to the external " "network for the node running ``neutron-l3-agent``. Rather, you must have a " "range of IP addresses from the external network that can be used by " "OpenStack Networking for routers that uplink to the external network. 
This " "range must be large enough to have an IP address for each router in the " "deployment, as well as each floating IP." msgstr "" #: ../networking_config-agents.rst:249 msgid "" "The ``neutron-l3-agent`` uses the Linux IP stack and iptables to perform L3 " "forwarding and NAT. In order to support multiple routers with potentially " "overlapping IP addresses, ``neutron-l3-agent`` defaults to using Linux " "network namespaces to provide isolated forwarding contexts. As a result, the " "IP addresses of routers are not visible simply by running the :command:`ip " "addr list` or :command:`ifconfig` command on the node. Similarly, you cannot " "directly :command:`ping` fixed IPs." msgstr "" #: ../networking_config-agents.rst:257 msgid "" "To do either of these things, you must run the command within a particular " "network namespace for the router. The namespace has the name ``qrouter-" "ROUTER_UUID``. These example commands run in the router namespace with UUID " "47af3868-0fa8-4447-85f6-1304de32153b:" msgstr "" #: ../networking_config-agents.rst:272 msgid "" "For iproute version 3.12.0 and above, network namespaces are configured " "to be deleted by default. This behavior can be changed for both DHCP and L3 " "agents. The configuration files are ``/etc/neutron/dhcp_agent.ini`` and ``/" "etc/neutron/l3_agent.ini`` respectively." msgstr "" #: ../networking_config-agents.rst:278 msgid "" "For DHCP namespaces, the configuration key is ``dhcp_delete_namespaces = " "True``. You can set it to ``False`` in case namespaces cannot be deleted " "cleanly on the host running the DHCP agent." msgstr "" #: ../networking_config-agents.rst:283 msgid "" "For L3 namespaces, the configuration key is ``router_delete_namespaces = " "True``. You can set it to ``False`` in case namespaces cannot be deleted " "cleanly on the host running the L3 agent." 
msgstr "" #: ../networking_config-agents.rst:290 msgid "" "If you reboot a node that runs the L3 agent, you must run the :command:" "`neutron-ovs-cleanup` command before the ``neutron-l3-agent`` service starts." msgstr "" #: ../networking_config-agents.rst:294 msgid "" "On Red Hat, SUSE and Ubuntu based systems, the neutron-ovs-cleanup service " "runs the :command:`neutron-ovs-cleanup` command automatically. However, on " "Debian-based systems, you must manually run this command or write your own " "system script that runs on boot before the neutron-l3-agent service starts." msgstr "" #: ../networking_config-agents.rst:300 msgid "" "**How routers are assigned to L3 agents** By default, a router is assigned " "to the L3 agent with the least number of routers (LeastRoutersScheduler). " "This can be changed by altering the ``router_scheduler_driver`` setting in " "the configuration file." msgstr "" #: ../networking_config-agents.rst:306 msgid "Configure metering agent" msgstr "" #: ../networking_config-agents.rst:308 msgid "The Neutron Metering agent resides beside neutron-l3-agent." 
msgstr "" #: ../networking_config-agents.rst:310 msgid "**To install the metering agent and configure the node**" msgstr "" #: ../networking_config-agents.rst:312 msgid "Install the agent by running:" msgstr "" #: ../networking_config-agents.rst:318 msgid "" "If you use one of the following plug-ins, you need to configure the metering " "agent with these lines as well:" msgstr "" #: ../networking_config-agents.rst:321 msgid "An OVS-based plug-in such as OVS, NSX, NEC, BigSwitch/Floodlight:" msgstr "" #: ../networking_config-agents.rst:327 msgid "A plug-in that uses LinuxBridge:" msgstr "" #: ../networking_config-agents.rst:334 msgid "To use the reference implementation, you must set:" msgstr "" #: ../networking_config-agents.rst:341 msgid "" "Set the ``service_plugins`` option in the ``/etc/neutron/neutron.conf`` file " "on the host that runs ``neutron-server``:" msgstr "" #: ../networking_config-agents.rst:348 msgid "" "If this option is already defined, add ``metering`` to the list, using a " "comma as a separator. For example:" msgstr "" #: ../networking_config-agents.rst:356 msgid "Configure Load-Balancer-as-a-Service (LBaaS v2)" msgstr "" #: ../networking_config-agents.rst:358 msgid "" "For the back end, use either Octavia or HAProxy. This example uses Octavia." msgstr "" #: ../networking_config-agents.rst:360 msgid "**To configure LBaaS V2**" msgstr "" #: ../networking_config-agents.rst:362 msgid "Install Octavia using your distribution's package manager." 
msgstr "" #: ../networking_config-agents.rst:365 msgid "" "Edit the ``/etc/neutron/neutron_lbaas.conf`` file and change the " "``service_provider`` parameter to enable Octavia:" msgstr "" #: ../networking_config-agents.rst:374 msgid "" "Edit the ``/etc/neutron/neutron.conf`` file and add the ``service_plugins`` " "parameter to enable the load-balancing plug-in:" msgstr "" #: ../networking_config-agents.rst:381 msgid "" "If this option is already defined, add the load-balancing plug-in to the " "list using a comma as a separator. For example:" msgstr "" # #-#-#-#-# networking_config-agents.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_introduction.pot (Administrator Guide 0.9) #-#-#-#-# #: ../networking_config-agents.rst:390 ../networking_introduction.rst:203 msgid "Create the required tables in the database:" msgstr "" #: ../networking_config-agents.rst:396 msgid "Restart the ``neutron-server`` service." msgstr "" #: ../networking_config-agents.rst:399 msgid "Enable load balancing in the Project section of the dashboard." msgstr "" #: ../networking_config-agents.rst:403 msgid "" "Horizon panels are enabled only for LBaaSV1. LBaaSV2 panels are still being " "developed." msgstr "" #: ../networking_config-agents.rst:406 msgid "" "By default, the ``enable_lb`` option is ``True`` in the `local_settings.py` " "file." msgstr "" #: ../networking_config-agents.rst:416 msgid "" "Apply the settings by restarting the web server. You can now view the Load " "Balancer management options in the Project view in the dashboard." msgstr "" #: ../networking_config-agents.rst:420 msgid "Configure Hyper-V L2 agent" msgstr "" #: ../networking_config-agents.rst:422 msgid "" "Before you install the OpenStack Networking Hyper-V L2 agent on a Hyper-V " "compute node, ensure the compute node has been configured correctly using " "these `instructions `__." 
msgstr "" #: ../networking_config-agents.rst:427 msgid "" "**To install the OpenStack Networking Hyper-V agent and configure the node**" msgstr "" #: ../networking_config-agents.rst:429 msgid "Download the OpenStack Networking code from the repository:" msgstr "" #: ../networking_config-agents.rst:436 msgid "Install the OpenStack Networking Hyper-V Agent:" msgstr "" #: ../networking_config-agents.rst:443 msgid "Copy the ``policy.json`` file:" msgstr "" #: ../networking_config-agents.rst:449 msgid "" "Create the ``C:\\etc\\neutron-hyperv-agent.conf`` file and add the proper " "configuration options and the `Hyper-V related options `__. Here is a sample config file:" msgstr "" #: ../networking_config-agents.rst:476 msgid "Start the OpenStack Networking Hyper-V agent:" msgstr "" #: ../networking_config-agents.rst:484 msgid "Basic operations on agents" msgstr "" #: ../networking_config-agents.rst:486 msgid "" "This table shows examples of Networking commands that enable you to complete " "basic operations on agents:" msgstr "" #: ../networking_config-agents.rst:492 msgid "List all available agents." msgstr "" #: ../networking_config-agents.rst:494 msgid "``$ neutron agent-list``" msgstr "" #: ../networking_config-agents.rst:496 msgid "Show information for a given agent." msgstr "" #: ../networking_config-agents.rst:499 msgid "``$ neutron agent-show AGENT_ID``" msgstr "" #: ../networking_config-agents.rst:501 msgid "" "Update the admin status and description for a specified agent. The command " "can be used to enable and disable agents by using the :option:`--admin-state-up` " "parameter set to ``False`` or ``True``." msgstr "" #: ../networking_config-agents.rst:508 msgid "``$ neutron agent-update --admin-state-up False AGENT_ID``" msgstr "" #: ../networking_config-agents.rst:511 msgid "Delete a given agent. Consider disabling the agent before deletion." 
msgstr "" #: ../networking_config-agents.rst:514 msgid "``$ neutron agent-delete AGENT_ID``" msgstr "" #: ../networking_config-agents.rst:517 msgid "**Basic operations on Networking agents**" msgstr "" #: ../networking_config-agents.rst:519 msgid "" "See the `OpenStack Command-Line Interface Reference `__ for more information on Networking " "commands." msgstr "" #: ../networking_config-identity.rst:0 msgid "**nova.conf API and credential settings**" msgstr "" #: ../networking_config-identity.rst:3 msgid "Configure Identity service for Networking" msgstr "" #: ../networking_config-identity.rst:5 msgid "**To configure the Identity service for use with Networking**" msgstr "" #: ../networking_config-identity.rst:7 msgid "Create the ``get_id()`` function" msgstr "" #: ../networking_config-identity.rst:9 msgid "" "The ``get_id()`` function stores the ID of created objects, and removes the " "need to copy and paste object IDs in later steps:" msgstr "" #: ../networking_config-identity.rst:12 msgid "Add the following function to your ``.bashrc`` file:" msgstr "" #: ../networking_config-identity.rst:20 msgid "Source the ``.bashrc`` file:" msgstr "" #: ../networking_config-identity.rst:26 msgid "Create the Networking service entry" msgstr "" #: ../networking_config-identity.rst:28 msgid "" "Networking must be available in the Compute service catalog. Create the " "service:" msgstr "" #: ../networking_config-identity.rst:36 msgid "Create the Networking service endpoint entry" msgstr "" #: ../networking_config-identity.rst:38 msgid "" "The way that you create a Networking endpoint entry depends on whether you " "are using the SQL or the template catalog driver:" msgstr "" #: ../networking_config-identity.rst:41 msgid "" "If you are using the ``SQL driver``, run the following command with the " "specified region (``$REGION``), IP address of the Networking server (``" "$IP``), and service ID (``$NEUTRON_SERVICE_ID``, obtained in the previous " "step)." 
msgstr "" #: ../networking_config-identity.rst:63 msgid "" "If you are using the ``template driver``, specify the following parameters " "in your Compute catalog template file (``default_catalog.templates``), along " "with the region (``$REGION``) and IP address of the Networking server (``" "$IP``)." msgstr "" #: ../networking_config-identity.rst:84 msgid "Create the Networking service user" msgstr "" #: ../networking_config-identity.rst:86 msgid "" "You must provide admin user credentials that Compute and some internal " "Networking components can use to access the Networking API. Create a special " "``service`` tenant and a ``neutron`` user within this tenant, and assign the " "``admin`` role to this user." msgstr "" #: ../networking_config-identity.rst:91 msgid "Create the ``admin`` role:" msgstr "" #: ../networking_config-identity.rst:97 msgid "Create the ``neutron`` user:" msgstr "" #: ../networking_config-identity.rst:104 msgid "Create the ``service`` tenant:" msgstr "" #: ../networking_config-identity.rst:111 msgid "Establish the relationship among the tenant, user, and role:" msgstr "" #: ../networking_config-identity.rst:118 msgid "" "For information about how to create service entries and users, see the " "OpenStack Installation Guide for your distribution (`docs.openstack.org " "`__)." msgstr "" #: ../networking_config-identity.rst:125 msgid "" "If you use Networking, do not run the Compute ``nova-network`` service (as " "you do in traditional Compute deployments). Instead, Compute delegates most " "network-related decisions to Networking." msgstr "" #: ../networking_config-identity.rst:131 msgid "" "Uninstall ``nova-network`` and reboot any physical nodes that have been " "running ``nova-network`` before using them to run Networking. Inadvertently " "running the ``nova-network`` process while using Networking can cause " "problems, as can stale iptables rules pushed down by previously running " "``nova-network``." 
msgstr "" #: ../networking_config-identity.rst:137 msgid "" "Compute proxies tenant-facing API calls to manage security groups and " "floating IPs to Networking APIs. However, operator-facing tools such as " "``nova-manage``, are not proxied and should not be used." msgstr "" #: ../networking_config-identity.rst:143 msgid "" "When you configure networking, you must use this guide. Do not rely on " "Compute networking documentation or past experience with Compute. If a :" "command:`nova` command or configuration option related to networking is not " "mentioned in this guide, the command is probably not supported for use with " "Networking. In particular, you cannot use CLI tools like ``nova-manage`` and " "``nova`` to manage networks or IP addressing, including both fixed and " "floating IPs, with Networking." msgstr "" #: ../networking_config-identity.rst:151 msgid "" "To ensure that Compute works properly with Networking (rather than the " "legacy ``nova-network`` mechanism), you must adjust settings in the ``nova." "conf`` configuration file." msgstr "" #: ../networking_config-identity.rst:156 msgid "Networking API and credential configuration" msgstr "" #: ../networking_config-identity.rst:158 msgid "" "Each time you provision or de-provision a VM in Compute, ``nova-\\*`` " "services communicate with Networking using the standard API. For this to " "happen, you must configure the following items in the ``nova.conf`` file " "(used by each ``nova-compute`` and ``nova-api`` instance)." msgstr "" #: ../networking_config-identity.rst:169 msgid "``[DEFAULT] use_neutron``" msgstr "" #: ../networking_config-identity.rst:170 msgid "" "Modify from the default to ``True`` to indicate that Networking should be " "used rather than the traditional nova-network networking model." 
msgstr "" #: ../networking_config-identity.rst:173 msgid "``[neutron] url``" msgstr "" #: ../networking_config-identity.rst:174 msgid "" "Update to the host name/IP and port of the neutron-server instance for this " "deployment." msgstr "" #: ../networking_config-identity.rst:176 msgid "``[neutron] auth_strategy``" msgstr "" #: ../networking_config-identity.rst:177 msgid "Keep the default ``keystone`` value for all production deployments." msgstr "" #: ../networking_config-identity.rst:178 msgid "``[neutron] admin_tenant_name``" msgstr "" #: ../networking_config-identity.rst:179 msgid "" "Update to the name of the service tenant created in the above section on " "Identity configuration." msgstr "" #: ../networking_config-identity.rst:181 msgid "``[neutron] admin_username``" msgstr "" #: ../networking_config-identity.rst:182 msgid "" "Update to the name of the user created in the above section on Identity " "configuration." msgstr "" #: ../networking_config-identity.rst:184 msgid "``[neutron] admin_password``" msgstr "" #: ../networking_config-identity.rst:185 msgid "" "Update to the password of the user created in the above section on Identity " "configuration." msgstr "" #: ../networking_config-identity.rst:187 msgid "``[neutron] admin_auth_url``" msgstr "" #: ../networking_config-identity.rst:188 msgid "" "Update to the Identity server IP and port. This is the Identity (keystone) " "admin API server IP and port value, and not the Identity service API IP and " "port." msgstr "" #: ../networking_config-identity.rst:193 msgid "Configure security groups" msgstr "" #: ../networking_config-identity.rst:195 msgid "" "The Networking service provides security group functionality using a " "mechanism that is more flexible and powerful than the security group " "capabilities built into Compute. Therefore, if you use Networking, you " "should always disable built-in security groups and proxy all security group " "calls to the Networking API. 
If you do not, security policies will conflict " "by being simultaneously applied by both services." msgstr "" #: ../networking_config-identity.rst:202 msgid "" "To proxy security groups to Networking, use the following configuration " "values in the ``nova.conf`` file:" msgstr "" #: ../networking_config-identity.rst:205 msgid "**nova.conf security group settings**" msgstr "" # #-#-#-#-# networking_config-identity.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# networking_multi-dhcp-agents.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# shared_file_systems_share_replication.pot (Administrator Guide 0.9) #-#-#-#-# #: ../networking_config-identity.rst:208 ../networking_config-identity.rst:230 #: ../networking_multi-dhcp-agents.rst:54 #: ../shared_file_systems_share_replication.rst:62 msgid "Configuration" msgstr "" #: ../networking_config-identity.rst:208 ../networking_config-identity.rst:230 msgid "Item" msgstr "" #: ../networking_config-identity.rst:210 msgid "" "Update to ``nova.virt.firewall.NoopFirewallDriver``, so that nova-compute " "does not perform iptables-based filtering itself." msgstr "" #: ../networking_config-identity.rst:210 msgid "``firewall_driver``" msgstr "" #: ../networking_config-identity.rst:216 msgid "Configure metadata" msgstr "" #: ../networking_config-identity.rst:218 msgid "" "The Compute service allows VMs to query metadata associated with a VM by " "making a web request to a special 169.254.169.254 address. Networking " "supports proxying those requests to nova-api, even when the requests are " "made from isolated networks, or from multiple networks that use overlapping " "IP addresses." msgstr "" #: ../networking_config-identity.rst:224 msgid "" "To enable proxying the requests, you must update the following fields in " "``[neutron]`` section in the ``nova.conf``." 
msgstr "" #: ../networking_config-identity.rst:227 msgid "**nova.conf metadata settings**" msgstr "" #: ../networking_config-identity.rst:232 msgid "" "Update to ``true``, otherwise nova-api will not properly respond to requests " "from the neutron-metadata-agent." msgstr "" #: ../networking_config-identity.rst:232 msgid "``service_metadata_proxy``" msgstr "" #: ../networking_config-identity.rst:236 msgid "" "Update to a string \"password\" value. You must also configure the same " "value in the ``metadata_agent.ini`` file, to authenticate requests made for " "metadata." msgstr "" #: ../networking_config-identity.rst:236 msgid "``metadata_proxy_shared_secret``" msgstr "" #: ../networking_config-identity.rst:241 msgid "" "The default value of an empty string in both files will allow metadata to " "function, but will not be secure if any non-trusted entities have access to " "the metadata APIs exposed by nova-api." msgstr "" #: ../networking_config-identity.rst:250 msgid "" "As a precaution, even when using ``metadata_proxy_shared_secret``, we " "recommend that you do not expose metadata using the same nova-api instances " "that are used for tenants. Instead, you should run a dedicated set of nova-" "api instances for metadata that are available only on your management " "network. Whether a given nova-api instance exposes metadata APIs is " "determined by the value of ``enabled_apis`` in its ``nova.conf``." msgstr "" #: ../networking_config-identity.rst:259 msgid "Example nova.conf (for nova-compute and nova-api)" msgstr "" #: ../networking_config-identity.rst:261 msgid "" "Example values for the above settings, assuming a cloud controller node " "running Compute and Networking with an IP address of 192.168.1.2:" msgstr "" #: ../networking_config-plugins.rst:3 msgid "Plug-in configurations" msgstr "" #: ../networking_config-plugins.rst:5 msgid "" "For configurations options, see `Networking configuration options `__ in Configuration Reference. 
These " "sections explain how to configure specific plug-ins." msgstr "" #: ../networking_config-plugins.rst:11 msgid "Configure Big Switch (Floodlight REST Proxy) plug-in" msgstr "" #: ../networking_config-plugins.rst:13 msgid "Edit the ``/etc/neutron/neutron.conf`` file and add this line:" msgstr "" #: ../networking_config-plugins.rst:19 msgid "" "In the ``/etc/neutron/neutron.conf`` file, set the ``service_plugins`` " "option:" msgstr "" #: ../networking_config-plugins.rst:26 msgid "" "Edit the ``/etc/neutron/plugins/bigswitch/restproxy.ini`` file for the plug-" "in and specify a comma-separated list of controller\\_ip:port pairs:" msgstr "" #: ../networking_config-plugins.rst:33 msgid "" "For database configuration, see `Install Networking Services `__ in the Installation Guide in the `OpenStack Documentation index " "`__. (The link defaults to the Ubuntu version.)" msgstr "" #: ../networking_config-plugins.rst:39 msgid "Restart the ``neutron-server`` to apply the settings:" msgstr "" #: ../networking_config-plugins.rst:46 msgid "Configure Brocade plug-in" msgstr "" #: ../networking_config-plugins.rst:48 msgid "" "Install the Brocade-modified Python netconf client (ncclient) library, which " "is available at https://github.com/brocade/ncclient:" msgstr "" #: ../networking_config-plugins.rst:55 msgid "As root, run this command:" msgstr "" #: ../networking_config-plugins.rst:61 msgid "" "Edit the ``/etc/neutron/neutron.conf`` file and set the following option:" msgstr "" #: ../networking_config-plugins.rst:68 msgid "" "Edit the ``/etc/neutron/plugins/brocade/brocade.ini`` file for the Brocade " "plug-in and specify the admin user name, password, and IP address of the " "Brocade switch:" msgstr "" #: ../networking_config-plugins.rst:80 msgid "" "For database configuration, see `Install Networking Services `__ in any of the Installation Guides in the `OpenStack Documentation " "index `__. 
(The link defaults to the Ubuntu " "version.)" msgstr "" #: ../networking_config-plugins.rst:86 ../networking_config-plugins.rst:243 msgid "Restart the ``neutron-server`` service to apply the settings:" msgstr "" #: ../networking_config-plugins.rst:93 msgid "Configure NSX-mh plug-in" msgstr "" #: ../networking_config-plugins.rst:95 msgid "" "The instructions in this section refer to the VMware NSX-mh platform, " "formerly known as Nicira NVP." msgstr "" #: ../networking_config-plugins.rst:98 msgid "Install the NSX plug-in:" msgstr "" #: ../networking_config-plugins.rst:104 ../networking_config-plugins.rst:221 msgid "Edit the ``/etc/neutron/neutron.conf`` file and set this line:" msgstr "" #: ../networking_config-plugins.rst:110 msgid "Example ``neutron.conf`` file for NSX-mh integration:" msgstr "" #: ../networking_config-plugins.rst:118 msgid "" "To configure the NSX-mh controller cluster for OpenStack Networking, locate " "the ``[default]`` section in the ``/etc/neutron/plugins/vmware/nsx.ini`` " "file and add the following entries:" msgstr "" #: ../networking_config-plugins.rst:123 msgid "" "To establish and configure the connection with the controller cluster you " "must set some parameters, including NSX-mh API endpoints, access " "credentials, and optionally specify settings for HTTP timeouts, redirects " "and retries in case of connection failures:" msgstr "" #: ../networking_config-plugins.rst:137 msgid "" "To ensure correct operations, the ``nsx_user`` user must have administrator " "credentials on the NSX-mh platform." msgstr "" #: ../networking_config-plugins.rst:140 msgid "" "A controller API endpoint consists of the IP address and port for the " "controller; if you omit the port, port 443 is used. If multiple API " "endpoints are specified, it is up to the user to ensure that all these " "endpoints belong to the same controller cluster. 
The OpenStack Networking " "VMware NSX-mh plug-in does not perform this check, and results might be " "unpredictable." msgstr "" #: ../networking_config-plugins.rst:147 msgid "" "When you specify multiple API endpoints, the plug-in takes care of load " "balancing requests on the various API endpoints." msgstr "" #: ../networking_config-plugins.rst:150 msgid "" "The UUID of the NSX-mh transport zone that should be used by default when a " "tenant creates a network. You can get this value from the Transport Zones " "page for the NSX-mh manager:" msgstr "" #: ../networking_config-plugins.rst:154 msgid "" "Alternatively the transport zone identifier can be retrieved by query the " "NSX-mh API: ``/ws.v1/transport-zone``" msgstr "" #: ../networking_config-plugins.rst:167 msgid "" "Ubuntu packaging currently does not update the neutron init script to point " "to the NSX-mh configuration file. Instead, you must manually update ``/etc/" "default/neutron-server`` to add this line:" msgstr "" #: ../networking_config-plugins.rst:176 ../networking_config-plugins.rst:239 msgid "" "For database configuration, see `Install Networking Services `__ in the Installation Guide." msgstr "" #: ../networking_config-plugins.rst:180 msgid "Restart ``neutron-server`` to apply settings:" msgstr "" #: ../networking_config-plugins.rst:188 msgid "" "The neutron NSX-mh plug-in does not implement initial re-synchronization of " "Neutron resources. Therefore resources that might already exist in the " "database when Neutron is switched to the NSX-mh plug-in will not be created " "on the NSX-mh backend upon restart." 
msgstr "" #: ../networking_config-plugins.rst:194 msgid "Example ``nsx.ini`` file:" msgstr "" #: ../networking_config-plugins.rst:207 msgid "" "To debug :file:`nsx.ini` configuration issues, run this command from the " "host that runs neutron-server:" msgstr "" #: ../networking_config-plugins.rst:214 msgid "" "This command tests whether ``neutron-server`` can log into all of the NSX-mh " "controllers and the SQL server, and whether all UUID values are correct." msgstr "" #: ../networking_config-plugins.rst:219 msgid "Configure PLUMgrid plug-in" msgstr "" #: ../networking_config-plugins.rst:227 msgid "" "Edit the [PLUMgridDirector] section in the ``/etc/neutron/plugins/plumgrid/" "plumgrid.ini`` file and specify the IP address, port, admin user name, and " "password of the PLUMgrid Director:" msgstr "" #: ../networking_introduction.rst:3 msgid "Introduction to Networking" msgstr "" #: ../networking_introduction.rst:5 msgid "" "The Networking service, code-named neutron, provides an API that lets you " "define network connectivity and addressing in the cloud. The Networking " "service enables operators to leverage different networking technologies to " "power their cloud networking. The Networking service also provides an API to " "configure and manage a variety of network services ranging from L3 " "forwarding and NAT to load balancing, edge firewalls, and IPsec VPN." msgstr "" #: ../networking_introduction.rst:13 msgid "" "For a detailed description of the Networking API abstractions and their " "attributes, see the `OpenStack Networking API v2.0 Reference `__." msgstr "" #: ../networking_introduction.rst:19 msgid "" "If you use the Networking service, do not run the Compute ``nova-network`` " "service (like you do in traditional Compute deployments). When you configure " "networking, see the Compute-related topics in this Networking section." 
msgstr "" #: ../networking_introduction.rst:25 msgid "Networking API" msgstr "" #: ../networking_introduction.rst:27 msgid "" "Networking is a virtual network service that provides a powerful API to " "define the network connectivity and IP addressing that devices from other " "services, such as Compute, use." msgstr "" #: ../networking_introduction.rst:31 msgid "" "The Compute API has a virtual server abstraction to describe computing " "resources. Similarly, the Networking API has virtual network, subnet, and " "port abstractions to describe networking resources." msgstr "" # #-#-#-#-# networking_introduction.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../networking_introduction.rst:36 ../telemetry-measurements.rst:100 #: ../telemetry-measurements.rst:434 ../telemetry-measurements.rst:489 #: ../telemetry-measurements.rst:533 ../telemetry-measurements.rst:595 #: ../telemetry-measurements.rst:671 ../telemetry-measurements.rst:705 #: ../telemetry-measurements.rst:771 ../telemetry-measurements.rst:821 #: ../telemetry-measurements.rst:859 ../telemetry-measurements.rst:931 #: ../telemetry-measurements.rst:995 ../telemetry-measurements.rst:1082 #: ../telemetry-measurements.rst:1162 ../telemetry-measurements.rst:1257 #: ../telemetry-measurements.rst:1323 ../telemetry-measurements.rst:1374 #: ../telemetry-measurements.rst:1401 ../telemetry-measurements.rst:1425 #: ../telemetry-measurements.rst:1446 msgid "Resource" msgstr "" #: ../networking_introduction.rst:38 msgid "**Network**" msgstr "" #: ../networking_introduction.rst:38 msgid "" "An isolated L2 segment, analogous to VLAN in the physical networking world." msgstr "" #: ../networking_introduction.rst:41 msgid "**Subnet**" msgstr "" #: ../networking_introduction.rst:41 msgid "A block of v4 or v6 IP addresses and associated configuration state." 
msgstr "" #: ../networking_introduction.rst:44 msgid "**Port**" msgstr "" #: ../networking_introduction.rst:44 msgid "" "A connection point for attaching a single device, such as the NIC of a " "virtual server, to a virtual network. Also describes the associated network " "configuration, such as the MAC and IP addresses to be used on that port." msgstr "" #: ../networking_introduction.rst:50 msgid "**Networking resources**" msgstr "" #: ../networking_introduction.rst:52 msgid "" "To configure rich network topologies, you can create and configure networks " "and subnets and instruct other OpenStack services like Compute to attach " "virtual devices to ports on these networks." msgstr "" #: ../networking_introduction.rst:56 msgid "" "In particular, Networking supports each tenant having multiple private " "networks and enables tenants to choose their own IP addressing scheme, even " "if those IP addresses overlap with those that other tenants use." msgstr "" #: ../networking_introduction.rst:60 msgid "The Networking service:" msgstr "" #: ../networking_introduction.rst:62 msgid "" "Enables advanced cloud networking use cases, such as building multi-tiered " "web applications and enabling migration of applications to the cloud without " "changing IP addresses." msgstr "" #: ../networking_introduction.rst:66 msgid "Offers flexibility for administrators to customize network offerings." msgstr "" #: ../networking_introduction.rst:69 msgid "" "Enables developers to extend the Networking API. Over time, the extended " "functionality becomes part of the core Networking API." msgstr "" #: ../networking_introduction.rst:73 msgid "Configure SSL support for networking API" msgstr "" #: ../networking_introduction.rst:75 msgid "" "OpenStack Networking supports SSL for the Networking API server. By default, " "SSL is disabled but you can enable it in the ``neutron.conf`` file." 
msgstr "" #: ../networking_introduction.rst:79 msgid "Set these options to configure SSL:" msgstr "" #: ../networking_introduction.rst:82 msgid "Enables SSL on the networking API server." msgstr "" #: ../networking_introduction.rst:82 msgid "``use_ssl = True``" msgstr "" #: ../networking_introduction.rst:85 msgid "" "Certificate file that is used when you securely start the Networking API " "server." msgstr "" #: ../networking_introduction.rst:86 msgid "``ssl_cert_file = PATH_TO_CERTFILE``" msgstr "" #: ../networking_introduction.rst:89 msgid "" "Private key file that is used when you securely start the Networking API " "server." msgstr "" #: ../networking_introduction.rst:90 msgid "``ssl_key_file = PATH_TO_KEYFILE``" msgstr "" #: ../networking_introduction.rst:93 msgid "" "Optional. CA certificate file that is used when you securely start the " "Networking API server. This file verifies connecting clients. Set this " "option when API clients must authenticate to the API server by using SSL " "certificates that are signed by a trusted CA." msgstr "" #: ../networking_introduction.rst:96 msgid "``ssl_ca_file = PATH_TO_CAFILE``" msgstr "" #: ../networking_introduction.rst:99 msgid "" "The value of TCP\\_KEEPIDLE, in seconds, for each server socket when " "starting the API server. Not supported on OS X." msgstr "" #: ../networking_introduction.rst:100 msgid "``tcp_keepidle = 600``" msgstr "" #: ../networking_introduction.rst:103 msgid "Number of seconds to keep retrying to listen." msgstr "" #: ../networking_introduction.rst:103 msgid "``retry_until_window = 30``" msgstr "" #: ../networking_introduction.rst:106 msgid "Number of backlog requests with which to configure the socket." 
msgstr "" #: ../networking_introduction.rst:106 msgid "``backlog = 4096``" msgstr "" #: ../networking_introduction.rst:109 msgid "Load-Balancer-as-a-Service (LBaaS) overview" msgstr "" #: ../networking_introduction.rst:111 msgid "" "Load-Balancer-as-a-Service (LBaaS) enables Networking to distribute incoming " "requests evenly among designated instances. This distribution ensures that " "the workload is shared predictably among instances and enables more " "effective use of system resources. Use one of these load balancing methods " "to distribute incoming requests:" msgstr "" #: ../networking_introduction.rst:118 msgid "Rotates requests evenly between multiple instances." msgstr "" #: ../networking_introduction.rst:118 msgid "Round robin" msgstr "" #: ../networking_introduction.rst:121 msgid "" "Requests from a unique source IP address are consistently directed to the " "same instance." msgstr "" #: ../networking_introduction.rst:122 msgid "Source IP" msgstr "" #: ../networking_introduction.rst:125 msgid "" "Allocates requests to the instance with the least number of active " "connections." msgstr "" #: ../networking_introduction.rst:126 msgid "Least connections" msgstr "" #: ../networking_introduction.rst:129 msgid "Feature" msgstr "" #: ../networking_introduction.rst:131 msgid "**Monitors**" msgstr "" #: ../networking_introduction.rst:131 msgid "" "LBaaS provides availability monitoring with the ``ping``, TCP, HTTP and " "HTTPS GET methods. Monitors are implemented to determine whether pool " "members are available to handle requests." msgstr "" #: ../networking_introduction.rst:136 msgid "**Management**" msgstr "" #: ../networking_introduction.rst:136 msgid "" "LBaaS is managed using a variety of tool sets. The REST API is available for " "programmatic administration and scripting. Users perform administrative " "management of load balancers through either the CLI (``neutron``) or the " "OpenStack Dashboard." 
msgstr "" #: ../networking_introduction.rst:143 msgid "**Connection limits**" msgstr "" #: ../networking_introduction.rst:143 msgid "" "Ingress traffic can be shaped with *connection limits*. This feature allows " "workload control, and can also assist with mitigating DoS (Denial of " "Service) attacks." msgstr "" #: ../networking_introduction.rst:148 msgid "**Session persistence**" msgstr "" #: ../networking_introduction.rst:148 msgid "" "LBaaS supports session persistence by ensuring incoming requests are routed " "to the same instance within a pool of multiple instances. LBaaS supports " "routing decisions based on cookies and source IP address." msgstr "" #: ../networking_introduction.rst:157 msgid "Firewall-as-a-Service (FWaaS) overview" msgstr "" #: ../networking_introduction.rst:159 msgid "" "The Firewall-as-a-Service (FWaaS) plug-in adds perimeter firewall management " "to Networking. FWaaS uses iptables to apply firewall policy to all " "Networking routers within a project. FWaaS supports one firewall policy and " "logical firewall instance per project." msgstr "" #: ../networking_introduction.rst:164 msgid "" "Whereas security groups operate at the instance-level, FWaaS operates at the " "perimeter to filter traffic at the neutron router." msgstr "" #: ../networking_introduction.rst:169 msgid "" "FWaaS is currently in technical preview; untested operation is not " "recommended." msgstr "" #: ../networking_introduction.rst:172 msgid "" "The example diagram illustrates the flow of ingress and egress traffic for " "the VM2 instance:" msgstr "" #: ../networking_introduction.rst:178 msgid "Enable FWaaS" msgstr "" #: ../networking_introduction.rst:180 msgid "FWaaS management options are also available in the Dashboard." 
msgstr "" #: ../networking_introduction.rst:182 msgid "Enable the FWaaS plug-in in the ``/etc/neutron/neutron.conf`` file:" msgstr "" #: ../networking_introduction.rst:199 msgid "" "On Ubuntu, modify the ``[fwaas]`` section in the ``/etc/neutron/fwaas_driver." "ini`` file instead of ``/etc/neutron/neutron.conf``." msgstr "" #: ../networking_introduction.rst:209 msgid "" "Enable the option in the ``local_settings.py`` file, which is typically " "located on the controller node:" msgstr "" #: ../networking_introduction.rst:222 msgid "" "By default, ``enable_firewall`` option value is ``True`` in ``local_settings." "py`` file." msgstr "" #: ../networking_introduction.rst:225 msgid "Apply the settings by restarting the web server." msgstr "" #: ../networking_introduction.rst:227 msgid "" "Restart the ``neutron-l3-agent`` and ``neutron-server`` services to apply " "the settings." msgstr "" #: ../networking_introduction.rst:231 msgid "Configure Firewall-as-a-Service" msgstr "" #: ../networking_introduction.rst:233 msgid "" "Create the firewall rules and create a policy that contains them. Then, " "create a firewall that applies the policy." msgstr "" #: ../networking_introduction.rst:236 msgid "Create a firewall rule:" msgstr "" #: ../networking_introduction.rst:246 msgid "" "The Networking client requires a protocol value; if the rule is protocol " "agnostic, you can use the ``any`` value." msgstr "" #: ../networking_introduction.rst:251 msgid "" "When the source or destination IP address are not of the same IP version " "(for example, IPv6), the command returns an error." msgstr "" #: ../networking_introduction.rst:254 msgid "Create a firewall policy:" msgstr "" #: ../networking_introduction.rst:261 msgid "" "Separate firewall rule IDs or names with spaces. The order in which you " "specify the rules is important." 
msgstr "" #: ../networking_introduction.rst:264 msgid "" "You can create a firewall policy without any rules and add rules later, as " "follows:" msgstr "" #: ../networking_introduction.rst:267 msgid "To add multiple rules, use the update operation." msgstr "" #: ../networking_introduction.rst:269 msgid "To add a single rule, use the insert-rule operation." msgstr "" #: ../networking_introduction.rst:271 msgid "" "For more details, see `Networking command-line client `_ in the OpenStack Command-Line Interface " "Reference." msgstr "" #: ../networking_introduction.rst:277 msgid "" "FWaaS always adds a default ``deny all`` rule at the lowest precedence of " "each policy. Consequently, a firewall policy with no rules blocks all " "traffic by default." msgstr "" #: ../networking_introduction.rst:281 msgid "Create a firewall:" msgstr "" #: ../networking_introduction.rst:289 msgid "" "The firewall remains in PENDING\\_CREATE state until you create a Networking " "router and attach an interface to it." msgstr "" #: ../networking_introduction.rst:293 msgid "Allowed-address-pairs" msgstr "" #: ../networking_introduction.rst:295 msgid "" "``Allowed-address-pairs`` enables you to specify mac_address and " "ip_address(cidr) pairs that pass through a port regardless of subnet. This " "enables the use of protocols such as VRRP, which floats an IP address " "between two instances to enable fast data plane failover." msgstr "" #: ../networking_introduction.rst:302 msgid "" "Currently, only the ML2, Open vSwitch, and VMware NSX plug-ins support the " "allowed-address-pairs extension." 
msgstr "" #: ../networking_introduction.rst:305 msgid "**Basic allowed-address-pairs operations.**" msgstr "" #: ../networking_introduction.rst:307 msgid "Create a port with a specified allowed address pair:" msgstr "" #: ../networking_introduction.rst:314 msgid "Update a port by adding allowed address pairs:" msgstr "" #: ../networking_introduction.rst:323 msgid "Virtual-Private-Network-as-a-Service (VPNaaS)" msgstr "" #: ../networking_introduction.rst:325 msgid "" "The VPNaaS extension enables OpenStack tenants to extend private networks " "across the internet." msgstr "" #: ../networking_introduction.rst:328 msgid "This extension introduces these resources:" msgstr "" #: ../networking_introduction.rst:330 msgid "" ":term:`service`. A parent object that associates VPN with a specific subnet " "and router." msgstr "" #: ../networking_introduction.rst:333 msgid "" "The Internet Key Exchange (IKE) policy that identifies the authentication " "and encryption algorithm to use during phase one and two negotiation of a " "VPN connection." msgstr "" #: ../networking_introduction.rst:337 msgid "" "The IP security policy that specifies the authentication and encryption " "algorithm and encapsulation mode to use for the established VPN connection." msgstr "" #: ../networking_introduction.rst:341 msgid "" "Details for the site-to-site IPsec connection, including the peer CIDRs, " "MTU, authentication mode, peer address, DPD settings, and status." msgstr "" #: ../networking_introduction.rst:344 msgid "This initial implementation of the VPNaaS extension provides:" msgstr "" #: ../networking_introduction.rst:346 msgid "Site-to-site VPN that connects two private networks." msgstr "" #: ../networking_introduction.rst:348 msgid "Multiple VPN connections per tenant." msgstr "" #: ../networking_introduction.rst:350 msgid "" "IKEv1 policy support with 3des, aes-128, aes-256, or aes-192 encryption." 
msgstr "" #: ../networking_introduction.rst:352 msgid "" "IPSec policy support with 3des, aes-128, aes-192, or aes-256 encryption, " "sha1 authentication, ESP, AH, or AH-ESP transform protocol, and tunnel or " "transport mode encapsulation." msgstr "" #: ../networking_introduction.rst:356 msgid "" "Dead Peer Detection (DPD) with hold, clear, restart, disabled, or restart-by-" "peer actions." msgstr "" #: ../networking_multi-dhcp-agents.rst:3 msgid "Scalable and highly available DHCP agents" msgstr "" #: ../networking_multi-dhcp-agents.rst:5 msgid "" "This section describes how to use the agent management (alias agent) and " "scheduler (alias agent_scheduler) extensions for DHCP agents scalability and " "HA." msgstr "" #: ../networking_multi-dhcp-agents.rst:11 msgid "" "Use the :command:`neutron ext-list` client command to check if these " "extensions are enabled:" msgstr "" #: ../networking_multi-dhcp-agents.rst:33 msgid "There will be three hosts in the setup." msgstr "" #: ../networking_multi-dhcp-agents.rst:41 msgid "OpenStack controller host - controlnod" msgstr "" #: ../networking_multi-dhcp-agents.rst:42 msgid "" "Runs the Networking, Identity, and Compute services that are required to " "deploy VMs. The node must have at least one network interface that is " "connected to the Management Network. Note that ``nova-network`` should not " "be running because it is replaced by Neutron." 
msgstr "" #: ../networking_multi-dhcp-agents.rst:47 msgid "Runs ``nova-compute``, the Neutron L2 agent and DHCP agent" msgstr "" #: ../networking_multi-dhcp-agents.rst:49 msgid "Same as HostA" msgstr "" #: ../networking_multi-dhcp-agents.rst:51 msgid "**Hosts for demo**" msgstr "" #: ../networking_multi-dhcp-agents.rst:56 msgid "**controlnode: neutron server**" msgstr "" #: ../networking_multi-dhcp-agents.rst:58 #: ../networking_multi-dhcp-agents.rst:85 msgid "Neutron configuration file ``/etc/neutron/neutron.conf``:" msgstr "" #: ../networking_multi-dhcp-agents.rst:69 #: ../networking_multi-dhcp-agents.rst:95 msgid "" "Update the plug-in configuration file ``/etc/neutron/plugins/linuxbridge/" "linuxbridge_conf.ini``:" msgstr "" #: ../networking_multi-dhcp-agents.rst:83 msgid "**HostA and HostB: L2 agent**" msgstr "" #: ../networking_multi-dhcp-agents.rst:109 msgid "Update the nova configuration file ``/etc/nova/nova.conf``:" msgstr "" #: ../networking_multi-dhcp-agents.rst:125 msgid "**HostA and HostB: DHCP agent**" msgstr "" #: ../networking_multi-dhcp-agents.rst:127 msgid "Update the DHCP configuration file ``/etc/neutron/dhcp_agent.ini``:" msgstr "" #: ../networking_multi-dhcp-agents.rst:135 msgid "Commands in agent management and scheduler extensions" msgstr "" #: ../networking_multi-dhcp-agents.rst:137 msgid "" "The following commands require the tenant running the command to have an " "admin role." msgstr "" #: ../networking_multi-dhcp-agents.rst:142 msgid "" "Ensure that the following environment variables are set. These are used by " "the various clients to access the Identity service." 
msgstr "" #: ../networking_multi-dhcp-agents.rst:152 msgid "**Settings**" msgstr "" #: ../networking_multi-dhcp-agents.rst:154 msgid "To experiment, you need VMs and a neutron network:" msgstr "" #: ../networking_multi-dhcp-agents.rst:176 msgid "**Manage agents in neutron deployment**" msgstr "" #: ../networking_multi-dhcp-agents.rst:178 msgid "" "Every agent that supports these extensions will register itself with the " "neutron server when it starts up." msgstr "" #: ../networking_multi-dhcp-agents.rst:181 msgid "List all agents:" msgstr "" #: ../networking_multi-dhcp-agents.rst:196 msgid "" "The output shows information for four agents. The ``alive`` field shows " "``:-)`` if the agent reported its state within the period defined by the " "``agent_down_time`` option in the ``neutron.conf`` file. Otherwise the " "``alive`` is ``xxx``." msgstr "" #: ../networking_multi-dhcp-agents.rst:201 msgid "List the DHCP agents that host a specified network:" msgstr "" #: ../networking_multi-dhcp-agents.rst:203 msgid "" "In some deployments, one DHCP agent is not enough to hold all network data. " "In addition, you must have a backup for it even when the deployment is " "small. The same network can be assigned to more than one DHCP agent and one " "DHCP agent can host more than one network." msgstr "" #: ../networking_multi-dhcp-agents.rst:208 msgid "List DHCP agents that host a specified network:" msgstr "" #: ../networking_multi-dhcp-agents.rst:220 msgid "List the networks hosted by a given DHCP agent:" msgstr "" #: ../networking_multi-dhcp-agents.rst:222 msgid "This command is to show which networks a given dhcp agent is managing." msgstr "" #: ../networking_multi-dhcp-agents.rst:236 msgid "Show agent details." 
msgstr "" #: ../networking_multi-dhcp-agents.rst:238 msgid "The :command:`agent-show` command shows details for a specified agent:" msgstr "" #: ../networking_multi-dhcp-agents.rst:267 msgid "" "In this output, ``heartbeat_timestamp`` is the time on the neutron server. " "You do not need to synchronize all agents to this time for this extension to " "run correctly. ``configurations`` describes the static configuration for the " "agent or run time data. This agent is a DHCP agent and it hosts one network, " "one subnet, and three ports." msgstr "" #: ../networking_multi-dhcp-agents.rst:273 msgid "" "Different types of agents show different details. The following output shows " "information for a Linux bridge agent:" msgstr "" #: ../networking_multi-dhcp-agents.rst:301 msgid "" "The output shows ``bridge-mapping`` and the number of virtual network " "devices on this L2 agent." msgstr "" #: ../networking_multi-dhcp-agents.rst:304 msgid "**Manage assignment of networks to DHCP agent**" msgstr "" #: ../networking_multi-dhcp-agents.rst:306 msgid "" "Now that you have run the :command:`net-list-on-dhcp-agent` and :command:" "`dhcp-agent-list-hosting-net` commands, you can add a network to a DHCP " "agent and remove one from it." msgstr "" #: ../networking_multi-dhcp-agents.rst:310 msgid "Default scheduling." msgstr "" #: ../networking_multi-dhcp-agents.rst:312 msgid "" "When you create a network with one port, you can schedule it to an active " "DHCP agent. If many active DHCP agents are running, select one randomly. You " "can design more sophisticated scheduling algorithms in the same way as nova-" "schedule later on." msgstr "" #: ../networking_multi-dhcp-agents.rst:330 msgid "" "It is allocated to DHCP agent on HostA. If you want to validate the behavior " "through the :command:`dnsmasq` command, you must create a subnet for the " "network because the DHCP agent starts the dnsmasq service only if there is a " "DHCP." 
msgstr "" #: ../networking_multi-dhcp-agents.rst:335 msgid "Assign a network to a given DHCP agent." msgstr "" #: ../networking_multi-dhcp-agents.rst:337 msgid "To add another DHCP agent to host the network, run this command:" msgstr "" #: ../networking_multi-dhcp-agents.rst:352 msgid "Both DHCP agents host the ``net2`` network." msgstr "" #: ../networking_multi-dhcp-agents.rst:354 msgid "Remove a network from a specified DHCP agent." msgstr "" #: ../networking_multi-dhcp-agents.rst:356 msgid "" "This command is the sibling command for the previous one. Remove ``net2`` " "from the DHCP agent for HostA:" msgstr "" #: ../networking_multi-dhcp-agents.rst:372 msgid "" "You can see that only the DHCP agent for HostB is hosting the ``net2`` " "network." msgstr "" #: ../networking_multi-dhcp-agents.rst:375 msgid "**HA of DHCP agents**" msgstr "" #: ../networking_multi-dhcp-agents.rst:377 msgid "" "Boot a VM on net2. Let both DHCP agents host ``net2``. Fail the agents in " "turn to see if the VM can still get the desired IP." msgstr "" #: ../networking_multi-dhcp-agents.rst:380 msgid "Boot a VM on net2:" msgstr "" #: ../networking_multi-dhcp-agents.rst:415 msgid "Make sure both DHCP agents hosting ``net2``:" msgstr "" #: ../networking_multi-dhcp-agents.rst:417 msgid "Use the previous commands to assign the network to agents." msgstr "" #: ../networking_multi-dhcp-agents.rst:430 msgid "**Test the HA**" msgstr "" #: ../networking_multi-dhcp-agents.rst:432 msgid "" "Log in to the ``myserver4`` VM, and run ``udhcpc``, ``dhclient`` or other " "DHCP client." msgstr "" #: ../networking_multi-dhcp-agents.rst:435 msgid "" "Stop the DHCP agent on HostA. Besides stopping the ``neutron-dhcp-agent`` " "binary, you must stop the ``dnsmasq`` processes." msgstr "" #: ../networking_multi-dhcp-agents.rst:438 msgid "Run a DHCP client in VM to see if it can get the wanted IP." msgstr "" #: ../networking_multi-dhcp-agents.rst:440 msgid "Stop the DHCP agent on HostB too." 
msgstr "" #: ../networking_multi-dhcp-agents.rst:442 msgid "Run ``udhcpc`` in the VM; it cannot get the wanted IP." msgstr "" #: ../networking_multi-dhcp-agents.rst:444 msgid "Start DHCP agent on HostB. The VM gets the wanted IP again." msgstr "" #: ../networking_multi-dhcp-agents.rst:446 msgid "**Disable and remove an agent**" msgstr "" #: ../networking_multi-dhcp-agents.rst:448 msgid "" "An administrator might want to disable an agent if a system hardware or " "software upgrade is planned. Some agents that support scheduling also " "support disabling and enabling agents, such as L3 and DHCP agents. After the " "agent is disabled, the scheduler does not schedule new resources to the " "agent. After the agent is disabled, you can safely remove the agent. Remove " "the resources on the agent before you delete the agent." msgstr "" #: ../networking_multi-dhcp-agents.rst:455 msgid "To run the following commands, you must stop the DHCP agent on HostA." msgstr "" #: ../networking_multi-dhcp-agents.rst:486 msgid "" "After deletion, if you restart the DHCP agent, it appears on the agent list " "again." msgstr "" #: ../networking_use.rst:3 msgid "Use Networking" msgstr "" #: ../networking_use.rst:5 msgid "" "You can manage OpenStack Networking services by using the service command. " "For example:" msgstr "" #: ../networking_use.rst:15 msgid "Log files are in the ``/var/log/neutron`` directory." msgstr "" #: ../networking_use.rst:17 msgid "Configuration files are in the ``/etc/neutron`` directory." msgstr "" #: ../networking_use.rst:19 msgid "" "Administrators and tenants can use OpenStack Networking to build rich " "network topologies. Administrators can create network connectivity on behalf " "of tenants." 
msgstr "" #: ../networking_use.rst:24 msgid "Core Networking API features" msgstr "" #: ../networking_use.rst:26 msgid "" "After you install and configure Networking, tenants and administrators can " "perform create-read-update-delete (CRUD) API networking operations by using " "the Networking API directly or neutron command-line interface (CLI). The " "neutron CLI is a wrapper around the Networking API. Every Networking API " "call has a corresponding neutron command." msgstr "" #: ../networking_use.rst:32 msgid "" "The CLI includes a number of options. For details, see the `OpenStack End " "User Guide `__." msgstr "" #: ../networking_use.rst:36 msgid "Basic Networking operations" msgstr "" #: ../networking_use.rst:38 msgid "" "To learn about advanced capabilities available through the neutron command-" "line interface (CLI), read the networking section in the `OpenStack End User " "Guide `__." msgstr "" #: ../networking_use.rst:43 msgid "" "This table shows example neutron commands that enable you to complete basic " "network operations:" msgstr "" #: ../networking_use.rst:49 msgid "Creates a network." msgstr "" #: ../networking_use.rst:51 msgid "``$ neutron net-create net1``" msgstr "" #: ../networking_use.rst:53 msgid "Creates a subnet that is associated with net1." msgstr "" #: ../networking_use.rst:56 msgid "``$ neutron subnet-create`` ``net1 10.0.0.0/24``" msgstr "" #: ../networking_use.rst:59 msgid "Lists ports for a specified tenant." msgstr "" #: ../networking_use.rst:62 msgid "``$ neutron port-list``" msgstr "" #: ../networking_use.rst:64 msgid "" "Lists ports for a specified tenant and displays the ``id``, ``fixed_ips``, " "and ``device_owner`` columns." msgstr "" #: ../networking_use.rst:71 msgid "``$ neutron port-list -c id`` ``-c fixed_ips -c device_owner``" msgstr "" #: ../networking_use.rst:74 msgid "Shows information for a specified port." 
msgstr "" #: ../networking_use.rst:76 msgid "``$ neutron port-show PORT_ID``" msgstr "" #: ../networking_use.rst:79 msgid "**Basic Networking operations**" msgstr "" #: ../networking_use.rst:83 msgid "" "The ``device_owner`` field describes who owns the port. A port whose " "``device_owner`` begins with:" msgstr "" #: ../networking_use.rst:86 msgid "``network`` is created by Networking." msgstr "" #: ../networking_use.rst:88 msgid "``compute`` is created by Compute." msgstr "" #: ../networking_use.rst:91 msgid "Administrative operations" msgstr "" #: ../networking_use.rst:93 msgid "" "The administrator can run any :command:`neutron` command on behalf of " "tenants by specifying an Identity ``tenant_id`` in the command, as follows:" msgstr "" #: ../networking_use.rst:109 msgid "" "To view all tenant IDs in Identity, run the following command as an Identity " "service admin user:" msgstr "" #: ../networking_use.rst:117 msgid "Advanced Networking operations" msgstr "" #: ../networking_use.rst:119 msgid "" "This table shows example Networking commands that enable you to complete " "advanced network operations:" msgstr "" #: ../networking_use.rst:125 msgid "Creates a network that all tenants can use." msgstr "" #: ../networking_use.rst:128 msgid "``$ neutron net-create`` ``--shared public-net``" msgstr "" #: ../networking_use.rst:131 msgid "Creates a subnet with a specified gateway IP address." msgstr "" #: ../networking_use.rst:134 msgid "``$ neutron subnet-create`` ``--gateway 10.0.0.254 net1 10.0.0.0/24``" msgstr "" #: ../networking_use.rst:137 msgid "Creates a subnet that has no gateway IP address." msgstr "" #: ../networking_use.rst:140 msgid "``$ neutron subnet-create`` ``--no-gateway net1 10.0.0.0/24``" msgstr "" #: ../networking_use.rst:143 msgid "Creates a subnet with DHCP disabled." 
msgstr "" #: ../networking_use.rst:146 msgid "``$ neutron subnet-create`` ``net1 10.0.0.0/24 --enable-dhcp False``" msgstr "" #: ../networking_use.rst:149 msgid "Specifies a set of host routes" msgstr "" #: ../networking_use.rst:151 msgid "" "``$ neutron subnet-create`` ``test-net1 40.0.0.0/24 --host-routes`` " "``type=dict list=true`` ``destination=40.0.1.0/24,`` ``nexthop=40.0.0.2``" msgstr "" #: ../networking_use.rst:157 msgid "Creates a subnet with a specified set of dns name servers." msgstr "" #: ../networking_use.rst:161 msgid "" "``$ neutron subnet-create test-net1`` ``40.0.0.0/24 --dns-nameservers`` " "``list=true 8.8.4.4 8.8.8.8``" msgstr "" #: ../networking_use.rst:165 msgid "Displays all ports and IPs allocated on a network." msgstr "" #: ../networking_use.rst:168 msgid "``$ neutron port-list --network_id NET_ID``" msgstr "" #: ../networking_use.rst:171 msgid "**Advanced Networking operations**" msgstr "" #: ../networking_use.rst:174 msgid "Use Compute with Networking" msgstr "" #: ../networking_use.rst:177 msgid "Basic Compute and Networking operations" msgstr "" #: ../networking_use.rst:179 msgid "" "This table shows example neutron and nova commands that enable you to " "complete basic VM networking operations:" msgstr "" #: ../networking_use.rst:183 msgid "Action" msgstr "" #: ../networking_use.rst:185 msgid "Checks available networks." msgstr "" #: ../networking_use.rst:187 msgid "``$ neutron net-list``" msgstr "" #: ../networking_use.rst:189 msgid "Boots a VM with a single NIC on a selected Networking network." msgstr "" #: ../networking_use.rst:192 msgid "" "``$ nova boot --image IMAGE --flavor`` ``FLAVOR --nic net-id=NET_ID VM_NAME``" msgstr "" #: ../networking_use.rst:195 msgid "" "Searches for ports with a ``device_id`` that matches the Compute instance " "UUID. 
See :ref: `Create and delete VMs`" msgstr "" #: ../networking_use.rst:200 msgid "``$ neutron port-list --device_id VM_ID``" msgstr "" #: ../networking_use.rst:202 msgid "Searches for ports, but shows only the ``mac_address`` of the port." msgstr "" #: ../networking_use.rst:206 msgid "``$ neutron port-list --field`` ``mac_address --device_id VM_ID``" msgstr "" #: ../networking_use.rst:209 msgid "Temporarily disables a port from sending traffic." msgstr "" #: ../networking_use.rst:212 msgid "``$ neutron port-update PORT_ID`` ``--admin_state_up False``" msgstr "" #: ../networking_use.rst:216 msgid "**Basic Compute and Networking operations**" msgstr "" #: ../networking_use.rst:220 msgid "The ``device_id`` can also be a logical router ID." msgstr "" #: ../networking_use.rst:224 msgid "" "When you boot a Compute VM, a port on the network that corresponds to the VM " "NIC is automatically created and associated with the default security group. " "You can configure `security group rules <#enabling_ping_and_ssh>`__ to " "enable users to access the VM." msgstr "" #: ../networking_use.rst:235 msgid "Advanced VM creation operations" msgstr "" #: ../networking_use.rst:237 msgid "" "This table shows example nova and neutron commands that enable you to " "complete advanced VM creation operations:" msgstr "" #: ../networking_use.rst:243 msgid "Boots a VM with multiple NICs." msgstr "" #: ../networking_use.rst:246 msgid "" "``$ nova boot --image IMAGE --flavor`` ``FLAVOR --nic net-id=NET1-ID --nic`` " "``net-id=NET2-ID VM_NAME``" msgstr "" #: ../networking_use.rst:250 msgid "" "Boots a VM with a specific IP address. Note that you cannot use the ``--num-" "instances`` parameter in this case." 
msgstr "" #: ../networking_use.rst:255 msgid "" "``$ nova boot --image IMAGE --flavor`` ``FLAVOR --nic net-id=NET-ID,`` ``v4-" "fixed-ip=IP-ADDR VM_NAME``" msgstr "" #: ../networking_use.rst:259 msgid "" "Boots a VM that connects to all networks that are accessible to the tenant " "who submits the request (without the ``--nic`` option)." msgstr "" #: ../networking_use.rst:264 msgid "``$ nova boot --image IMAGE --flavor`` ``FLAVOR VM_NAME``" msgstr "" #: ../networking_use.rst:268 msgid "**Advanced VM creation operations**" msgstr "" #: ../networking_use.rst:272 msgid "" "Cloud images that distribution vendors offer usually have only one active " "NIC configured. When you boot with multiple NICs, you must configure " "additional interfaces on the image or the NICs are not reachable." msgstr "" #: ../networking_use.rst:277 msgid "" "The following Debian/Ubuntu-based example shows how to set up the interfaces " "within the instance in the ``/etc/network/interfaces`` file. You must apply " "this configuration to the image." msgstr "" #: ../networking_use.rst:294 msgid "Enable ping and SSH on VMs (security groups)" msgstr "" #: ../networking_use.rst:296 msgid "" "You must configure security group rules depending on the type of plug-in you " "are using. If you are using a plug-in that:" msgstr "" #: ../networking_use.rst:299 msgid "" "Implements Networking security groups, you can configure security group " "rules directly by using the :command:`neutron security-group-rule-create` " "command. This example enables ``ping`` and ``ssh`` access to your VMs." msgstr "" #: ../networking_use.rst:313 msgid "" "Does not implement Networking security groups, you can configure security " "group rules by using the :command:`nova secgroup-add-rule` or :command:`euca-" "authorize` command. These :command:`nova` commands enable ``ping`` and " "``ssh`` access to your VMs." 
msgstr "" #: ../networking_use.rst:325 msgid "" "If your plug-in implements Networking security groups, you can also leverage " "Compute security groups by setting ``security_group_api = neutron`` in the " "``nova.conf`` file. After you set this option, all Compute security group " "commands are proxied to Networking." msgstr "" #: ../objectstorage-admin.rst:3 msgid "System administration for Object Storage" msgstr "" #: ../objectstorage-admin.rst:5 msgid "" "By understanding Object Storage concepts, you can better monitor and " "administer your storage solution. The majority of the administration " "information is maintained in developer documentation at `docs.openstack.org/" "developer/swift/ `__." msgstr "" #: ../objectstorage-admin.rst:10 msgid "" "See the `OpenStack Configuration Reference `__ for a list of configuration options " "for Object Storage." msgstr "" #: ../objectstorage-monitoring.rst:3 msgid "Object Storage monitoring" msgstr "" #: ../objectstorage-monitoring.rst:7 msgid "" "This section was excerpted from a blog post by `Darrell Bishop `_ and has since " "been edited." msgstr "" #: ../objectstorage-monitoring.rst:11 msgid "" "An OpenStack Object Storage cluster is a collection of many daemons that " "work together across many nodes. With so many different components, you must " "be able to tell what is going on inside the cluster. Tracking server-level " "meters like CPU utilization, load, memory consumption, disk usage and " "utilization, and so on is necessary, but not sufficient." msgstr "" #: ../objectstorage-monitoring.rst:18 msgid "Swift Recon" msgstr "" #: ../objectstorage-monitoring.rst:20 msgid "" "The Swift Recon middleware (see `Defining Storage Policies `_) provides " "general machine statistics, such as load average, socket statistics, ``/proc/" "meminfo`` contents, as well as Swift-specific meters:" msgstr "" #: ../objectstorage-monitoring.rst:25 msgid "The ``MD5`` sum of each ring file." 
msgstr "" #: ../objectstorage-monitoring.rst:27 msgid "The most recent object replication time." msgstr "" #: ../objectstorage-monitoring.rst:29 msgid "Count of each type of quarantined file: Account, container, or object." msgstr "" #: ../objectstorage-monitoring.rst:32 msgid "Count of \"async_pendings\" (deferred container updates) on disk." msgstr "" #: ../objectstorage-monitoring.rst:34 msgid "" "Swift Recon is middleware that is installed in the object servers pipeline " "and takes one required option: A local cache directory. To track " "``async_pendings``, you must set up an additional cron job for each object " "server. You access data by either sending HTTP requests directly to the " "object server or using the ``swift-recon`` command-line client." msgstr "" #: ../objectstorage-monitoring.rst:41 msgid "" "There are Object Storage cluster statistics but the typical server meters " "overlap with existing server monitoring systems. To get the Swift-specific " "meters into a monitoring system, they must be polled. Swift Recon acts as a " "middleware meters collector. The process that feeds meters to your " "statistics system, such as ``collectd`` and ``gmond``, should already run on " "the storage node. You can choose to either talk to Swift Recon or collect " "the meters directly." msgstr "" #: ../objectstorage-monitoring.rst:51 msgid "Swift-Informant" msgstr "" #: ../objectstorage-monitoring.rst:53 msgid "" "Swift-Informant middleware (see `swift-informant `_) has real-time visibility into Object Storage " "client requests. It sits in the pipeline for the proxy server, and after " "each request to the proxy server it sends three meters to a ``StatsD`` " "server:" msgstr "" #: ../objectstorage-monitoring.rst:59 msgid "" "A counter increment for a meter like ``obj.GET.200`` or ``cont.PUT.404``." msgstr "" #: ../objectstorage-monitoring.rst:62 msgid "" "Timing data for a meter like ``acct.GET.200`` or ``obj.GET.200``. 
[The " "README says the meters look like ``duration.acct.GET.200``, but I do not see " "the ``duration`` in the code. I am not sure what the Etsy server does but " "our StatsD server turns timing meters into five derivative meters with new " "segments appended, so it probably works as coded. The first meter turns into " "``acct.GET.200.lower``, ``acct.GET.200.upper``, ``acct.GET.200.mean``, " "``acct.GET.200.upper_90``, and ``acct.GET.200.count``]." msgstr "" #: ../objectstorage-monitoring.rst:71 msgid "" "A counter increase by the bytes transferred for a meter like ``tfer.obj." "PUT.201``." msgstr "" #: ../objectstorage-monitoring.rst:74 msgid "" "This is used for receiving information on the quality of service clients " "experience with the timing meters, as well as sensing the volume of the " "various modifications of a request server type, command, and response code. " "Swift-Informant requires no change to core Object Storage code because it is " "implemented as middleware. However, it gives no insight into the workings of " "the cluster past the proxy server. If the responsiveness of one storage node " "degrades, you can only see that some of the requests are bad, either as high " "latency or error status codes." msgstr "" #: ../objectstorage-monitoring.rst:85 msgid "Statsdlog" msgstr "" #: ../objectstorage-monitoring.rst:87 msgid "" "The `Statsdlog `_ project " "increments StatsD counters based on logged events. Like Swift-Informant, it " "is also non-intrusive, however statsdlog can track events from all Object " "Storage daemons, not just proxy-server. The daemon listens to a UDP stream " "of syslog messages, and StatsD counters are incremented when a log line " "matches a regular expression. Meter names are mapped to regex match patterns " "in a JSON file, allowing flexible configuration of what meters are extracted " "from the log stream." 
msgstr "" #: ../objectstorage-monitoring.rst:96 msgid "" "Currently, only the first matching regex triggers a StatsD counter " "increment, and the counter is always incremented by one. There is no way to " "increment a counter by more than one or send timing data to StatsD based on " "the log line content. The tool could be extended to handle more meters for " "each line and data extraction, including timing data. But a coupling would " "still exist between the log textual format and the log parsing regexes, " "which would themselves be more complex to support multiple matches for each " "line and data extraction. Also, log processing introduces a delay between " "the triggering event and sending the data to StatsD. It would be preferable " "to increment error counters where they occur and send timing data as soon as " "it is known to avoid coupling between a log string and a parsing regex and " "prevent a time delay between events and sending data to StatsD." msgstr "" #: ../objectstorage-monitoring.rst:110 msgid "" "The next section describes another method for gathering Object Storage " "operational meters." msgstr "" #: ../objectstorage-monitoring.rst:114 msgid "Swift StatsD logging" msgstr "" #: ../objectstorage-monitoring.rst:116 msgid "" "StatsD (see http://codeascraft.etsy.com/2011/02/15/measure-anything-measure-" "everything/) was designed for application code to be deeply instrumented. " "Meters are sent in real-time by the code that just noticed or did something. " "The overhead of sending a meter is extremely low: a ``sendto`` of one UDP " "packet. If that overhead is still too high, the StatsD client library can " "send only a random portion of samples and StatsD approximates the actual " "number when flushing meters upstream." 
msgstr "" #: ../objectstorage-monitoring.rst:125 msgid "" "To avoid the problems inherent with middleware-based monitoring and after-" "the-fact log processing, the sending of StatsD meters is integrated into " "Object Storage itself. The submitted change set (see ``_) currently reports 124 meters across 15 Object " "Storage daemons and the tempauth middleware. Details of the meters tracked " "are in the `Administrator's Guide `_." msgstr "" #: ../objectstorage-monitoring.rst:133 msgid "" "The sending of meters is integrated with the logging framework. To enable, " "configure ``log_statsd_host`` in the relevant config file. You can also " "specify the port and a default sample rate. The specified default sample " "rate is used unless a specific call to a statsd logging method (see the list " "below) overrides it. Currently, no logging calls override the sample rate, " "but it is conceivable that some meters may require accuracy " "(``sample_rate=1``) while others may not." msgstr "" #: ../objectstorage-monitoring.rst:149 msgid "" "Then the LogAdapter object returned by ``get_logger()``, usually stored in " "``self.logger``, has these new methods:" msgstr "" #: ../objectstorage-monitoring.rst:152 msgid "" "``set_statsd_prefix(self, prefix)`` Sets the client library stat prefix " "value which gets prefixed to every meter. The default prefix is the ``name`` " "of the logger such as ``object-server``, ``container-auditor``, and so on. " "This is currently used to turn ``proxy-server`` into one of ``proxy-server." "Account``, ``proxy-server.Container``, or ``proxy-server.Object`` as soon as " "the Controller object is determined and instantiated for the request." msgstr "" #: ../objectstorage-monitoring.rst:160 msgid "" "``update_stats(self, metric, amount, sample_rate=1)`` Increments the " "supplied meter by the given amount. 
This is used when you need to add or " "subtract more that one from a counter, like incrementing ``suffix.hashes`` " "by the number of computed hashes in the object replicator." msgstr "" #: ../objectstorage-monitoring.rst:166 msgid "" "``increment(self, metric, sample_rate=1)`` Increments the given counter " "meter by one." msgstr "" #: ../objectstorage-monitoring.rst:169 msgid "" "``decrement(self, metric, sample_rate=1)`` Lowers the given counter meter by " "one." msgstr "" #: ../objectstorage-monitoring.rst:172 msgid "" "``timing(self, metric, timing_ms, sample_rate=1)`` Record that the given " "meter took the supplied number of milliseconds." msgstr "" #: ../objectstorage-monitoring.rst:175 msgid "" "``timing_since(self, metric, orig_time, sample_rate=1)`` Convenience method " "to record a timing meter whose value is \"now\" minus an existing timestamp." msgstr "" #: ../objectstorage-monitoring.rst:181 msgid "" "These logging methods may safely be called anywhere you have a logger " "object. If StatsD logging has not been configured, the methods are no-ops. " "This avoids messy conditional logic each place a meter is recorded. These " "example usages show the new logging methods:" msgstr "" #: ../objectstorage-troubleshoot.rst:0 msgid "" "**Description of configuration options for [drive-audit] in drive-audit." "conf**" msgstr "" #: ../objectstorage-troubleshoot.rst:3 msgid "Troubleshoot Object Storage" msgstr "" #: ../objectstorage-troubleshoot.rst:5 msgid "" "For Object Storage, everything is logged in ``/var/log/syslog`` (or " "``messages`` on some distros). Several settings enable further customization " "of logging, such as ``log_name``, ``log_facility``, and ``log_level``, " "within the object server configuration files." msgstr "" #: ../objectstorage-troubleshoot.rst:11 msgid "Drive failure" msgstr "" #: ../objectstorage-troubleshoot.rst:16 msgid "Drive failure can prevent Object Storage performing replication." 
msgstr "" #: ../objectstorage-troubleshoot.rst:21 msgid "" "In the event that a drive has failed, the first step is to make sure the " "drive is unmounted. This will make it easier for Object Storage to work " "around the failure until it has been resolved. If the drive is going to be " "replaced immediately, then it is just best to replace the drive, format it, " "remount it, and let replication fill it up." msgstr "" #: ../objectstorage-troubleshoot.rst:27 msgid "" "If you cannot replace the drive immediately, then it is best to leave it " "unmounted, and remove the drive from the ring. This will allow all the " "replicas that were on that drive to be replicated elsewhere until the drive " "is replaced. Once the drive is replaced, it can be re-added to the ring." msgstr "" #: ../objectstorage-troubleshoot.rst:33 msgid "" "You can look at error messages in the ``/var/log/kern.log`` file for hints " "of drive failure." msgstr "" #: ../objectstorage-troubleshoot.rst:37 msgid "Server failure" msgstr "" #: ../objectstorage-troubleshoot.rst:42 msgid "" "The server is potentially offline, and may have failed, or require a reboot." msgstr "" #: ../objectstorage-troubleshoot.rst:48 msgid "" "If a server is having hardware issues, it is a good idea to make sure the " "Object Storage services are not running. This will allow Object Storage to " "work around the failure while you troubleshoot." msgstr "" #: ../objectstorage-troubleshoot.rst:52 msgid "" "If the server just needs a reboot, or a small amount of work that should " "only last a couple of hours, then it is probably best to let Object Storage " "work around the failure and get the machine fixed and back online. When the " "machine comes back online, replication will make sure that anything that is " "missing during the downtime will get updated." 
msgstr "" #: ../objectstorage-troubleshoot.rst:58 msgid "" "If the server has more serious issues, then it is probably best to remove " "all of the server's devices from the ring. Once the server has been repaired " "and is back online, the server's devices can be added back into the ring. It " "is important that the devices are reformatted before putting them back into " "the ring as it is likely to be responsible for a different set of partitions " "than before." msgstr "" #: ../objectstorage-troubleshoot.rst:66 msgid "Detect failed drives" msgstr "" #: ../objectstorage-troubleshoot.rst:71 msgid "" "When drives fail, it can be difficult to detect that a drive has failed, and " "the details of the failure." msgstr "" #: ../objectstorage-troubleshoot.rst:77 msgid "" "It has been our experience that when a drive is about to fail, error " "messages appear in the ``/var/log/kern.log`` file. There is a script called " "``swift-drive-audit`` that can be run via cron to watch for bad drives. If " "errors are detected, it will unmount the bad drive, so that Object Storage " "can work around it. 
The script takes a configuration file with the following " "settings:" msgstr "" #: ../objectstorage-troubleshoot.rst:89 msgid "``device_dir = /srv/node``" msgstr "" #: ../objectstorage-troubleshoot.rst:90 msgid "Directory devices are mounted under" msgstr "" #: ../objectstorage-troubleshoot.rst:91 msgid "``error_limit = 1``" msgstr "" #: ../objectstorage-troubleshoot.rst:92 msgid "Number of errors to find before a device is unmounted" msgstr "" #: ../objectstorage-troubleshoot.rst:93 msgid "``log_address = /dev/log``" msgstr "" #: ../objectstorage-troubleshoot.rst:94 msgid "Location where syslog sends the logs to" msgstr "" #: ../objectstorage-troubleshoot.rst:95 msgid "``log_facility = LOG_LOCAL0``" msgstr "" #: ../objectstorage-troubleshoot.rst:96 msgid "Syslog log facility" msgstr "" #: ../objectstorage-troubleshoot.rst:97 msgid "``log_file_pattern = /var/log/kern.*[!.][!g][!z]``" msgstr "" #: ../objectstorage-troubleshoot.rst:98 msgid "" "Location of the log file with globbing pattern to check against device " "errors locate device blocks with errors in the log file" msgstr "" #: ../objectstorage-troubleshoot.rst:100 msgid "``log_level = INFO``" msgstr "" #: ../objectstorage-troubleshoot.rst:101 msgid "Logging level" msgstr "" #: ../objectstorage-troubleshoot.rst:102 msgid "``log_max_line_length = 0``" msgstr "" #: ../objectstorage-troubleshoot.rst:103 msgid "" "Caps the length of log lines to the value given; no limit if set to 0, the " "default." msgstr "" #: ../objectstorage-troubleshoot.rst:105 msgid "``log_to_console = False``" msgstr "" #: ../objectstorage-troubleshoot.rst:106 ../objectstorage-troubleshoot.rst:112 #: ../objectstorage-troubleshoot.rst:114 msgid "No help text available for this option." 
msgstr "" #: ../objectstorage-troubleshoot.rst:107 msgid "``minutes = 60``" msgstr "" #: ../objectstorage-troubleshoot.rst:108 msgid "Number of minutes to look back in ``/var/log/kern.log``" msgstr "" #: ../objectstorage-troubleshoot.rst:109 msgid "``recon_cache_path = /var/cache/swift``" msgstr "" #: ../objectstorage-troubleshoot.rst:110 msgid "Directory where stats for a few items will be stored" msgstr "" #: ../objectstorage-troubleshoot.rst:111 msgid "``regex_pattern_1 = \\berror\\b.*\\b(dm-[0-9]{1,2}\\d?)\\b``" msgstr "" #: ../objectstorage-troubleshoot.rst:113 msgid "``unmount_failed_device = True``" msgstr "" #: ../objectstorage-troubleshoot.rst:118 msgid "" "This script has only been tested on Ubuntu 10.04; use with caution on other " "operating systems in production." msgstr "" #: ../objectstorage-troubleshoot.rst:122 msgid "Emergency recovery of ring builder files" msgstr "" #: ../objectstorage-troubleshoot.rst:127 msgid "" "An emergency might prevent a successful backup from restoring the cluster to " "operational status." msgstr "" #: ../objectstorage-troubleshoot.rst:133 msgid "" "You should always keep a backup of swift ring builder files. However, if an " "emergency occurs, this procedure may assist in returning your cluster to an " "operational state." msgstr "" #: ../objectstorage-troubleshoot.rst:137 msgid "" "Using existing swift tools, there is no way to recover a builder file from a " "``ring.gz`` file. However, if you have knowledge of Python, it is possible " "to construct a builder file that is pretty close to the one you have lost." msgstr "" #: ../objectstorage-troubleshoot.rst:144 msgid "" "This procedure is a last resort for emergency circumstances. It requires " "knowledge of the swift python code and may not succeed." 
msgstr "" #: ../objectstorage-troubleshoot.rst:147 msgid "Load the ring and a new ringbuilder object in a Python REPL:" msgstr "" #: ../objectstorage-troubleshoot.rst:154 msgid "Start copying the data we have in the ring into the builder:" msgstr "" #: ../objectstorage-troubleshoot.rst:177 msgid "" "For ``min_part_hours``, you either have to remember the value you used or " "make up a new one:" msgstr "" #: ../objectstorage-troubleshoot.rst:184 msgid "" "Validate the builder. If this raises an exception, check your previous code:" msgstr "" #: ../objectstorage-troubleshoot.rst:191 msgid "" "After it validates, save the builder and create a new ``account.builder``:" msgstr "" #: ../objectstorage-troubleshoot.rst:199 msgid "" "You should now have a file called ``account.builder`` in the current working " "directory. Run :command:`swift-ring-builder account.builder write_ring` and " "compare the new ``account.ring.gz`` to the ``account.ring.gz`` that you " "started from. They probably are not byte-for-byte identical, but if you load " "them in a REPL and their ``_replica2part2dev_id`` and ``devs`` attributes " "are the same (or nearly so), then you are in good shape." msgstr "" #: ../objectstorage-troubleshoot.rst:207 msgid "" "Repeat the procedure for ``container.ring.gz`` and ``object.ring.gz``, and " "you might get usable builder files." msgstr "" #: ../objectstorage.rst:3 msgid "Object Storage" msgstr "" #: ../objectstorage_EC.rst:3 msgid "Erasure coding" msgstr "" #: ../objectstorage_EC.rst:5 msgid "" "Erasure coding is a set of algorithms that allows the reconstruction of " "missing data from a set of original data. In theory, erasure coding uses " "less capacity with durability characteristics similar to those of replicas. " "From an application perspective, erasure coding support is transparent. " "Object Storage (swift) implements erasure coding as a Storage Policy. See " "`Storage Policies `_ for more details." 
msgstr "" #: ../objectstorage_EC.rst:14 msgid "" "There is no external API related to erasure coding. Create a container using " "a Storage Policy; the interaction with the cluster is the same as any other " "durability policy. Because support is implemented as a Storage Policy, you " "can isolate all storage devices that are associated with your cluster's " "erasure coding capability. It is entirely possible to share devices between " "storage policies, but for erasure coding it may make more sense to use not " "only separate devices but possibly even entire nodes dedicated to erasure " "coding." msgstr "" #: ../objectstorage_EC.rst:25 msgid "" "The erasure code support in Object Storage is considered beta in Kilo. Most " "major functionality is included, but it has not been tested or validated at " "large scale. This feature relies on ``ssync`` for durability. We recommend " "deployers do extensive testing and not deploy production data using an " "erasure code storage policy. If any bugs are found during testing, please " "report them to https://bugs.launchpad.net/swift" msgstr "" #: ../objectstorage_account_reaper.rst:3 msgid "Account reaper" msgstr "" #: ../objectstorage_account_reaper.rst:5 msgid "" "The purpose of the account reaper is to remove data from the deleted " "accounts." msgstr "" #: ../objectstorage_account_reaper.rst:7 msgid "" "A reseller marks an account for deletion by issuing a ``DELETE`` request on " "the account's storage URL. This action sets the ``status`` column of the " "account_stat table in the account database and replicas to ``DELETED``, " "marking the account's data for deletion." msgstr "" #: ../objectstorage_account_reaper.rst:12 msgid "" "Typically, a specific retention time or undelete feature is not provided. " "However, you can set a ``delay_reaping`` value in the ``[account-reaper]`` " "section of the ``account-server.conf`` file to delay the actual deletion of " "data. 
At " "this time, to undelete an account, you have to update the account database " "replicas directly: set the status column to an empty string and update the " "put_timestamp to be greater than the delete_timestamp." msgstr "" #: ../objectstorage_account_reaper.rst:22 msgid "" "It is on the development to-do list to write a utility that performs this " "task, preferably through a REST call." msgstr "" #: ../objectstorage_account_reaper.rst:25 msgid "" "The account reaper runs on each account server and scans the server " "occasionally for account databases marked for deletion. It only fires up on " "the accounts for which the server is the primary node, so that multiple " "account servers aren't trying to do it simultaneously. Using multiple " "servers to delete one account might improve the deletion speed but requires " "coordination to avoid duplication. Speed really is not a big concern with " "data deletion, and large accounts aren't deleted often." msgstr "" #: ../objectstorage_account_reaper.rst:33 msgid "" "Deleting an account is simple. For each account container, all objects are " "deleted and then the container is deleted. Deletion requests that fail will " "not stop the overall process but will cause the overall process to fail " "eventually (for example, if an object delete times out, you will not be able " "to delete the container or the account). The account reaper keeps trying to " "delete an account until it is empty, at which point the database reclaim " "process within the db\\_replicator will remove the database files." msgstr "" #: ../objectstorage_account_reaper.rst:42 msgid "" "A persistent error state may prevent the deletion of an object or container. " "If this happens, you will see a message in the log, for example:" msgstr "" #: ../objectstorage_account_reaper.rst:49 msgid "" "You can control when this is logged with the ``reap_warn_after`` value in " "the ``[account-reaper]`` section of the ``account-server.conf`` file. 
The " "default value is 30 days." msgstr "" #: ../objectstorage_arch.rst:3 msgid "Cluster architecture" msgstr "" #: ../objectstorage_arch.rst:6 msgid "Access tier" msgstr "" #: ../objectstorage_arch.rst:7 msgid "" "Large-scale deployments segment off an access tier, which is considered the " "Object Storage system's central hub. The access tier fields the incoming API " "requests from clients and moves data in and out of the system. This tier " "consists of front-end load balancers, SSL terminators, and authentication " "services. It runs the (distributed) brain of the Object Storage system: the " "proxy server processes." msgstr "" #: ../objectstorage_arch.rst:16 msgid "" "If you want to use OpenStack Identity API v3 for authentication, you have " "the following options available in ``/etc/swift/dispersion.conf``: " "``auth_version``, ``user_domain_name``, ``project_domain_name``, and " "``project_name``." msgstr "" #: ../objectstorage_arch.rst:21 msgid "**Object Storage architecture**" msgstr "" #: ../objectstorage_arch.rst:27 msgid "" "Because access servers are collocated in their own tier, you can scale out " "read/write access regardless of the storage capacity. For example, if a " "cluster is on the public Internet, requires SSL termination, and has a high " "demand for data access, you can provision many access servers. However, if " "the cluster is on a private network and used primarily for archival " "purposes, you need fewer access servers." msgstr "" #: ../objectstorage_arch.rst:34 msgid "" "Since this is an HTTP addressable storage service, you may incorporate a " "load balancer into the access tier." msgstr "" #: ../objectstorage_arch.rst:37 msgid "" "Typically, the tier consists of a collection of 1U servers. These machines " "use a moderate amount of RAM and are network I/O intensive. 
Since these " "systems field each incoming API request, you should provision them with two " "high-throughput (10GbE) interfaces - one for the incoming ``front-end`` " "requests and the other for the ``back-end`` access to the object storage " "nodes to put and fetch data." msgstr "" #: ../objectstorage_arch.rst:45 ../objectstorage_arch.rst:78 msgid "Factors to consider" msgstr "" #: ../objectstorage_arch.rst:47 msgid "" "For most publicly facing deployments as well as private deployments " "available across a wide-reaching corporate network, you use SSL to encrypt " "traffic to the client. SSL adds significant processing load to establish " "sessions between clients, which is why you have to provision more capacity " "in the access layer. SSL may not be required for private deployments on " "trusted networks." msgstr "" #: ../objectstorage_arch.rst:55 msgid "Storage nodes" msgstr "" #: ../objectstorage_arch.rst:57 msgid "" "In most configurations, each of the five zones should have an equal amount " "of storage capacity. Storage nodes use a reasonable amount of memory and " "CPU. Metadata needs to be readily available to return objects quickly. The " "object stores run services not only to field incoming requests from the " "access tier, but also to run replicators, auditors, and reapers. You can " "provision object stores with a single gigabit or 10 gigabit network " "interface depending on the expected workload and desired performance." msgstr "" #: ../objectstorage_arch.rst:66 msgid "**Object Storage (swift)**" msgstr "" #: ../objectstorage_arch.rst:73 msgid "" "Currently, a 2 TB or 3 TB SATA disk delivers good performance for the price. " "You can use desktop-grade drives if you have responsive remote hands in the " "datacenter and enterprise-grade drives if you don't." msgstr "" #: ../objectstorage_arch.rst:80 msgid "" "You should keep in mind the desired I/O performance for single-threaded " "requests. 
This system does not use RAID, so a single disk handles each " "request for an object. Disk performance impacts single-threaded response " "rates." msgstr "" #: ../objectstorage_arch.rst:85 msgid "" "To achieve higher apparent throughput, the object storage system is designed " "to handle concurrent uploads/downloads. The network I/O capacity (1GbE, " "bonded 1GbE pair, or 10GbE) should match your desired concurrent throughput " "needs for reads and writes." msgstr "" #: ../objectstorage_auditors.rst:3 msgid "Object Auditor" msgstr "" #: ../objectstorage_auditors.rst:5 msgid "" "On system failures, the XFS file system can sometimes truncate files it is " "trying to write and produce zero-byte files. The object-auditor will catch " "these problems but in the case of a system crash it is advisable to run an " "extra, less rate-limited sweep to check for these specific files. You can " "run this command as follows:" msgstr "" #: ../objectstorage_auditors.rst:17 msgid "" "\"-z\" means to only check for zero-byte files at 1000 files per second." msgstr "" #: ../objectstorage_auditors.rst:19 msgid "" "It is useful to run the object auditor on a specific device or set of " "devices. You can run the object-auditor once as follows:" msgstr "" #: ../objectstorage_auditors.rst:29 msgid "" "This will run the object auditor on only the ``sda`` and ``sdb`` devices. " "This parameter accepts a comma-separated list of values." msgstr "" #: ../objectstorage_characteristics.rst:3 msgid "Object Storage characteristics" msgstr "" #: ../objectstorage_characteristics.rst:5 msgid "The key characteristics of Object Storage are that:" msgstr "" #: ../objectstorage_characteristics.rst:7 msgid "All objects stored in Object Storage have a URL." msgstr "" #: ../objectstorage_characteristics.rst:9 msgid "" "All objects stored are replicated 3✕ in as-unique-as-possible zones, which " "can be defined as a group of drives, a node, a rack, and so on." 
msgstr "" #: ../objectstorage_characteristics.rst:12 msgid "All objects have their own metadata." msgstr "" #: ../objectstorage_characteristics.rst:14 msgid "" "Developers interact with the object storage system through a RESTful HTTP " "API." msgstr "" #: ../objectstorage_characteristics.rst:17 msgid "Object data can be located anywhere in the cluster." msgstr "" #: ../objectstorage_characteristics.rst:19 msgid "" "The cluster scales by adding additional nodes without sacrificing " "performance, which allows a more cost-effective linear storage expansion " "than fork-lift upgrades." msgstr "" #: ../objectstorage_characteristics.rst:23 msgid "Data does not have to be migrated to an entirely new storage system." msgstr "" #: ../objectstorage_characteristics.rst:25 msgid "New nodes can be added to the cluster without downtime." msgstr "" #: ../objectstorage_characteristics.rst:27 msgid "Failed nodes and disks can be swapped out without downtime." msgstr "" #: ../objectstorage_characteristics.rst:29 msgid "" "It runs on industry-standard hardware, such as Dell, HP, and Supermicro." msgstr "" #: ../objectstorage_characteristics.rst:34 msgid "Object Storage (swift)" msgstr "" #: ../objectstorage_characteristics.rst:38 msgid "" "Developers can either write directly to the Swift API or use one of the many " "client libraries that exist for all of the popular programming languages, " "such as Java, Python, Ruby, and C#. Amazon S3 and RackSpace Cloud Files " "users should be very familiar with Object Storage. Users new to object " "storage systems will have to adjust to a different approach and mindset than " "those required for a traditional filesystem." 
msgstr "" #: ../objectstorage_components.rst:3 msgid "Components" msgstr "" #: ../objectstorage_components.rst:5 msgid "" "Object Storage uses the following components to deliver high availability, " "high durability, and high concurrency:" msgstr "" #: ../objectstorage_components.rst:8 msgid "**Proxy servers** - Handle all of the incoming API requests." msgstr "" #: ../objectstorage_components.rst:10 msgid "**Rings** - Map logical names of data to locations on particular disks." msgstr "" #: ../objectstorage_components.rst:13 msgid "" "**Zones** - Isolate data from other zones. A failure in one zone does not " "impact the rest of the cluster as data replicates across zones." msgstr "" #: ../objectstorage_components.rst:17 msgid "" "**Accounts and containers** - Each account and container is an individual " "database that is distributed across the cluster. An account database " "contains the list of containers in that account. A container database " "contains the list of objects in that container." msgstr "" #: ../objectstorage_components.rst:22 msgid "**Objects** - The data itself." msgstr "" #: ../objectstorage_components.rst:24 msgid "" "**Partitions** - A partition stores objects, account databases, and " "container databases and helps manage locations where data lives in the " "cluster." msgstr "" #: ../objectstorage_components.rst:31 msgid "**Object Storage building blocks**" msgstr "" #: ../objectstorage_components.rst:37 msgid "Proxy servers" msgstr "" #: ../objectstorage_components.rst:39 msgid "" "Proxy servers are the public face of Object Storage and handle all of the " "incoming API requests. Once a proxy server receives a request, it determines " "the storage node based on the object's URL, for example: https://swift." "example.com/v1/account/container/object. Proxy servers also coordinate " "responses, handle failures, and coordinate timestamps." 
msgstr "" #: ../objectstorage_components.rst:45 msgid "" "Proxy servers use a shared-nothing architecture and can be scaled as needed " "based on projected workloads. A minimum of two proxy servers should be " "deployed for redundancy. If one proxy server fails, the others take over." msgstr "" #: ../objectstorage_components.rst:50 msgid "" "For more information concerning proxy server configuration, see " "`Configuration Reference `_." msgstr "" #: ../objectstorage_components.rst:55 msgid "Rings" msgstr "" #: ../objectstorage_components.rst:57 msgid "" "A ring represents a mapping between the names of entities stored on disks " "and their physical locations. There are separate rings for accounts, " "containers, and objects. When other components need to perform any operation " "on an object, container, or account, they need to interact with the " "appropriate ring to determine their location in the cluster." msgstr "" #: ../objectstorage_components.rst:63 msgid "" "The ring maintains this mapping using zones, devices, partitions, and " "replicas. Each partition in the ring is replicated, by default, three times " "across the cluster, and partition locations are stored in the mapping " "maintained by the ring. The ring is also responsible for determining which " "devices are used for handoff in failure scenarios." msgstr "" #: ../objectstorage_components.rst:69 msgid "" "Data can be isolated into zones in the ring. Each partition replica is " "guaranteed to reside in a different zone. A zone could represent a drive, a " "server, a cabinet, a switch, or even a data center." msgstr "" #: ../objectstorage_components.rst:73 msgid "" "The partitions of the ring are equally divided among all of the devices in " "the Object Storage installation. 
When partitions need to be moved around " "(for example, if a device is added to the cluster), the ring ensures that a " "minimum number of partitions are moved at a time, and only one replica of a " "partition is moved at a time." msgstr "" #: ../objectstorage_components.rst:79 msgid "" "You can use weights to balance the distribution of partitions on drives " "across the cluster. This can be useful, for example, when differently sized " "drives are used in a cluster." msgstr "" #: ../objectstorage_components.rst:83 msgid "" "The ring is used by the proxy server and several background processes (like " "replication)." msgstr "" #: ../objectstorage_components.rst:89 msgid "**The ring**" msgstr "" #: ../objectstorage_components.rst:93 msgid "" "These rings are externally managed. The server processes themselves do not " "modify the rings, they are instead given new rings modified by other tools." msgstr "" #: ../objectstorage_components.rst:97 msgid "" "The ring uses a configurable number of bits from an ``MD5`` hash for a path " "as a partition index that designates a device. The number of bits kept from " "the hash is known as the partition power, and 2 to the partition power " "indicates the partition count. Partitioning the full ``MD5`` hash ring " "allows other parts of the cluster to work in batches of items at once which " "ends up either more efficient or at least less complex than working with " "each item separately or the entire cluster all at once." msgstr "" #: ../objectstorage_components.rst:105 msgid "" "Another configurable value is the replica count, which indicates how many of " "the partition-device assignments make up a single ring. For a given " "partition number, each replica's device will not be in the same zone as any " "other replica's device. 
Zones can be used to group devices based on physical " "locations, power separations, network separations, or any other attribute " "that would improve the availability of multiple replicas at the same time." msgstr "" #: ../objectstorage_components.rst:114 msgid "Zones" msgstr "" #: ../objectstorage_components.rst:116 msgid "" "Object Storage allows configuring zones in order to isolate failure " "boundaries. If possible, each data replica resides in a separate zone. At " "the smallest level, a zone could be a single drive or a grouping of a few " "drives. If there were five object storage servers, then each server would " "represent its own zone. Larger deployments would have an entire rack (or " "multiple racks) of object servers, each representing a zone. The goal of " "zones is to allow the cluster to tolerate significant outages of storage " "servers without losing all replicas of the data." msgstr "" #: ../objectstorage_components.rst:128 msgid "**Zones**" msgstr "" #: ../objectstorage_components.rst:134 msgid "Accounts and containers" msgstr "" #: ../objectstorage_components.rst:136 msgid "" "Each account and container is an individual SQLite database that is " "distributed across the cluster. An account database contains the list of " "containers in that account. A container database contains the list of " "objects in that container." msgstr "" #: ../objectstorage_components.rst:144 msgid "**Accounts and containers**" msgstr "" #: ../objectstorage_components.rst:149 msgid "" "To keep track of object data locations, each account in the system has a " "database that references all of its containers, and each container database " "references each object." msgstr "" #: ../objectstorage_components.rst:154 msgid "Partitions" msgstr "" #: ../objectstorage_components.rst:156 msgid "" "A partition is a collection of stored data. This includes account databases, " "container databases, and objects. Partitions are core to the replication " "system." 
msgstr "" #: ../objectstorage_components.rst:160 msgid "" "Think of a partition as a bin moving throughout a fulfillment center " "warehouse. Individual orders get thrown into the bin. The system treats that " "bin as a cohesive entity as it moves throughout the system. A bin is easier " "to deal with than many little things. It makes for fewer moving parts " "throughout the system." msgstr "" #: ../objectstorage_components.rst:166 msgid "" "System replicators and object uploads/downloads operate on partitions. As " "the system scales up, its behavior continues to be predictable because the " "number of partitions is a fixed number." msgstr "" #: ../objectstorage_components.rst:170 msgid "" "Implementing a partition is conceptually simple: a partition is just a " "directory sitting on a disk with a corresponding hash table of what it " "contains." msgstr "" #: ../objectstorage_components.rst:177 msgid "**Partitions**" msgstr "" #: ../objectstorage_components.rst:183 msgid "Replicators" msgstr "" #: ../objectstorage_components.rst:185 msgid "" "In order to ensure that there are three copies of the data everywhere, " "replicators continuously examine each partition. For each local partition, " "the replicator compares it against the replicated copies in the other zones " "to see if there are any differences." msgstr "" #: ../objectstorage_components.rst:190 msgid "" "The replicator knows if replication needs to take place by examining hashes. " "A hash file is created for each partition, which contains hashes of each " "directory in the partition. For a given partition, the hash files for each " "of the partition's copies are compared. If the hashes are different, then it " "is time to replicate, and the directory that needs to be replicated is " "copied over." msgstr "" #: ../objectstorage_components.rst:198 msgid "" "This is where partitions come in handy. 
With fewer things in the system, " "larger chunks of data are transferred around (rather than lots of little TCP " "connections, which is inefficient) and there is a consistent number of " "hashes to compare." msgstr "" #: ../objectstorage_components.rst:203 msgid "" "The cluster eventually has a consistent behavior where the newest data takes " "priority." msgstr "" #: ../objectstorage_components.rst:209 msgid "**Replication**" msgstr "" #: ../objectstorage_components.rst:214 msgid "" "If a zone goes down, one of the nodes containing a replica notices and " "proactively copies data to a handoff location." msgstr "" #: ../objectstorage_components.rst:218 msgid "Use cases" msgstr "" #: ../objectstorage_components.rst:220 msgid "" "The following sections show use cases for object uploads and downloads and " "introduce the components." msgstr "" #: ../objectstorage_components.rst:225 msgid "Upload" msgstr "" #: ../objectstorage_components.rst:227 msgid "" "A client uses the REST API to make an HTTP request to PUT an object into an " "existing container. The cluster receives the request. First, the system must " "figure out where the data is going to go. To do this, the account name, " "container name, and object name are all used to determine the partition " "where this object should live." msgstr "" #: ../objectstorage_components.rst:233 msgid "" "Then a lookup in the ring figures out which storage nodes contain the " "partitions in question." msgstr "" #: ../objectstorage_components.rst:236 msgid "" "The data is then sent to each storage node where it is placed in the " "appropriate partition. At least two of the three writes must be successful " "before the client is notified that the upload was successful." msgstr "" #: ../objectstorage_components.rst:240 msgid "" "Next, the container database is updated asynchronously to reflect that there " "is a new object in it." 
msgstr "" #: ../objectstorage_components.rst:246 msgid "**Object Storage in use**" msgstr "" #: ../objectstorage_components.rst:252 msgid "Download" msgstr "" #: ../objectstorage_components.rst:254 msgid "" "A request comes in for an account/container/object. Using the same " "consistent hashing, the partition name is generated. A lookup in the ring " "reveals which storage nodes contain that partition. A request is made to one " "of the storage nodes to fetch the object and, if that fails, requests are " "made to the other nodes." msgstr "" #: ../objectstorage_features.rst:3 msgid "Features and benefits" msgstr "" #: ../objectstorage_features.rst:9 msgid "Features" msgstr "" #: ../objectstorage_features.rst:10 msgid "Benefits" msgstr "" #: ../objectstorage_features.rst:11 msgid "Leverages commodity hardware" msgstr "" #: ../objectstorage_features.rst:12 msgid "No lock-in, lower price/GB." msgstr "" #: ../objectstorage_features.rst:13 msgid "HDD/node failure agnostic" msgstr "" #: ../objectstorage_features.rst:14 msgid "Self-healing, reliable, data redundancy protects from failures." msgstr "" #: ../objectstorage_features.rst:15 msgid "Unlimited storage" msgstr "" #: ../objectstorage_features.rst:16 msgid "" "Large and flat namespace, highly scalable read/write access, able to serve " "content directly from storage system." msgstr "" #: ../objectstorage_features.rst:18 msgid "Multi-dimensional scalability" msgstr "" #: ../objectstorage_features.rst:19 msgid "" "Scale-out architecture: Scale vertically and horizontally-distributed " "storage. Backs up and archives large amounts of data with linear performance." msgstr "" #: ../objectstorage_features.rst:22 msgid "Account/container/object structure" msgstr "" #: ../objectstorage_features.rst:23 msgid "" "No nesting, not a traditional file system: Optimized for scale, it scales to " "multiple petabytes and billions of objects." 
msgstr "" #: ../objectstorage_features.rst:25 msgid "Built-in replication 3✕ + data redundancy (compared with 2✕ on RAID)" msgstr "" #: ../objectstorage_features.rst:27 msgid "" "A configurable number of accounts, containers and object copies for high " "availability." msgstr "" #: ../objectstorage_features.rst:29 msgid "Easily add capacity (unlike RAID resize)" msgstr "" #: ../objectstorage_features.rst:30 msgid "Elastic data scaling with ease." msgstr "" #: ../objectstorage_features.rst:31 msgid "No central database" msgstr "" #: ../objectstorage_features.rst:32 msgid "Higher performance, no bottlenecks." msgstr "" #: ../objectstorage_features.rst:33 msgid "RAID not required" msgstr "" #: ../objectstorage_features.rst:34 msgid "Handle many small, random reads and writes efficiently." msgstr "" #: ../objectstorage_features.rst:35 msgid "Built-in management utilities" msgstr "" #: ../objectstorage_features.rst:36 msgid "" "Account management: Create, add, verify, and delete users; Container " "management: Upload, download, and verify; Monitoring: Capacity, host, " "network, log trawling, and cluster health." msgstr "" #: ../objectstorage_features.rst:39 msgid "Drive auditing" msgstr "" #: ../objectstorage_features.rst:40 msgid "Detect drive failures preempting data corruption." msgstr "" #: ../objectstorage_features.rst:41 msgid "Expiring objects" msgstr "" #: ../objectstorage_features.rst:42 msgid "" "Users can set an expiration time or a TTL on an object to control access." msgstr "" #: ../objectstorage_features.rst:44 msgid "Direct object access" msgstr "" #: ../objectstorage_features.rst:45 msgid "Enable direct browser access to content, such as for a control panel." msgstr "" #: ../objectstorage_features.rst:47 msgid "Realtime visibility into client requests" msgstr "" #: ../objectstorage_features.rst:48 msgid "Know what users are requesting." 
msgstr "" #: ../objectstorage_features.rst:49 msgid "Supports S3 API" msgstr "" #: ../objectstorage_features.rst:50 msgid "Utilize tools that were designed for the popular S3 API." msgstr "" #: ../objectstorage_features.rst:51 msgid "Restrict containers per account" msgstr "" #: ../objectstorage_features.rst:52 msgid "Limit access to control usage by user." msgstr "" #: ../objectstorage_features.rst:53 msgid "Support for NetApp, Nexenta, Solidfire" msgstr "" #: ../objectstorage_features.rst:54 msgid "Unified support for block volumes using a variety of storage systems." msgstr "" #: ../objectstorage_features.rst:56 msgid "Snapshot and backup API for block volumes." msgstr "" #: ../objectstorage_features.rst:57 msgid "Data protection and recovery for VM data." msgstr "" #: ../objectstorage_features.rst:58 msgid "Standalone volume API available" msgstr "" #: ../objectstorage_features.rst:59 msgid "Separate endpoint and API for integration with other compute systems." msgstr "" #: ../objectstorage_features.rst:61 msgid "Integration with Compute" msgstr "" #: ../objectstorage_features.rst:62 msgid "" "Fully integrated with Compute for attaching block volumes and reporting on " "usage." msgstr "" #: ../objectstorage_intro.rst:3 msgid "Introduction to Object Storage" msgstr "" #: ../objectstorage_intro.rst:5 msgid "" "OpenStack Object Storage (swift) is used for redundant, scalable data " "storage using clusters of standardized servers to store petabytes of " "accessible data. It is a long-term storage system for large amounts of " "static data which can be retrieved and updated. Object Storage uses a " "distributed architecture with no central point of control, providing greater " "scalability, redundancy, and permanence. Objects are written to multiple " "hardware devices, with the OpenStack software responsible for ensuring data " "replication and integrity across the cluster. Storage clusters scale " "horizontally by adding new nodes. 
Should a node fail, OpenStack works to " "replicate its content from other active nodes. Because OpenStack uses " "software logic to ensure data replication and distribution across different " "devices, inexpensive commodity hard drives and servers can be used in lieu " "of more expensive equipment." msgstr "" #: ../objectstorage_intro.rst:20 msgid "" "Object Storage is ideal for cost effective, scale-out storage. It provides a " "fully distributed, API-accessible storage platform that can be integrated " "directly into applications or used for backup, archiving, and data retention." msgstr "" #: ../objectstorage_large-objects.rst:3 msgid "Large object support" msgstr "" #: ../objectstorage_large-objects.rst:5 msgid "" "Object Storage (swift) uses segmentation to support the upload of large " "objects. By default, Object Storage limits the download size of a single " "object to 5 GB. With segmentation, the size of a single uploaded object is " "virtually unlimited. The segmentation process works by fragmenting the " "object, and automatically creating a manifest file that serves the segments " "together as a single object. This option offers greater upload speed with " "the possibility of parallel uploads." msgstr "" #: ../objectstorage_large-objects.rst:14 msgid "Large objects" msgstr "" #: ../objectstorage_large-objects.rst:15 msgid "A large object comprises two types of objects:" msgstr "" #: ../objectstorage_large-objects.rst:17 msgid "" "**Segment objects** store the object content. You can divide your content " "into segments, and upload each segment into its own segment object. Segment " "objects do not have any special features. You create, update, download, and " "delete segment objects just as you would normal objects." msgstr "" #: ../objectstorage_large-objects.rst:23 msgid "" "A **manifest object** links the segment objects into one logical large " "object. 
When you download a manifest object, Object Storage concatenates and " "returns the contents of the segment objects in the response body of the " "request. The manifest object types are:" msgstr "" #: ../objectstorage_large-objects.rst:28 msgid "**Static large objects**" msgstr "" #: ../objectstorage_large-objects.rst:29 msgid "**Dynamic large objects**" msgstr "" #: ../objectstorage_large-objects.rst:31 msgid "" "To find out more information on large object support, see `Large objects " "`_ in the End User Guide, or `Large Object Support `_ in the " "developer documentation." msgstr "" #: ../objectstorage_replication.rst:3 msgid "Replication" msgstr "" #: ../objectstorage_replication.rst:5 msgid "" "Because each replica in Object Storage functions independently and clients " "generally require only a simple majority of nodes to respond to consider an " "operation successful, transient failures like network partitions can quickly " "cause replicas to diverge. These differences are eventually reconciled by " "asynchronous, peer-to-peer replicator processes. The replicator processes " "traverse their local file systems and concurrently perform operations in a " "manner that balances load across physical disks." msgstr "" #: ../objectstorage_replication.rst:14 msgid "" "Replication uses a push model, with records and files generally only being " "copied from local to remote replicas. This is important because data on the " "node might not belong there (as in the case of handoffs and ring changes), " "and a replicator cannot know which data it should pull in from elsewhere in " "the cluster. Any node that contains data must ensure that data gets to where " "it belongs. The ring handles replica placement." msgstr "" #: ../objectstorage_replication.rst:21 msgid "" "To replicate deletions in addition to creations, every deleted record or " "file in the system is marked by a tombstone. 
The replication process cleans " "up tombstones after a time period known as the ``consistency window``. This " "window defines the duration of the replication and how long a transient " "failure can remove a node from the cluster. Tombstone cleanup must be tied " "to replication to reach replica convergence." msgstr "" #: ../objectstorage_replication.rst:28 msgid "" "If a replicator detects that a remote drive has failed, the replicator uses " "the ``get_more_nodes`` interface for the ring to choose an alternate node " "with which to synchronize. The replicator can maintain desired levels of " "replication during disk failures, though some replicas might not be in an " "immediately usable location." msgstr "" #: ../objectstorage_replication.rst:36 msgid "" "The replicator does not maintain desired levels of replication when failures " "such as entire node failures occur; most failures are transient." msgstr "" #: ../objectstorage_replication.rst:40 msgid "The main replication types are:" msgstr "" #: ../objectstorage_replication.rst:43 ../objectstorage_replication.rst:49 msgid "Database replication" msgstr "" #: ../objectstorage_replication.rst:43 msgid "Replicates containers and objects." msgstr "" #: ../objectstorage_replication.rst:46 ../objectstorage_replication.rst:76 msgid "Object replication" msgstr "" #: ../objectstorage_replication.rst:46 msgid "Replicates object data." msgstr "" #: ../objectstorage_replication.rst:51 msgid "" "Database replication completes a low-cost hash comparison to determine " "whether two replicas already match. Normally, this check can quickly verify " "that most databases in the system are already synchronized. If the hashes " "differ, the replicator synchronizes the databases by sharing records added " "since the last synchronization point." 
msgstr "" #: ../objectstorage_replication.rst:57 msgid "" "This synchronization point is a high water mark that notes the last record " "at which two databases were known to be synchronized, and is stored in each " "database as a tuple of the remote database ID and record ID. Database IDs " "are unique across all replicas of the database, and record IDs are " "monotonically increasing integers. After all new records are pushed to the " "remote database, the entire synchronization table of the local database is " "pushed, so the remote database can guarantee that it is synchronized with " "everything with which the local database was previously synchronized." msgstr "" #: ../objectstorage_replication.rst:67 msgid "" "If a replica is missing, the whole local database file is transmitted to the " "peer by using rsync(1) and is assigned a new unique ID." msgstr "" #: ../objectstorage_replication.rst:70 msgid "" "In practice, database replication can process hundreds of databases per " "concurrency setting per second (up to the number of available CPUs or disks) " "and is bound by the number of database transactions that must be performed." msgstr "" #: ../objectstorage_replication.rst:78 msgid "" "The initial implementation of object replication performed an rsync to push " "data from a local partition to all remote servers where it was expected to " "reside. While this worked at small scale, replication times skyrocketed once " "directory structures could no longer be held in RAM. This scheme was " "modified to save a hash of the contents for each suffix directory to a per-" "partition hashes file. The hash for a suffix directory is no longer valid " "when the contents of that suffix directory are modified." msgstr "" #: ../objectstorage_replication.rst:87 msgid "" "The object replication process reads in hash files and calculates any " "invalidated hashes. 
Then, it transmits the hashes to each remote server that " "should hold the partition, and only suffix directories with differing hashes " "on the remote server are rsynced. After pushing files to the remote server, " "the replication process notifies it to recalculate hashes for the rsynced " "suffix directories." msgstr "" #: ../objectstorage_replication.rst:94 msgid "" "The number of uncached directories that object replication must traverse, " "usually as a result of invalidated suffix directory hashes, impedes " "performance. To provide acceptable replication speeds, object replication is " "designed to invalidate around 2 percent of the hash space on a normal node " "each day." msgstr "" #: ../objectstorage_ringbuilder.rst:3 msgid "Ring-builder" msgstr "" #: ../objectstorage_ringbuilder.rst:5 msgid "" "Use the swift-ring-builder utility to build and manage rings. This utility " "assigns partitions to devices and writes an optimized Python structure to a " "gzipped, serialized file on disk for transmission to the servers. The server " "processes occasionally check the modification time of the file and reload in-" "memory copies of the ring structure as needed. If you use a slightly older " "version of the ring, one of the three replicas for a partition subset will " "be incorrect because of the way the ring-builder manages changes to the " "ring. You can work around this issue." msgstr "" #: ../objectstorage_ringbuilder.rst:15 msgid "" "The ring-builder also keeps its own builder file with the ring information " "and additional data required to build future rings. It is very important to " "keep multiple backup copies of these builder files. One option is to copy " "the builder files out to every server while copying the ring files " "themselves. Another is to upload the builder files into the cluster itself. " "If you lose the builder file, you have to create a new ring from scratch. 
" "Nearly all partitions would be assigned to different devices and, therefore, " "nearly all of the stored data would have to be replicated to new locations. " "So, recovery from a builder file loss is possible, but data would be " "unreachable for an extended time." msgstr "" #: ../objectstorage_ringbuilder.rst:27 msgid "Ring data structure" msgstr "" #: ../objectstorage_ringbuilder.rst:29 msgid "" "The ring data structure consists of three top level fields: a list of " "devices in the cluster, a list of lists of device ids indicating partition " "to device assignments, and an integer indicating the number of bits to shift " "an MD5 hash to calculate the partition for the hash." msgstr "" #: ../objectstorage_ringbuilder.rst:35 msgid "Partition assignment list" msgstr "" #: ../objectstorage_ringbuilder.rst:37 msgid "" "This is a list of ``array('H')`` of devices ids. The outermost list contains " "an ``array('H')`` for each replica. Each ``array('H')`` has a length equal " "to the partition count for the ring. Each integer in the ``array('H')`` is " "an index into the above list of devices. The partition list is known " "internally to the Ring class as ``_replica2part2dev_id``." msgstr "" #: ../objectstorage_ringbuilder.rst:43 msgid "" "So, to create a list of device dictionaries assigned to a partition, the " "Python code would look like:" msgstr "" #: ../objectstorage_ringbuilder.rst:51 msgid "" "That code is a little simplistic because it does not account for the removal " "of duplicate devices. If a ring has more replicas than devices, a partition " "will have more than one replica on a device." msgstr "" #: ../objectstorage_ringbuilder.rst:55 msgid "" "``array('H')`` is used for memory conservation as there may be millions of " "partitions." 
msgstr "" #: ../objectstorage_ringbuilder.rst:59 msgid "Overload" msgstr "" #: ../objectstorage_ringbuilder.rst:61 msgid "" "The ring builder tries to keep replicas as far apart as possible while still " "respecting device weights. When it cannot do both, the overload factor " "determines what happens. Each device takes an extra fraction of its desired " "partitions to allow for replica dispersion; after that extra fraction is " "exhausted, replicas are placed closer together than optimal." msgstr "" #: ../objectstorage_ringbuilder.rst:68 msgid "" "The overload factor lets the operator trade off replica dispersion " "(durability) against data dispersion (uniform disk usage)." msgstr "" #: ../objectstorage_ringbuilder.rst:71 msgid "" "The default overload factor is 0, so device weights are strictly followed." msgstr "" #: ../objectstorage_ringbuilder.rst:74 msgid "" "With an overload factor of 0.1, each device accepts 10% more partitions than " "it otherwise would, but only if it needs to maintain partition dispersion." msgstr "" #: ../objectstorage_ringbuilder.rst:78 msgid "" "For example, consider a 3-node cluster of machines with equal-size disks; " "node A has 12 disks, node B has 12 disks, and node C has 11 disks. The ring " "has an overload factor of 0.1 (10%)." msgstr "" #: ../objectstorage_ringbuilder.rst:82 msgid "" "Without the overload, some partitions would end up with replicas only on " "nodes A and B. However, with the overload, every device can accept up to 10% " "more partitions for the sake of dispersion. The missing disk in C means " "there is one disk's worth of partitions to spread across the remaining 11 " "disks, which gives each disk in C an extra 9.09% load. Since this is less " "than the 10% overload, there is one replica of each partition on each node." msgstr "" #: ../objectstorage_ringbuilder.rst:90 msgid "" "However, this does mean that the disks in node C have more data than the " "disks in nodes A and B. 
If 80% full is the warning threshold for the " "cluster, node C's disks reach 80% full while A and B's disks are only 72.7% " "full." msgstr "" #: ../objectstorage_ringbuilder.rst:97 msgid "Replica counts" msgstr "" #: ../objectstorage_ringbuilder.rst:99 msgid "" "To support the gradual change in replica counts, a ring can have a real " "number of replicas and is not restricted to an integer number of replicas." msgstr "" #: ../objectstorage_ringbuilder.rst:103 msgid "" "A fractional replica count is for the whole ring and not for individual " "partitions. It indicates the average number of replicas for each partition. " "For example, a replica count of 3.2 means that 20 percent of partitions have " "four replicas and 80 percent have three replicas." msgstr "" #: ../objectstorage_ringbuilder.rst:108 msgid "The replica count is adjustable. For example:" msgstr "" #: ../objectstorage_ringbuilder.rst:115 msgid "" "You must rebalance the replica ring in globally distributed clusters. " "Operators of these clusters generally want an equal number of replicas and " "regions. Therefore, when an operator adds or removes a region, the operator " "adds or removes a replica. Removing unneeded replicas saves on the cost of " "disks." msgstr "" #: ../objectstorage_ringbuilder.rst:121 msgid "" "You can gradually increase the replica count at a rate that does not " "adversely affect cluster performance. For example:" msgstr "" #: ../objectstorage_ringbuilder.rst:134 msgid "" "Changes take effect after the ring is rebalanced. Therefore, if you intend " "to change from 3 replicas to 3.01 but you accidentally type 2.01, no data is " "lost." msgstr "" #: ../objectstorage_ringbuilder.rst:138 msgid "" "Additionally, the :command:`swift-ring-builder X.builder create` command can " "now take a decimal argument for the number of replicas." 
msgstr "" #: ../objectstorage_ringbuilder.rst:142 msgid "Partition shift value" msgstr "" #: ../objectstorage_ringbuilder.rst:144 msgid "" "The partition shift value is known internally to the Ring class as " "``_part_shift``. This value is used to shift an MD5 hash to calculate the " "partition where the data for that hash should reside. Only the top four " "bytes of the hash are used in this process. For example, to compute the " "partition for the ``/account/container/object`` path using Python:" msgstr "" #: ../objectstorage_ringbuilder.rst:156 msgid "" "For a ring generated with part\\_power P, the partition shift value is ``32 " "- P``." msgstr "" #: ../objectstorage_ringbuilder.rst:160 msgid "Build the ring" msgstr "" #: ../objectstorage_ringbuilder.rst:162 msgid "The ring builder process includes these high-level steps:" msgstr "" #: ../objectstorage_ringbuilder.rst:164 msgid "" "The utility calculates the number of partitions to assign to each device " "based on the weight of the device. For example, for a partition power " "of 20, the ring has 1,048,576 partitions. One thousand devices of equal " "weight each want 1,048.576 partitions. The devices are sorted by the number " "of partitions they desire and kept in order throughout the initialization " "process." msgstr "" #: ../objectstorage_ringbuilder.rst:173 msgid "" "Each device is also assigned a random tiebreaker value that is used when two " "devices desire the same number of partitions. This tiebreaker is not stored " "on disk anywhere, and so two different rings created with the same " "parameters will have different partition assignments. For repeatable " "partition assignments, ``RingBuilder.rebalance()`` takes an optional seed " "value that seeds the Python pseudo-random number generator." 
msgstr "" #: ../objectstorage_ringbuilder.rst:181 msgid "" "The ring builder assigns each partition replica to the device that requires " "the most partitions at that point while keeping it as far away as possible " "from other replicas. The ring builder prefers to assign a replica to a " "device in a region that does not already have a replica. If no such region " "is available, the ring builder searches for a device in a different zone, or " "on a different server. If it does not find one, it looks for a device with " "no replicas. Finally, if all options are exhausted, the ring builder assigns " "the replica to the device that has the fewest replicas already assigned." msgstr "" #: ../objectstorage_ringbuilder.rst:193 msgid "" "The ring builder assigns multiple replicas to one device only if the ring " "has fewer devices than it has replicas." msgstr "" #: ../objectstorage_ringbuilder.rst:196 msgid "" "When building a new ring from an old ring, the ring builder recalculates the " "desired number of partitions that each device wants." msgstr "" #: ../objectstorage_ringbuilder.rst:199 msgid "" "The ring builder unassigns partitions and gathers these partitions for " "reassignment, as follows:" msgstr "" #: ../objectstorage_ringbuilder.rst:202 msgid "" "The ring builder unassigns any assigned partitions from any removed devices " "and adds these partitions to the gathered list." msgstr "" #: ../objectstorage_ringbuilder.rst:204 msgid "" "The ring builder unassigns any partition replicas that can be spread out for " "better durability and adds these partitions to the gathered list." msgstr "" #: ../objectstorage_ringbuilder.rst:207 msgid "" "The ring builder unassigns random partitions from any devices that have more " "partitions than they need and adds these partitions to the gathered list." 
msgstr "" #: ../objectstorage_ringbuilder.rst:211 msgid "" "The ring builder reassigns the gathered partitions to devices by using a " "similar method to the one described previously." msgstr "" #: ../objectstorage_ringbuilder.rst:214 msgid "" "When the ring builder reassigns a replica to a partition, the ring builder " "records the time of the reassignment. The ring builder uses this value when " "it gathers partitions for reassignment so that no partition is moved twice " "in a configurable amount of time. The RingBuilder class knows this " "configurable amount of time as ``min_part_hours``. The ring builder ignores " "this restriction for replicas of partitions on removed devices because " "removal of a device happens on device failure only, and reassignment is the " "only choice." msgstr "" #: ../objectstorage_ringbuilder.rst:223 msgid "" "These steps do not always perfectly rebalance a ring due to the random " "nature of gathering partitions for reassignment. To help reach a more " "balanced ring, the rebalance process is repeated until near perfect (less " "than 1 percent off) or when the balance does not improve by at least 1 " "percent (indicating we probably cannot get perfect balance due to wildly " "imbalanced zones or too many partitions recently moved)." msgstr "" #: ../objectstorage_tenant_specific_image_storage.rst:3 msgid "Configure tenant-specific image locations with Object Storage" msgstr "" #: ../objectstorage_tenant_specific_image_storage.rst:5 msgid "" "For some deployers, it is not ideal to store all images in one place to " "enable all tenants and users to access them. You can configure the Image " "service to store image data in tenant-specific image locations. 
Then, only " "the following tenants can use the Image service to access the created image:" msgstr "" #: ../objectstorage_tenant_specific_image_storage.rst:11 msgid "The tenant who owns the image" msgstr "" #: ../objectstorage_tenant_specific_image_storage.rst:12 msgid "" "Tenants that are defined in ``swift_store_admin_tenants`` and that have " "admin-level accounts" msgstr "" #: ../objectstorage_tenant_specific_image_storage.rst:15 msgid "**To configure tenant-specific image locations**" msgstr "" #: ../objectstorage_tenant_specific_image_storage.rst:17 msgid "" "Configure swift as your ``default_store`` in the ``glance-api.conf`` file." msgstr "" #: ../objectstorage_tenant_specific_image_storage.rst:20 msgid "Set these configuration options in the ``glance-api.conf`` file:" msgstr "" #: ../objectstorage_tenant_specific_image_storage.rst:23 msgid "" "Set to ``True`` to enable tenant-specific storage locations. Default is " "``False``." msgstr "" #: ../objectstorage_tenant_specific_image_storage.rst:24 msgid "swift_store_multi_tenant" msgstr "" #: ../objectstorage_tenant_specific_image_storage.rst:27 msgid "" "Specify a list of tenant IDs that can grant read and write access to all " "Object Storage containers that are created by the Image service." msgstr "" #: ../objectstorage_tenant_specific_image_storage.rst:28 msgid "swift_store_admin_tenants" msgstr "" #: ../objectstorage_tenant_specific_image_storage.rst:30 msgid "" "With this configuration, images are stored in an Object Storage service " "(swift) endpoint that is pulled from the service catalog for the " "authenticated user." msgstr "" #: ../orchestration-auth-model.rst:5 msgid "Orchestration authorization model" msgstr "" #: ../orchestration-auth-model.rst:8 msgid "" "The Orchestration authorization model defines the authorization process for " "requests during deferred operations. A common example is an auto-scaling " "group update. 
During the auto-scaling update operation, the Orchestration " "service requests resources of other components (such as servers from Compute " "or networks from Networking) to extend or reduce the capacity of an auto-" "scaling group." msgstr "" #: ../orchestration-auth-model.rst:16 msgid "The Orchestration service provides the following authorization models:" msgstr "" #: ../orchestration-auth-model.rst:18 ../orchestration-auth-model.rst:23 msgid "Password authorization" msgstr "" #: ../orchestration-auth-model.rst:20 ../orchestration-auth-model.rst:52 msgid "OpenStack Identity trusts authorization" msgstr "" #: ../orchestration-auth-model.rst:25 msgid "" "The Orchestration service supports password authorization. Password " "authorization requires that a user pass a username and password to the " "Orchestration service. Encrypted passwords are stored in the database, and " "used for deferred operations." msgstr "" #: ../orchestration-auth-model.rst:31 msgid "Password authorization involves the following steps:" msgstr "" #: ../orchestration-auth-model.rst:33 msgid "" "A user requests stack creation by providing a token, username, and " "password. The Dashboard or python-heatclient requests the token on the " "user's behalf." msgstr "" #: ../orchestration-auth-model.rst:37 msgid "" "If the stack contains any resources that require deferred operations, then " "the orchestration engine fails its validation checks if the user did not " "provide a valid username/password." msgstr "" #: ../orchestration-auth-model.rst:41 msgid "" "The username/password are encrypted and stored in the Orchestration database." msgstr "" #: ../orchestration-auth-model.rst:44 msgid "Orchestration creates a stack." msgstr "" #: ../orchestration-auth-model.rst:46 msgid "" "Later, the Orchestration service retrieves the credentials and requests " "another token on behalf of the user. The token is not limited in scope and " "provides access to all the roles of the stack owner." 
msgstr "" #: ../orchestration-auth-model.rst:54 msgid "" "A trust is an OpenStack Identity extension that enables delegation, and " "optionally impersonation through the OpenStack Identity service. The key " "terminology is *trustor* (the user delegating) and *trustee* (the user being " "delegated to)." msgstr "" #: ../orchestration-auth-model.rst:59 msgid "" "To create a trust, the *trustor* (in this case, the user creating the stack " "in the Orchestration service) provides the OpenStack Identity service with " "the following information:" msgstr "" #: ../orchestration-auth-model.rst:63 msgid "" "The ID of the *trustee* (who you want to delegate to, in this case, the " "Orchestration service user)." msgstr "" #: ../orchestration-auth-model.rst:66 msgid "" "The roles to be delegated. Configure roles through the ``heat.conf`` file. " "Ensure the configuration contains whatever roles are required to perform the " "deferred operations on the user's behalf. For example, launching an " "OpenStack Compute instance in response to an auto-scaling event." msgstr "" #: ../orchestration-auth-model.rst:72 msgid "Whether to enable impersonation." msgstr "" #: ../orchestration-auth-model.rst:74 msgid "" "The OpenStack Identity service provides a *trust id*, which is consumed by " "*only* the trustee to obtain a *trust scoped token*. This token is limited " "in scope, such that the trustee has limited access to those roles delegated. " "In addition, the trustee has effective impersonation of the trustor user if " "it was selected when creating the trust. For more information, see the :ref:" "`Identity management ` section." msgstr "" #: ../orchestration-auth-model.rst:83 msgid "Trusts authorization involves the following steps:" msgstr "" #: ../orchestration-auth-model.rst:85 msgid "" "A user creates a stack through an API request (only the token is required)." 
msgstr "" #: ../orchestration-auth-model.rst:88 msgid "" "The Orchestration service uses the token to create a trust between the stack " "owner (trustor) and the Orchestration service user (trustee). The service " "delegates a special role (or roles) as defined in the " "*trusts_delegated_roles* list in the Orchestration configuration file. By " "default, the Orchestration service makes all of the trustor's roles " "available to the trustee. Deployers might modify this list to reflect a " "local RBAC policy, for example, to ensure that the heat process can access " "only those services that are expected while impersonating a stack owner." msgstr "" #: ../orchestration-auth-model.rst:98 msgid "" "Orchestration stores the encrypted *trust id* in the Orchestration database." msgstr "" #: ../orchestration-auth-model.rst:101 msgid "" "When a deferred operation is required, the Orchestration service retrieves " "the *trust id* and requests a trust scoped token which enables the service " "user to impersonate the stack owner during the deferred operation. " "Impersonation is helpful, for example, so the service user can launch " "Compute instances on behalf of the stack owner in response to an auto-" "scaling event." msgstr "" #: ../orchestration-auth-model.rst:109 msgid "Authorization model configuration" msgstr "" #: ../orchestration-auth-model.rst:111 msgid "" "Initially, the password authorization model was the default authorization " "model. Since the Kilo release, the Identity trusts authorization model is " "enabled for the Orchestration service by default." 
msgstr "" #: ../orchestration-auth-model.rst:116 msgid "" "To enable the password authorization model, change the following parameter " "in the ``heat.conf`` file:" msgstr "" #: ../orchestration-auth-model.rst:123 msgid "" "To enable the trusts authorization model, change the following parameter in " "the ``heat.conf`` file:" msgstr "" #: ../orchestration-auth-model.rst:130 msgid "" "To specify the trustor roles that it delegates to trustee during " "authorization, specify the ``trusts_delegated_roles`` parameter in the " "``heat.conf`` file. If ``trusts_delegated_roles`` is not defined, then all " "the trustor roles are delegated to trustee." msgstr "" #: ../orchestration-auth-model.rst:137 msgid "" "The trustor delegated roles must be pre-configured in the OpenStack Identity " "service before using them in the Orchestration service." msgstr "" #: ../orchestration-introduction.rst:5 msgid "" "The OpenStack Orchestration service, a tool for orchestrating clouds, " "automatically configures and deploys resources in stacks. The deployments " "can be simple, such as deploying WordPress on Ubuntu with an SQL back end, " "or complex, such as starting a server group that auto scales by starting and " "stopping using real-time CPU loading information from the Telemetry service." msgstr "" #: ../orchestration-introduction.rst:12 msgid "" "Orchestration stacks are defined with templates, which are non-procedural " "documents. Templates describe tasks in terms of resources, parameters, " "inputs, constraints, and dependencies. When the Orchestration service was " "originally introduced, it worked with AWS CloudFormation templates, which " "are in the JSON format." msgstr "" #: ../orchestration-introduction.rst:18 msgid "" "The Orchestration service also runs Heat Orchestration Template (HOT) " "templates that are written in YAML. YAML is a terse notation that loosely " "follows structural conventions (colons, returns, indentation) that are " "similar to Python or Ruby. 
Therefore, it is easier to write, parse, grep, " "generate with tools, and maintain with source-code management systems." msgstr "" #: ../orchestration-introduction.rst:24 msgid "" "Orchestration can be accessed through a CLI and RESTful queries. The " "Orchestration service provides both an OpenStack-native REST API and a " "CloudFormation-compatible Query API. The Orchestration service is also " "integrated with the OpenStack dashboard to perform stack functions through a " "web interface." msgstr "" #: ../orchestration-introduction.rst:30 msgid "" "For more information about using the Orchestration service through the " "command line, see the `OpenStack Command-Line Interface Reference `_." msgstr "" #: ../orchestration-stack-domain-users.rst:5 msgid "Stack domain users" msgstr "" #: ../orchestration-stack-domain-users.rst:7 msgid "" "Stack domain users allow the Orchestration service to authorize and start " "the following operations within booted virtual machines:" msgstr "" #: ../orchestration-stack-domain-users.rst:11 msgid "" "Provide metadata to agents inside instances. Agents poll for changes and " "apply the configuration that is expressed in the metadata to the instance." msgstr "" #: ../orchestration-stack-domain-users.rst:15 msgid "" "Detect when an action is complete. Typically, this action is software " "configuration on a virtual machine after it is booted. Compute moves the VM " "state to \"Active\" as soon as it creates it, not when the Orchestration " "service has fully configured it." msgstr "" #: ../orchestration-stack-domain-users.rst:20 msgid "" "Provide application level status or meters from inside the instance. For " "example, allow auto-scaling actions to be performed in response to some " "measure of performance or quality of service." msgstr "" #: ../orchestration-stack-domain-users.rst:24 msgid "" "The Orchestration service provides APIs that enable all of these operations, " "but all of those APIs require authentication. 
For example, credentials are needed to " "access the instance that the agent is running upon. The heat-cfntools agents " "use signed requests, which require an ec2 key pair created through Identity. " "The key pair is then used to sign requests to the Orchestration " "CloudFormation and CloudWatch compatible APIs, which are authenticated " "through signature validation. Signature validation uses the Identity " "ec2tokens extension." msgstr "" #: ../orchestration-stack-domain-users.rst:34 msgid "" "Stack domain users encapsulate all stack-defined users (users who are " "created as a result of data that is contained in an Orchestration template) " "in a separate domain. The separate domain is created specifically to contain " "data related to the Orchestration stacks only. A user is created, which is " "the *domain admin*, and Orchestration uses the *domain admin* to manage the " "lifecycle of the users in the stack *user domain*." msgstr "" #: ../orchestration-stack-domain-users.rst:43 msgid "Stack domain users configuration" msgstr "" #: ../orchestration-stack-domain-users.rst:45 msgid "" "To configure stack domain users, the Orchestration service completes the " "following tasks:" msgstr "" #: ../orchestration-stack-domain-users.rst:48 msgid "" "A special OpenStack Identity service domain is created. For example, a " "domain called ``heat``, whose ID is set with the " "``stack_user_domain`` option in the :file:`heat.conf` file." msgstr "" #: ../orchestration-stack-domain-users.rst:51 msgid "" "A user with sufficient permissions to create and delete projects and users " "in the ``heat`` domain is created." msgstr "" #: ../orchestration-stack-domain-users.rst:53 msgid "" "The username and password for the domain admin user are set in the :file:" "`heat.conf` file (``stack_domain_admin`` and " "``stack_domain_admin_password``). This user administers *stack domain users* " "on behalf of stack owners, so they no longer need to be administrators " "themselves. 
The risk of this escalation path is limited because the " "``heat_domain_admin`` is only given administrative permission for the " "``heat`` domain." msgstr "" #: ../orchestration-stack-domain-users.rst:61 msgid "To set up stack domain users, complete the following steps:" msgstr "" #: ../orchestration-stack-domain-users.rst:63 msgid "Create the domain:" msgstr "" #: ../orchestration-stack-domain-users.rst:65 msgid "" "``$OS_TOKEN`` refers to a token. For example, the service admin token or " "some other valid token for a user with sufficient roles to create users and " "domains. ``$KS_ENDPOINT_V3`` refers to the v3 OpenStack Identity endpoint " "(for example, ``http://keystone_address:5000/v3`` where *keystone_address* " "is the IP address or resolvable name for the Identity service)." msgstr "" #: ../orchestration-stack-domain-users.rst:79 msgid "" "The domain ID is returned by this command, and is referred to as ``" "$HEAT_DOMAIN_ID`` below." msgstr "" #: ../orchestration-stack-domain-users.rst:82 msgid "Create the user:" msgstr "" #: ../orchestration-stack-domain-users.rst:91 msgid "" "The user ID is returned by this command and is referred to as ``" "$DOMAIN_ADMIN_ID`` below." msgstr "" #: ../orchestration-stack-domain-users.rst:94 msgid "Make the user a domain admin:" msgstr "" #: ../orchestration-stack-domain-users.rst:102 msgid "" "Then you must add the domain ID, username and password from these steps to " "the :file:`heat.conf` file:" msgstr "" #: ../orchestration-stack-domain-users.rst:112 msgid "Usage workflow" msgstr "" #: ../orchestration-stack-domain-users.rst:114 msgid "The following steps are run during stack creation:" msgstr "" #: ../orchestration-stack-domain-users.rst:116 msgid "" "Orchestration creates a new *stack domain project* in the ``heat`` domain if " "the stack contains any resources that require creation of a *stack domain " "user*." 
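The :file:`heat.conf` settings collected in the steps above can be sketched as follows; the domain ID, username, and password are placeholders for the values returned by your own setup:

```ini
# Illustrative /etc/heat/heat.conf fragment -- substitute your own values
[DEFAULT]
# ID of the Identity domain created for stack domain users
stack_user_domain = <HEAT_DOMAIN_ID>
# Domain admin credentials used to manage users in that domain
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = <password>
```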
msgstr "" #: ../orchestration-stack-domain-users.rst:120 msgid "" "For any resources that require a user, the Orchestration service creates the " "user in the *stack domain project*. The *stack domain project* is associated " "with the Orchestration stack in the Orchestration database, but is separate " "and unrelated (from an authentication perspective) to the stack owner's " "project. The users who are created in the stack domain are still assigned " "the ``heat_stack_user`` role, so the API surface they can access is limited " "through the :file:`policy.json` file. For more information, see :ref:" "`OpenStack Identity documentation `." msgstr "" #: ../orchestration-stack-domain-users.rst:131 msgid "" "When API requests are processed, the Orchestration service performs an " "internal lookup, and allows stack details for a given stack to be retrieved. " "Details are retrieved from the database for both the stack owner's project " "(the default API path to the stack) and the stack domain project, subject to " "the :file:`policy.json` restrictions." msgstr "" #: ../orchestration-stack-domain-users.rst:138 msgid "" "This means there are now two paths that can result in the same data being " "retrieved through the Orchestration API. The following example is for " "resource-metadata::" msgstr "" #: ../orchestration-stack-domain-users.rst:145 msgid "or::" msgstr "" #: ../orchestration-stack-domain-users.rst:150 msgid "" "The stack owner uses the former (via ``heat resource-metadata {stack_name} " "{resource_name}``), and any agents in the instance use the latter." msgstr "" #: ../orchestration.rst:5 msgid "Orchestration" msgstr "" #: ../orchestration.rst:7 msgid "" "Orchestration is an orchestration engine that makes it possible to launch " "multiple composite cloud applications based on templates in the form of " "text files that can be treated like code. 
A native Heat Orchestration " "Template (HOT) format is evolving, but it also endeavors to provide " "compatibility with the AWS CloudFormation template format, so that many " "existing CloudFormation templates can be launched on OpenStack." msgstr "" #: ../shared_file_systems.rst:5 msgid "Shared File Systems" msgstr "" #: ../shared_file_systems.rst:7 msgid "" "The Shared File Systems service provides a set of services for managing " "shared file systems in a multi-tenant cloud environment. The service " "resembles the block-based storage management of the OpenStack Block " "Storage service project. With the Shared File Systems service, you can " "create a remote file system, mount the file system on your instances, and " "then read and write data from your instances to and from your file system." msgstr "" #: ../shared_file_systems.rst:14 msgid "" "The Shared File Systems service serves the same purpose as the Amazon " "Elastic File System (EFS) does." msgstr "" #: ../shared_file_systems_cgroups.rst:7 msgid "" "Consistency groups enable you to create snapshots from multiple file system " "shares at the same point in time. For example, a database might place its " "tables, logs, and configurations on separate shares. Store logs, tables, and " "configurations at the same point in time to effectively restore a database." msgstr "" #: ../shared_file_systems_cgroups.rst:13 msgid "" "The Shared File Systems service allows you to create a snapshot of the " "consistency group and restore all shares that were associated with a " "consistency group." msgstr "" #: ../shared_file_systems_cgroups.rst:19 msgid "" "The **consistency groups and snapshots** are an **experimental** Shared File " "Systems API in the Liberty release. Contributors can change or remove the " "experimental part of the Shared File Systems API in further releases without " "maintaining backward compatibility. 
Experimental APIs have an ``X-OpenStack-" "Manila-API-Experimental: true`` header in their HTTP requests." msgstr "" #: ../shared_file_systems_cgroups.rst:32 msgid "" "Before using consistency groups, make sure the Shared File System driver " "that you are running has consistency group support. You can check it in the " "``manila-scheduler`` service reports. The ``consistency_group_support`` can " "have the following values:" msgstr "" #: ../shared_file_systems_cgroups.rst:37 msgid "" "``pool`` or ``host``. Consistency groups are supported. Specifies the level " "of consistency groups support." msgstr "" #: ../shared_file_systems_cgroups.rst:40 msgid "``false``. Consistency groups are not supported." msgstr "" #: ../shared_file_systems_cgroups.rst:42 msgid "" "The :command:`manila cg-create` command creates a new consistency group. " "With this command, you can specify a share network, and one or more share " "types. In the example a consistency group ``cgroup1`` was created by " "specifying two comma-separated share types:" msgstr "" #: ../shared_file_systems_cgroups.rst:67 msgid "Check that consistency group status is ``available``:" msgstr "" #: ../shared_file_systems_cgroups.rst:89 msgid "" "To add a share to the consistency group, create a share by adding the :" "option:`--consistency-group` option where you specify the ID of the " "consistency group in ``available`` status:" msgstr "" #: ../shared_file_systems_cgroups.rst:125 msgid "" "Administrators can rename the consistency group, or change its description " "using the :command:`manila cg-update` command. Delete the group with the :" "command:`manila cg-delete` command." msgstr "" #: ../shared_file_systems_cgroups.rst:129 msgid "" "As an administrator, you can also reset the state of a consistency group and " "force delete a specified consistency group in any state. Use the ``policy." "json`` file to grant permissions for these actions to other roles." 
msgstr "" #: ../shared_file_systems_cgroups.rst:133 msgid "" "Use :command:`manila cg-reset-state [--state ] ` " "to update the state of a consistency group explicitly. Valid values for the " "status are ``available``, ``error``, ``creating``, ``deleting``, and " "``error_deleting``. If no state is provided, ``available`` will be used." msgstr "" #: ../shared_file_systems_cgroups.rst:142 msgid "" "Use :command:`manila cg-delete " "[ ...]` to soft-delete one or more consistency groups." msgstr "" #: ../shared_file_systems_cgroups.rst:147 msgid "" "A consistency group can be deleted only if it has no dependent :ref:" "`shared_file_systems_cgsnapshots`." msgstr "" #: ../shared_file_systems_cgroups.rst:154 msgid "" "Use :command:`manila cg-delete --force " "[ ...]` to force-delete a specified consistency group in " "any state." msgstr "" #: ../shared_file_systems_cgroups.rst:165 msgid "Consistency group snapshots" msgstr "" #: ../shared_file_systems_cgroups.rst:167 msgid "" "To create a snapshot, specify the ID or name of the consistency group. After " "creating a consistency group snapshot, it is possible to generate a new " "consistency group." msgstr "" #: ../shared_file_systems_cgroups.rst:171 msgid "Create a snapshot of consistency group ``cgroup1``:" msgstr "" #: ../shared_file_systems_cgroups.rst:188 msgid "Check the status of the created consistency group snapshot:" msgstr "" #: ../shared_file_systems_cgroups.rst:205 msgid "" "Administrators can rename a consistency group snapshot, change its " "description using the :command:`cg-snapshot-update` command, or delete it " "with the :command:`cg-snapshot-delete` command." msgstr "" #: ../shared_file_systems_cgroups.rst:209 msgid "" "A consistency group snapshot can have ``members``. To add a member, include " "the :option:`--consistency-group` optional parameter in the create share " "command. This ID must match the ID of the consistency group from which the " "consistency group snapshot was created. 
Then, while restoring data and " "operating with consistency group snapshots, you can quickly find which " "shares belong to a specified consistency group." msgstr "" #: ../shared_file_systems_cgroups.rst:216 msgid "" "You created the share ``Share2`` in the ``cgroup1`` consistency group. Since " "you made a snapshot of it, you can see that the only member of the " "consistency group snapshot is the ``Share2`` share:" msgstr "" #: ../shared_file_systems_cgroups.rst:229 msgid "" "After you create a consistency group snapshot, you can create a consistency " "group from the new snapshot:" msgstr "" #: ../shared_file_systems_cgroups.rst:252 msgid "Check the consistency group list. Two groups now appear:" msgstr "" #: ../shared_file_systems_cgroups.rst:264 msgid "" "Check the list of shares. A new share with ID ``ba52454e-2ea3-47fa-" "a683-3176a01295e6`` appeared after the consistency group ``cgroup2`` was " "built from a snapshot with a member." msgstr "" #: ../shared_file_systems_cgroups.rst:278 msgid "Print detailed information about the new share:" msgstr "" #: ../shared_file_systems_cgroups.rst:282 msgid "" "Pay attention to the ``source_cgsnapshot_member_id`` and " "``consistency_group_id`` fields in the new share. Its " "``source_cgsnapshot_member_id`` is equal to the ID of the consistency " "group snapshot, and its ``consistency_group_id`` is equal to the ID of " "``cgroup2`` created from the snapshot." msgstr "" #: ../shared_file_systems_cgroups.rst:318 msgid "" "As an administrator, you can also reset the state of a consistency group " "snapshot with the :command:`cg-snapshot-reset-state` command, and force " "delete a specified consistency group snapshot in any state using the :" "command:`cg-snapshot-delete` command with the :option:`--force` key. Use the " "``policy.json`` file to grant permissions for these actions to other roles." 
msgstr "" #: ../shared_file_systems_crud_share.rst:5 msgid "Share basic operations" msgstr "" #: ../shared_file_systems_crud_share.rst:8 msgid "General concepts" msgstr "" #: ../shared_file_systems_crud_share.rst:10 msgid "" "To create a file share, and access it, the following general concepts are " "prerequisite knowledge:" msgstr "" #: ../shared_file_systems_crud_share.rst:13 #: ../shared_file_systems_crud_share.rst:89 #: ../shared_file_systems_crud_share.rst:216 msgid "" "To create a share, use the :command:`manila create` command and specify the " "required arguments: the size of the share and the shared file system " "protocol. The ``NFS``, ``CIFS``, ``GlusterFS``, ``HDFS``, and ``CephFS`` " "shared file system protocols are supported." msgstr "" #: ../shared_file_systems_crud_share.rst:18 msgid "You can also optionally specify the share network and the share type." msgstr "" #: ../shared_file_systems_crud_share.rst:20 #: ../shared_file_systems_crud_share.rst:111 msgid "" "After the share becomes available, use the :command:`manila show` command to " "get the share export locations." msgstr "" #: ../shared_file_systems_crud_share.rst:23 msgid "" "After getting the share export locations, you can create an :ref:`access " "rule ` for the share, mount it and work with files on the " "remote file system." msgstr "" #: ../shared_file_systems_crud_share.rst:27 msgid "" "There are a large number of share drivers, created by different vendors, in " "the Shared File Systems service. Each share driver is a Python class that " "can be set for a :ref:`back end ` and run " "in the back end to manage the share operations." 
msgstr "" #: ../shared_file_systems_crud_share.rst:32 msgid "Initially there are two driver modes for the back ends:" msgstr "" #: ../shared_file_systems_crud_share.rst:34 msgid "no share servers mode" msgstr "" #: ../shared_file_systems_crud_share.rst:35 msgid "share servers mode" msgstr "" #: ../shared_file_systems_crud_share.rst:37 msgid "" "Each share driver supports one or both of the possible back-end modes that " "can be configured in the ``manila.conf`` file. The configuration option " "``driver_handles_share_servers`` in the ``manila.conf`` file sets the share " "servers mode or no share servers mode, and defines the driver mode for share " "storage lifecycle management:" msgstr "" #: ../shared_file_systems_crud_share.rst:44 msgid "Config option" msgstr "" #: ../shared_file_systems_crud_share.rst:44 msgid "Mode" msgstr "" #: ../shared_file_systems_crud_share.rst:46 msgid "" "An administrator, rather than a share driver, manages the bare metal storage " "through some network interface; no share servers are present." msgstr "" #: ../shared_file_systems_crud_share.rst:46 msgid "driver_handles_share_servers = False" msgstr "" #: ../shared_file_systems_crud_share.rst:46 msgid "no share servers" msgstr "" #: ../shared_file_systems_crud_share.rst:55 msgid "" "The share driver creates the share server and manages, or handles, the share " "server life cycle." msgstr "" #: ../shared_file_systems_crud_share.rst:55 msgid "driver_handles_share_servers = True" msgstr "" #: ../shared_file_systems_crud_share.rst:55 msgid "share servers" msgstr "" #: ../shared_file_systems_crud_share.rst:63 msgid "" "It is :ref:`the share types ` which have " "the extra specifications that help the scheduler filter back ends and choose " "the appropriate back end for the user that requested to create a share. The " "required extra boolean specification for each share type is " "``driver_handles_share_servers``. 
As an administrator, you can create the " "share types with the specifications you need. For details of managing the " "share types and configuring the back ends, see :ref:" "`shared_file_systems_share_types` and :ref:" "`shared_file_systems_multi_backend` documentation." msgstr "" #: ../shared_file_systems_crud_share.rst:72 msgid "You can create a share in the two modes described above:" msgstr "" #: ../shared_file_systems_crud_share.rst:74 msgid "" "in no share servers mode, without specifying the share network and " "specifying a share type with the ``driver_handles_share_servers = False`` " "parameter. See subsection :ref:`create_share_in_no_share_server_mode`." msgstr "" #: ../shared_file_systems_crud_share.rst:78 msgid "" "in share servers mode, specifying the share network and a share type " "with the ``driver_handles_share_servers = True`` parameter. See subsection :ref:" "`create_share_in_share_server_mode`." msgstr "" #: ../shared_file_systems_crud_share.rst:85 msgid "Create a share in no share servers mode" msgstr "" #: ../shared_file_systems_crud_share.rst:87 msgid "To create a file share in no share servers mode, you need to:" msgstr "" #: ../shared_file_systems_crud_share.rst:94 msgid "" "You should specify the :ref:`share type ` " "with the ``driver_handles_share_servers = False`` extra specification." msgstr "" #: ../shared_file_systems_crud_share.rst:97 msgid "" "You must not specify the ``share network`` because no share servers are " "created. In this mode the Shared File Systems service expects that the " "administrator has bare metal storage with a network interface." msgstr "" #: ../shared_file_systems_crud_share.rst:101 #: ../shared_file_systems_crud_share.rst:227 msgid "" "The :command:`manila create` command creates a share. 
This command does the " "following things:" msgstr "" #: ../shared_file_systems_crud_share.rst:104 msgid "" "The :ref:`manila-scheduler ` service will " "find the back end with ``driver_handles_share_servers = False`` mode by " "filtering on the extra specifications of the share type." msgstr "" #: ../shared_file_systems_crud_share.rst:108 msgid "" "The share is created using the storage that is specified in the found back " "end." msgstr "" #: ../shared_file_systems_crud_share.rst:114 msgid "" "In the example of creating a share, the already created share type named " "``my_type`` with the ``driver_handles_share_servers = False`` extra " "specification is used." msgstr "" #: ../shared_file_systems_crud_share.rst:118 #: ../shared_file_systems_crud_share.rst:252 msgid "To check the share types that exist, run:" msgstr "" #: ../shared_file_systems_crud_share.rst:129 msgid "" "Create a private share with the ``my_type`` share type, the NFS shared file " "system protocol, and a size of 1 GB:" msgstr "" #: ../shared_file_systems_crud_share.rst:164 msgid "The new share ``Share2`` should have the status ``available``:" msgstr "" #: ../shared_file_systems_crud_share.rst:212 msgid "Create a share in share servers mode" msgstr "" #: ../shared_file_systems_crud_share.rst:214 msgid "To create a file share in share servers mode, you need to:" msgstr "" #: ../shared_file_systems_crud_share.rst:221 msgid "" "You should specify the :ref:`share type ` " "with the ``driver_handles_share_servers = True`` extra specification." msgstr "" #: ../shared_file_systems_crud_share.rst:224 msgid "" "You should specify the :ref:`share network " "`." msgstr "" #: ../shared_file_systems_crud_share.rst:230 msgid "" "The :ref:`manila-scheduler ` service will " "find the back end with ``driver_handles_share_servers = True`` mode by " "filtering on the extra specifications of the share type." msgstr "" #: ../shared_file_systems_crud_share.rst:234 msgid "" "The share driver will create a share server with the share network. 
For " "details of creating the resources, see the `documentation `_ of the " "specific share driver." msgstr "" #: ../shared_file_systems_crud_share.rst:239 msgid "" "After the share becomes available, use the :command:`manila show` command to " "get the share export location." msgstr "" #: ../shared_file_systems_crud_share.rst:242 msgid "" "In the example of creating a share, the default share type and an already " "existing share network are used." msgstr "" #: ../shared_file_systems_crud_share.rst:247 msgid "" "There is no default share type just after you start manila. As the " "administrator, see :ref:`shared_file_systems_share_types` to create the " "default share type. To create a share network, use :ref:" "`shared_file_systems_share_networks`." msgstr "" #: ../shared_file_systems_crud_share.rst:263 msgid "To check the share networks that exist, run:" msgstr "" #: ../shared_file_systems_crud_share.rst:274 msgid "" "Create a public share with the ``my_share_net`` network, the ``default`` " "share type, the NFS shared file system protocol, and a size of 1 GB:" msgstr "" #: ../shared_file_systems_crud_share.rst:315 msgid "" "The share can also be created from a share snapshot. For details, see :ref:" "`shared_file_systems_snapshots`." msgstr "" #: ../shared_file_systems_crud_share.rst:318 msgid "See the share in the share list:" msgstr "" #: ../shared_file_systems_crud_share.rst:330 msgid "" "Check the share status and see the share export locations. After the " "``creating`` status, the share should have the status ``available``:" msgstr "" #: ../shared_file_systems_crud_share.rst:375 msgid "" "``is_public`` defines the level of visibility for the share: whether other " "tenants can or cannot see the share. By default, the share is private." 
msgstr "" #: ../shared_file_systems_crud_share.rst:379 msgid "Update share" msgstr "" #: ../shared_file_systems_crud_share.rst:381 msgid "" "If you need to, update the share name, description, or level of visibility " "for all tenants:" msgstr "" #: ../shared_file_systems_crud_share.rst:428 msgid "A share can have one of these status values:" msgstr "" #: ../shared_file_systems_crud_share.rst:433 msgid "The share is being created." msgstr "" #: ../shared_file_systems_crud_share.rst:433 msgid "creating" msgstr "" #: ../shared_file_systems_crud_share.rst:435 msgid "The share is being deleted." msgstr "" #: ../shared_file_systems_crud_share.rst:435 msgid "deleting" msgstr "" #: ../shared_file_systems_crud_share.rst:437 msgid "An error occurred during share creation." msgstr "" # #-#-#-#-# shared_file_systems_crud_share.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# shared_file_systems_share_replication.pot (Administrator Guide 0.9) #-#-#-#-# #: ../shared_file_systems_crud_share.rst:437 #: ../shared_file_systems_share_replication.rst:101 msgid "error" msgstr "" #: ../shared_file_systems_crud_share.rst:439 msgid "An error occurred during share deletion." msgstr "" #: ../shared_file_systems_crud_share.rst:439 msgid "error_deleting" msgstr "" #: ../shared_file_systems_crud_share.rst:441 msgid "The share is ready to use." msgstr "" #: ../shared_file_systems_crud_share.rst:441 msgid "available" msgstr "" #: ../shared_file_systems_crud_share.rst:443 msgid "Share manage started." msgstr "" #: ../shared_file_systems_crud_share.rst:443 msgid "manage_starting" msgstr "" #: ../shared_file_systems_crud_share.rst:445 msgid "Share manage failed." msgstr "" #: ../shared_file_systems_crud_share.rst:445 msgid "manage_error" msgstr "" #: ../shared_file_systems_crud_share.rst:447 msgid "Share unmanage started." 
msgstr "" #: ../shared_file_systems_crud_share.rst:447 msgid "unmanage_starting" msgstr "" #: ../shared_file_systems_crud_share.rst:449 msgid "Share cannot be unmanaged." msgstr "" #: ../shared_file_systems_crud_share.rst:449 msgid "unmanage_error" msgstr "" #: ../shared_file_systems_crud_share.rst:451 msgid "Share was unmanaged." msgstr "" #: ../shared_file_systems_crud_share.rst:451 msgid "unmanaged" msgstr "" #: ../shared_file_systems_crud_share.rst:453 msgid "The extend, or increase, share size request was issued successfully." msgstr "" #: ../shared_file_systems_crud_share.rst:453 msgid "extending" msgstr "" #: ../shared_file_systems_crud_share.rst:456 msgid "Extend share failed." msgstr "" #: ../shared_file_systems_crud_share.rst:456 msgid "extending_error" msgstr "" #: ../shared_file_systems_crud_share.rst:458 msgid "Share is being shrunk." msgstr "" #: ../shared_file_systems_crud_share.rst:458 msgid "shrinking" msgstr "" #: ../shared_file_systems_crud_share.rst:460 msgid "Failed to update quota on share shrinking." msgstr "" #: ../shared_file_systems_crud_share.rst:460 msgid "shrinking_error" msgstr "" #: ../shared_file_systems_crud_share.rst:463 msgid "Shrink share failed due to possible data loss." msgstr "" #: ../shared_file_systems_crud_share.rst:463 msgid "shrinking_possible_data_loss_error" msgstr "" #: ../shared_file_systems_crud_share.rst:466 msgid "Share migration is in progress." 
msgstr "" #: ../shared_file_systems_crud_share.rst:466 msgid "migrating" msgstr "" #: ../shared_file_systems_crud_share.rst:472 msgid "Share metadata" msgstr "" #: ../shared_file_systems_crud_share.rst:474 msgid "If you want to set the metadata key-value pairs on the share, run:" msgstr "" #: ../shared_file_systems_crud_share.rst:480 msgid "Get all metadata key-value pairs of the share:" msgstr "" #: ../shared_file_systems_crud_share.rst:493 msgid "You can update the metadata:" msgstr "" #: ../shared_file_systems_crud_share.rst:504 msgid "" "You can also unset the metadata using **manila metadata unset " "**." msgstr "" #: ../shared_file_systems_crud_share.rst:508 msgid "Reset share state" msgstr "" #: ../shared_file_systems_crud_share.rst:510 msgid "As an administrator, you can reset the state of a share." msgstr "" #: ../shared_file_systems_crud_share.rst:512 msgid "" "Use the **manila reset-state [--state ] ** command to reset the " "share state, where ``state`` indicates which state to assign to the share. " "Options include the ``available``, ``error``, ``creating``, ``deleting``, " "and ``error_deleting`` states." msgstr "" #: ../shared_file_systems_crud_share.rst:562 msgid "Delete and force-delete share" msgstr "" #: ../shared_file_systems_crud_share.rst:564 msgid "" "You can also force-delete a share. Shares cannot be deleted in " "transitional states. The transitional states are ``creating``, ``deleting``, " "``managing``, ``unmanaging``, ``migrating``, ``extending``, and " "``shrinking`` statuses for the shares. Force-deletion deletes an object in " "any state. Use the ``policy.json`` file to grant permissions for this action " "to other roles." msgstr "" #: ../shared_file_systems_crud_share.rst:573 msgid "" "The ``policy.json`` configuration file may be read from different locations. " "The path ``/etc/manila/policy.json`` is one of the default expected paths." 
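A grant of the reset-state and force-delete actions to an additional role, as described above, might look like the following sketch of a ``policy.json`` fragment; the rule names follow manila's ``<resource>:<action>`` convention but should be checked against your release:

```json
{
    "share:reset_status": "rule:admin_api or role:share_admin",
    "share:force_delete": "rule:admin_api or role:share_admin"
}
```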
msgstr "" #: ../shared_file_systems_crud_share.rst:576 msgid "" "Use the **manila delete ** command to delete a specified " "share:" msgstr "" #: ../shared_file_systems_crud_share.rst:584 msgid "" "If you specified :ref:`the consistency group ` " "while creating a share, you should provide the :option:`--consistency-group` " "parameter to delete the share:" msgstr "" #: ../shared_file_systems_crud_share.rst:593 msgid "" "If you try to delete a share in one of the transitional states using soft-" "deletion, you will get an error:" msgstr "" #: ../shared_file_systems_crud_share.rst:602 msgid "" "A share cannot be deleted in a transitional status, which is why an error " "from ``python-manilaclient`` appeared." msgstr "" #: ../shared_file_systems_crud_share.rst:605 msgid "Print the list of all shares for all tenants:" msgstr "" #: ../shared_file_systems_crud_share.rst:617 msgid "" "To force-delete Share2 and check that it is absent in the list of shares, run:" msgstr "" #: ../shared_file_systems_crud_share.rst:634 msgid "Manage access to share" msgstr "" #: ../shared_file_systems_crud_share.rst:636 msgid "" "The Shared File Systems service allows you to grant or deny access to a " "specified share, and list the permissions for a specified share." msgstr "" #: ../shared_file_systems_crud_share.rst:639 msgid "" "To grant or deny access to a share, specify one of these supported share " "access levels:" msgstr "" #: ../shared_file_systems_crud_share.rst:642 msgid "**rw**. Read and write (RW) access. This is the default value." msgstr "" #: ../shared_file_systems_crud_share.rst:644 msgid "**ro**. Read-only (RO) access." msgstr "" #: ../shared_file_systems_crud_share.rst:646 msgid "You must also specify one of these supported authentication methods:" msgstr "" #: ../shared_file_systems_crud_share.rst:648 msgid "" "**ip**. Authenticates an instance through its IP address. A valid format is " "``XX.XX.XX.XX`` or ``XX.XX.XX.XX/XX``. For example, ``0.0.0.0/0``." 
msgstr "" #: ../shared_file_systems_crud_share.rst:651 msgid "" "**user**. Authenticates by a specified user or group name. A valid value is " "an alphanumeric string that can contain some special characters and is from " "4 to 32 characters long." msgstr "" #: ../shared_file_systems_crud_share.rst:655 msgid "" "**cert**. Authenticates an instance through a TLS certificate. Specify the " "TLS identity as the IDENTKEY. A valid value is any string up to 64 " "characters long in the common name (CN) of the certificate. The meaning of a " "string depends on its interpretation." msgstr "" #: ../shared_file_systems_crud_share.rst:660 msgid "" "**cephx**. Ceph authentication system. Each share has a distinct " "authentication key that must be passed to clients for them to use it." msgstr "" #: ../shared_file_systems_crud_share.rst:663 msgid "" "Try to mount the NFS share with the export path ``10.0.0.4:/shares/" "manila_share_a5fb1ab7_0bbd_465b_ac14_05706294b6e9`` on the node with IP " "address ``10.0.0.13``:" msgstr "" #: ../shared_file_systems_crud_share.rst:675 msgid "" "The error message \"Permission denied\" appeared, so you are not allowed to " "mount the share without an access rule. Allow access to the share with the " "``ip`` access type and the ``10.0.0.13`` IP address:" msgstr "" #: ../shared_file_systems_crud_share.rst:693 msgid "Try to mount the share again. This time it is mounted successfully:" msgstr "" #: ../shared_file_systems_crud_share.rst:699 msgid "" "Since the node at 10.0.0.13 is allowed read and write access, try to create " "a file on the mounted share:" msgstr "" #: ../shared_file_systems_crud_share.rst:709 msgid "" "Connect via SSH to the ``10.0.0.4`` node and check the new file `my_file.txt` " "in the ``/shares/manila_share_a5fb1ab7_0bbd_465b_ac14_05706294b6e9`` " "directory:" msgstr "" #: ../shared_file_systems_crud_share.rst:722 msgid "" "You have successfully created a file from the instance that was given access " "by its IP address." 
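The ``ip`` access type described above matches clients by a single address or a CIDR block. As a rough illustration of that matching logic (not manila's actual implementation), Python's standard ``ipaddress`` module can check whether a client address falls inside an allowed network:

```python
import ipaddress


def ip_rule_allows(access_to: str, client_ip: str) -> bool:
    """Return True if client_ip matches an ``ip``-type access rule.

    access_to may be a single address (``XX.XX.XX.XX``) or a CIDR
    block (``XX.XX.XX.XX/XX``), as accepted by ``manila access-allow``.
    """
    # strict=False accepts a host address with or without a prefix length.
    network = ipaddress.ip_network(access_to, strict=False)
    return ipaddress.ip_address(client_ip) in network


# The node 10.0.0.13 from the example matches a rule created for it:
print(ip_rule_allows("10.0.0.13", "10.0.0.13"))      # True
print(ip_rule_allows("0.0.0.0/0", "10.0.0.13"))      # True (any client)
print(ip_rule_allows("10.0.0.0/24", "192.168.1.5"))  # False
```

This also shows why ``0.0.0.0/0`` from the text effectively disables IP-based filtering: every client address falls inside it.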
msgstr "" #: ../shared_file_systems_crud_share.rst:725 msgid "Allow access to the share with ``user`` access type:" msgstr "" #: ../shared_file_systems_crud_share.rst:743 msgid "" "Different share features are supported by different share drivers. For " "example, the Generic driver with the Block Storage service as a back end " "does not support the ``user`` and ``cert`` authentication methods. For " "details of the features supported by different drivers, see `Manila share " "features support mapping `_." msgstr "" #: ../shared_file_systems_crud_share.rst:750 msgid "" "To verify that the access rules (ACL) were configured correctly for a share, " "list the permissions for the share:" msgstr "" #: ../shared_file_systems_crud_share.rst:763 msgid "" "Deny access to the share and check that the deleted access rule is absent in " "the access rule list:" msgstr "" #: ../shared_file_systems_intro.rst:7 msgid "" "The OpenStack Shared File Systems service allows you to offer a shared file " "systems service to OpenStack users in your installation. The Shared File " "Systems service can run in a single-node or multiple-node configuration. The " "Shared File Systems service can be configured to provision shares from one " "or more back ends, so you must declare at least one back end. The Shared " "File Systems service contains several configurable components." 
msgstr "" #: ../shared_file_systems_intro.rst:15 msgid "It is important to understand these components:" msgstr "" # #-#-#-#-# shared_file_systems_intro.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# shared_file_systems_share_networks.pot (Administrator Guide 0.9) #-#-#-#-# #: ../shared_file_systems_intro.rst:17 #: ../shared_file_systems_share_networks.rst:5 msgid "Share networks" msgstr "" #: ../shared_file_systems_intro.rst:18 msgid "Shares" msgstr "" #: ../shared_file_systems_intro.rst:19 msgid "Multi-tenancy" msgstr "" #: ../shared_file_systems_intro.rst:20 msgid "Back ends" msgstr "" #: ../shared_file_systems_intro.rst:22 msgid "" "The Shared File Systems service consists of four types of services, most of " "which are similar to those of the Block Storage service:" msgstr "" #: ../shared_file_systems_intro.rst:25 msgid "``manila-api``" msgstr "" #: ../shared_file_systems_intro.rst:26 msgid "``manila-data``" msgstr "" #: ../shared_file_systems_intro.rst:27 msgid "``manila-scheduler``" msgstr "" #: ../shared_file_systems_intro.rst:28 msgid "``manila-share``" msgstr "" #: ../shared_file_systems_intro.rst:30 msgid "" "Installation of the first three services, ``manila-api``, ``manila-data``, " "and ``manila-scheduler``, is common for almost all deployments. But the " "configuration of ``manila-share`` is back-end-specific and can differ from " "deployment to deployment." msgstr "" #: ../shared_file_systems_key_concepts.rst:5 msgid "Key concepts" msgstr "" #: ../shared_file_systems_key_concepts.rst:8 msgid "Share" msgstr "" #: ../shared_file_systems_key_concepts.rst:10 msgid "" "In the Shared File Systems service, a ``share`` is the fundamental resource " "unit allocated by the service. It represents an " "allocation of a persistent, readable, and writable filesystem. Compute " "instances access these filesystems. Depending on the deployment " "configuration, clients outside of OpenStack can also access the filesystem." 
msgstr "" #: ../shared_file_systems_key_concepts.rst:18 msgid "" "A ``share`` is an abstract storage object that may or may not directly map " "to a \"share\" concept from the underlying storage provider. See the " "description of ``share instance`` for more details." msgstr "" #: ../shared_file_systems_key_concepts.rst:23 msgid "Share instance" msgstr "" #: ../shared_file_systems_key_concepts.rst:24 msgid "" "This concept is tied to ``share`` and represents the resource created on a " "specific back end, while ``share`` represents the abstraction between the " "end user and back-end storage. In most cases, the relation is one-to-one. A " "single ``share`` has more than one ``share instance`` in two cases:" msgstr "" #: ../shared_file_systems_key_concepts.rst:29 msgid "When ``share migration`` is being applied" msgstr "" #: ../shared_file_systems_key_concepts.rst:31 msgid "When ``share replication`` is enabled" msgstr "" #: ../shared_file_systems_key_concepts.rst:33 msgid "" "Therefore, each ``share instance`` stores information specific to the " "resource actually allocated on the storage, while ``share`` represents the " "information that is common to its ``share instances``. A user with the " "``member`` role cannot work with share instances directly. Only a user with " "the ``admin`` role has rights to perform actions against specific share " "instances." msgstr "" #: ../shared_file_systems_key_concepts.rst:41 msgid "Snapshot" msgstr "" #: ../shared_file_systems_key_concepts.rst:43 msgid "" "A ``snapshot`` is a point-in-time, read-only copy of a ``share``. You can " "create ``snapshots`` from an existing, operational ``share`` regardless of " "whether a client has mounted the file system. A ``snapshot`` can serve as " "the content source for a new ``share``. Specify the **Create from snapshot** " "option when creating a new ``share`` on the dashboard."
msgstr "" #: ../shared_file_systems_key_concepts.rst:51 msgid "Storage Pools" msgstr "" #: ../shared_file_systems_key_concepts.rst:53 msgid "" "With the Kilo release of OpenStack, the Shared File Systems service can use " "``storage pools``. The storage may present one or more logical storage " "resource pools that the Shared File Systems service will select as a storage " "location when provisioning ``shares``." msgstr "" #: ../shared_file_systems_key_concepts.rst:59 msgid "Share Type" msgstr "" #: ../shared_file_systems_key_concepts.rst:61 msgid "" "A ``share type`` is an abstract collection of criteria used to characterize " "``shares``. Share types are most commonly used to create a hierarchy of " "functional capabilities. This hierarchy represents tiered storage service " "levels. For example, an administrator might define a premium ``share type`` " "that indicates a greater level of performance than a basic ``share type``. " "Premium represents the best performance level." msgstr "" #: ../shared_file_systems_key_concepts.rst:70 msgid "Share Access Rules" msgstr "" #: ../shared_file_systems_key_concepts.rst:72 msgid "" "``Share access rules`` define which users can access a particular ``share``. " "For example, administrators can declare rules for NFS shares by listing the " "valid IP networks which will access the ``share``. List the IP networks in " "CIDR notation." msgstr "" #: ../shared_file_systems_key_concepts.rst:78 msgid "Security Services" msgstr "" #: ../shared_file_systems_key_concepts.rst:80 msgid "" "``Security services`` allow granular client access rules for administrators. " "They can declare rules for authentication or authorization to access " "``share`` content. External services, including LDAP, Active Directory, and " "Kerberos, can be declared as resources that are consulted when making an " "access decision for a particular ``share``. You can associate ``shares`` " "with multiple security services, but only one service per type."
msgstr "" #: ../shared_file_systems_key_concepts.rst:89 msgid "Share Networks" msgstr "" #: ../shared_file_systems_key_concepts.rst:91 msgid "" "A ``share network`` is an object that defines a relationship between a " "tenant network and subnet, as defined in an OpenStack Networking service or " "Compute service. The ``share network`` is also defined in ``shares`` created " "by the same tenant. A tenant may find it desirable to provision ``shares`` " "such that only instances connected to a particular OpenStack-defined network " "have access to the ``share``. Also, ``security services`` can be attached to " "``share networks``, because most authentication protocols require some " "interaction with network services." msgstr "" #: ../shared_file_systems_key_concepts.rst:100 msgid "" "The Shared File Systems service has the ability to work outside of " "OpenStack. That is due to the ``StandaloneNetworkPlugin``. The plug-in is " "compatible with any network platform and does not require specific network " "services in OpenStack, such as the Compute or Networking service. You can " "set the network parameters in the ``manila.conf`` file." msgstr "" #: ../shared_file_systems_key_concepts.rst:107 msgid "Share Servers" msgstr "" #: ../shared_file_systems_key_concepts.rst:109 msgid "" "A ``share server`` is a logical entity that hosts the shares created on a " "specific ``share network``. A ``share server`` may be a configuration object " "within the storage controller, or it may represent logical resources " "provisioned within an OpenStack deployment that support the data path used " "to access ``shares``." msgstr "" #: ../shared_file_systems_key_concepts.rst:115 msgid "" "``Share servers`` interact with network services to determine the " "appropriate IP addresses on which to export ``shares`` according to the " "related ``share network``. 
The Shared File Systems service has a pluggable " "network model that allows ``share servers`` to work with different " "implementations of the Networking service." msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:5 msgid "Manage and unmanage share" msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:7 msgid "" "To ``manage`` a share means that an administrator, rather than a share " "driver, manages the storage lifecycle. This approach is appropriate when an " "administrator already has a custom share created outside of manila, with its " "size, shared file system protocol, and export path, and wants to register it " "in the Shared File Systems service." msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:13 msgid "" "To ``unmanage`` a share means to unregister a specified share from the " "Shared File Systems service. Administrators can revert an unmanaged share to " "managed status if needed." msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:20 msgid "Unmanage a share" msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:22 msgid "" "The ``unmanage`` operation is not supported for shares that were created on " "top of share servers and created with share networks. The share service " "must have the option ``driver_handles_share_servers = False`` set in the " "``manila.conf`` file. You can unmanage a share that has no dependent " "snapshots." msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:29 msgid "" "To unmanage a managed share, run the :command:`manila unmanage ` " "command. Then try to print the information about the share. 
The returned " "result should indicate that the Shared File Systems service cannot find the " "share:" msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:43 msgid "Manage a share" msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:44 msgid "" "To register a non-managed share in the Shared File Systems service, run " "the :command:`manila manage` command:" msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:54 msgid "The positional arguments are:" msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:56 msgid "" "service_host. The manage-share service host in ``host@backend#POOL`` format, " "which consists of the host name for the back end, the name of the back end, " "and the pool name for the back end." msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:61 msgid "" "protocol. The Shared File Systems protocol of the share to manage. Valid " "values are NFS, CIFS, GlusterFS, or HDFS." msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:64 msgid "" "export_path. The share export path in the format appropriate for the " "protocol:" msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:67 msgid "NFS protocol. 10.0.0.1:/foo_path." msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:69 msgid "CIFS protocol. \\\\\\\\10.0.0.1\\\\foo_name_of_cifs_share." msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:71 msgid "HDFS protocol. hdfs://10.0.0.1:foo_port/foo_share_name." msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:73 msgid "GlusterFS. 10.0.0.1:/foo_volume." msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:75 msgid "" "``driver_options`` is an optional set of one or more key and value pairs " "that describe driver options. Note that the share type must have the " "``driver_handles_share_servers = False`` option. As a result, a special " "share type named ``for_managing`` was used in the example."
msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:80 msgid "To manage a share, run:" msgstr "" #: ../shared_file_systems_manage_and_unmanage_share.rst:120 msgid "Check that the share is available:" msgstr "" #: ../shared_file_systems_manage_shares_cli.rst:5 msgid "Migrate shares" msgstr "" #: ../shared_file_systems_manage_shares_cli.rst:7 msgid "" "As an administrator, you can migrate a share with its data from one location " "to another in a manner that is transparent to users and workloads. You can " "use ``manila`` client commands to complete a share migration." msgstr "" #: ../shared_file_systems_multi_backend.rst:5 msgid "Multi-storage configuration" msgstr "" #: ../shared_file_systems_multi_backend.rst:7 msgid "" "The Shared File Systems service can provide access to multiple file storage " "back ends. In general, the workflow with multiple back ends is similar to " "that of the Block Storage service, see :ref:`Configure multiple-storage back " "ends in Block Storage service `." msgstr "" #: ../shared_file_systems_multi_backend.rst:12 msgid "" "Using ``manila.conf``, you can spawn multiple share services. To do so, set " "the `enabled_share_backends` flag in the ``manila.conf`` file. This flag " "defines the comma-separated names of the configuration stanzas for the " "different back ends. One name is associated with one configuration group for " "a back end." msgstr "" #: ../shared_file_systems_multi_backend.rst:18 msgid "The following example runs three configured share services:" msgstr "" #: ../shared_file_systems_multi_backend.rst:54 msgid "" "To spawn separate groups of share services, you can use separate " "configuration files. If it is necessary to control each back end in a " "separate way, you should provide a separate configuration file for each back " "end."
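The multiple-back-end setup described above can be sketched as a ``manila.conf`` fragment like the following. The stanza names, the driver, and the option values here are illustrative assumptions, not a canonical configuration:

```ini
# Hypothetical manila.conf fragment enabling three back ends.
# enabled_share_backends lists the stanza names, comma-separated;
# each stanza below is one configuration group for one back end.
[DEFAULT]
enabled_share_backends = generalA,generalB,generalC

[generalA]
share_backend_name = GENERALA
share_driver = manila.share.drivers.generic.GenericShareDriver

[generalB]
share_backend_name = GENERALB
share_driver = manila.share.drivers.generic.GenericShareDriver

[generalC]
share_backend_name = GENERALC
share_driver = manila.share.drivers.generic.GenericShareDriver
```

With this layout, the scheduler treats each stanza as an independent back end when placing new shares.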
msgstr "" #: ../shared_file_systems_network_plugins.rst:5 msgid "Network plug-ins" msgstr "" #: ../shared_file_systems_network_plugins.rst:7 msgid "" "The Shared File Systems service architecture defines an abstraction layer " "for network resource provisioning, allowing administrators to choose from " "different options for how network resources are assigned to their tenants’ " "networked storage. A set of network plug-ins provides a variety of " "integration approaches with the network services that are available with " "OpenStack." msgstr "" #: ../shared_file_systems_network_plugins.rst:14 msgid "" "The Shared File Systems service may need network resource provisioning if " "the share service runs with a driver that manages the lifecycle of share " "servers on its own. This behavior is defined by the " "``driver_handles_share_servers`` flag in the share service configuration. " "When ``driver_handles_share_servers`` is set to ``True``, a share driver " "will be called to create share servers for shares, using information " "provided within a share network. This information is provided to one of the " "enabled network plug-ins, which handle reservation, creation, and deletion " "of network resources, including IP addresses and network interfaces." msgstr "" #: ../shared_file_systems_network_plugins.rst:25 msgid "What network plug-ins are available?" msgstr "" #: ../shared_file_systems_network_plugins.rst:27 msgid "" "There are three different network plug-ins and five Python classes in the " "Shared File Systems service:" msgstr "" #: ../shared_file_systems_network_plugins.rst:30 msgid "" "Network plug-in for using the OpenStack Networking service. It allows the " "use of any network segmentation that the Networking service supports. It is " "up to each share driver to support at least one network segmentation type."
msgstr "" #: ../shared_file_systems_network_plugins.rst:34 msgid "" "``manila.network.neutron.neutron_network_plugin.NeutronNetworkPlugin``. This " "is the default network plug-in. It requires the ``neutron_net_id`` and the " "``neutron_subnet_id`` to be provided when defining the share network that " "will be used for the creation of share servers. The user may define any " "number of share networks corresponding to the various physical network " "segments in a tenant environment." msgstr "" #: ../shared_file_systems_network_plugins.rst:41 msgid "" "``manila.network.neutron.neutron_network_plugin." "NeutronSingleNetworkPlugin``. This is a simplification of the previous case. " "It accepts values for ``neutron_net_id`` and ``neutron_subnet_id`` from the " "``manila.conf`` configuration file and uses one network for all shares." msgstr "" #: ../shared_file_systems_network_plugins.rst:47 msgid "" "When only a single network is needed, the NeutronSingleNetworkPlugin (1.b) " "is a simple solution. Otherwise, NeutronNetworkPlugin (1.a) should be " "chosen." msgstr "" #: ../shared_file_systems_network_plugins.rst:50 msgid "" "Network plug-in for working with OpenStack Networking from the Compute " "service. It supports either flat networks or VLAN-segmented networks." msgstr "" #: ../shared_file_systems_network_plugins.rst:53 msgid "" "``manila.network.nova_network_plugin.NovaNetworkPlugin``. This plug-in " "serves the networking needs when ``Nova networking`` is configured in the " "cloud instead of Neutron. It requires a single parameter, ``nova_net_id``." msgstr "" #: ../shared_file_systems_network_plugins.rst:58 msgid "" "``manila.network.nova_network_plugin.NovaSingleNetworkPlugin``. This plug-in " "works the same way as ``manila.network.nova_network_plugin." "NovaNetworkPlugin``, except it takes ``nova_net_id`` from the Shared File " "Systems service configuration file and creates the share servers using only " "one network."
msgstr "" #: ../shared_file_systems_network_plugins.rst:64 msgid "" "When only a single network is needed, the NovaSingleNetworkPlugin (2.b) is a " "simple solution. Otherwise, NovaNetworkPlugin (2.a) should be chosen." msgstr "" #: ../shared_file_systems_network_plugins.rst:67 msgid "" "Network plug-in for specifying networks independently from OpenStack " "networking services." msgstr "" #: ../shared_file_systems_network_plugins.rst:70 msgid "" "``manila.network.standalone_network_plugin.StandaloneNetworkPlugin``. This " "plug-in uses a pre-existing network that is available to the manila-share " "host. This network may be handled either by OpenStack or be created " "independently by any other means. The plug-in supports any type of network, " "flat or segmented. As above, it is completely up to the share driver to " "support the network type for which the network plug-in is configured." msgstr "" #: ../shared_file_systems_network_plugins.rst:80 msgid "" "These network plug-ins were introduced in the OpenStack Kilo release. In the " "OpenStack Juno version, only NeutronNetworkPlugin is available." msgstr "" #: ../shared_file_systems_network_plugins.rst:83 msgid "" "More information about network plug-ins can be found in the `Manila " "developer documentation `_" msgstr "" #: ../shared_file_systems_networking.rst:7 msgid "" "Unlike the OpenStack Block Storage service, the Shared File Systems service " "must connect to the Networking service. The share service requires the " "option to self-manage share servers. For client authentication and " "authorization, you can configure the Shared File Systems service to work " "with different network authentication services, like LDAP, Kerberos, or " "Microsoft Active Directory."
msgstr "" #: ../shared_file_systems_quotas.rst:5 msgid "Quotas and limits" msgstr "" #: ../shared_file_systems_quotas.rst:8 msgid "Limits" msgstr "" #: ../shared_file_systems_quotas.rst:10 msgid "" "Limits are the resource limitations that are allowed for each tenant " "(project). An administrator can configure limits in the ``manila.conf`` file." msgstr "" #: ../shared_file_systems_quotas.rst:13 msgid "Users can query their rate and absolute limits." msgstr "" #: ../shared_file_systems_quotas.rst:15 msgid "To see the absolute limits, run:" msgstr "" #: ../shared_file_systems_quotas.rst:35 msgid "" "Rate limits control the frequency at which users can issue specific API " "requests. Administrators use rate limiting to configure limits on the type " "and number of API calls that can be made in a specific time interval. For " "example, a rate limit can control the number of ``GET`` requests processed " "during a one-minute period." msgstr "" #: ../shared_file_systems_quotas.rst:41 msgid "" "To set the API rate limits, modify the ``etc/manila/api-paste.ini`` file, " "which is a part of the WSGI pipeline and defines the actual limits. You need " "to restart the ``manila-api`` service after you edit the ``etc/manila/api-" "paste.ini`` file." msgstr "" #: ../shared_file_systems_quotas.rst:52 msgid "" "Also, add ``ratelimit`` to the ``noauth``, ``keystone``, and " "``keystone_nolimit`` parameters in the ``[composite:openstack_share_api]`` " "and ``[composite:openstack_share_api_v2]`` groups." msgstr "" #: ../shared_file_systems_quotas.rst:70 msgid "To see the rate limits, run:" msgstr "" #: ../shared_file_systems_quotas.rst:84 msgid "Quotas" msgstr "" #: ../shared_file_systems_quotas.rst:86 msgid "Quota sets provide quota management support." msgstr "" #: ../shared_file_systems_quotas.rst:88 msgid "" "To list the quotas for a tenant or user, use the :command:`manila quota-" "show` command. 
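The rate-limit edit described above might look like the following sketch of an ``etc/manila/api-paste.ini`` fragment. The filter factory path, pipeline component names, and limit values are assumptions based on the default paste pipeline, so verify them against your installed file:

```ini
# Hypothetical api-paste.ini fragment: define a rate limiter and
# wire it into both composite share API pipelines.
[filter:ratelimit]
paste.filter_factory = manila.api.v1.limits:RateLimitingMiddleware.factory
limits = (POST, "*", .*, 120, MINUTE);(PUT, "*", .*, 120, MINUTE)

[composite:openstack_share_api]
use = call:manila.api.middleware.auth:pipeline_factory
noauth = faultwrap ssl ratelimit sizelimit noauth api
keystone = faultwrap ssl ratelimit sizelimit authtoken keystonecontext api
keystone_nolimit = faultwrap ssl ratelimit sizelimit authtoken keystonecontext api
```

Remember to restart ``manila-api`` after editing the file, as noted above.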
If you specify the optional :option:`--user` parameter, you " "get the quotas for this user in the specified tenant. If you omit this " "parameter, you get the quotas for the specified project." msgstr "" #: ../shared_file_systems_quotas.rst:95 msgid "" "The Shared File Systems service does not perform mapping of usernames and " "tenant/project names to IDs. Provide only ID values to set up quotas " "correctly. If you set a quota by name, you set the quota for a nonexistent " "tenant/user. If a quota is not set explicitly by tenant/user ID, the Shared " "File Systems service applies the default quotas." msgstr "" #: ../shared_file_systems_quotas.rst:114 msgid "" "There are default quotas for a project that are set from the ``manila.conf`` " "file. To list the default quotas for a project, use the :command:`manila " "quota-defaults` command:" msgstr "" #: ../shared_file_systems_quotas.rst:131 msgid "" "The administrator can update the quotas for a specific tenant, or for a " "specific user by providing both the ``--tenant`` and ``--user`` optional " "arguments. It is possible to update the ``shares``, ``snapshots``, " "``gigabytes``, ``snapshot-gigabytes``, and ``share-networks`` quotas." msgstr "" #: ../shared_file_systems_quotas.rst:140 msgid "" "As an administrator, you can also permit or deny the force-update of a quota " "that is already used, or if the requested value exceeds the configured quota " "limit. To force-update a quota, use the ``force`` optional key." msgstr "" #: ../shared_file_systems_quotas.rst:148 msgid "To revert quotas to their default values for a project or for a user, delete quotas:" msgstr "" #: ../shared_file_systems_scheduling.rst:5 msgid "Scheduling" msgstr "" #: ../shared_file_systems_scheduling.rst:7 msgid "" "The Shared File Systems service uses a scheduler to provide unified access " "for a variety of different types of shared file systems. 
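The quota operations described above can be sketched as a console session. The tenant and user IDs below are placeholders and the command output is omitted:

```console
# Update the shares quota for a tenant (IDs are placeholders).
$ manila quota-update df2ed815bf05463aa53c3dd0d111477f --shares 40

# Update the quota for one user in that tenant, forcing the new
# value even if it exceeds the configured limit or current usage.
$ manila quota-update df2ed815bf05463aa53c3dd0d111477f \
    --user e8187680cbcb4a0ab5bd08c2e1b55c59 --shares 49 --force

# Revert the tenant's quotas to the defaults.
$ manila quota-delete --tenant df2ed815bf05463aa53c3dd0d111477f
```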
The scheduler " "collects information from the active share services, and makes decisions " "such as which share services will be used to create a new share. To manage " "this process, the Shared File Systems service provides the Share types API." msgstr "" #: ../shared_file_systems_scheduling.rst:14 msgid "" "A share type is a list of key-value pairs called extra-specs. The scheduler " "uses required and un-scoped extra-specs to look up the share service most " "suitable for a new share with the specified share type. For more information " "about extra-specs and their types, see the `Capabilities and Extra-Specs `_ " "section in the developer documentation." msgstr "" #: ../shared_file_systems_scheduling.rst:20 msgid "The general scheduler workflow:" msgstr "" #: ../shared_file_systems_scheduling.rst:22 msgid "" "Share services report information about the number of their existing pools, " "their capacities, and their capabilities." msgstr "" #: ../shared_file_systems_scheduling.rst:25 msgid "" "When a share creation request arrives, the scheduler picks a service and " "pool that best serves the request, using share type filters and back-end " "capabilities. If the back-end capabilities pass through all filters, the " "request is forwarded to the selected back end where the target pool resides." msgstr "" #: ../shared_file_systems_scheduling.rst:30 msgid "" "The share driver receives a reply on the request status, and lets the target " "pool serve the request as the scheduler instructs. The scoped and un-scoped " "share types are available for the driver implementation to use as needed." msgstr "" #: ../shared_file_systems_security_services.rst:5 msgid "Security services" msgstr "" #: ../shared_file_systems_security_services.rst:7 msgid "" "A security service stores client configuration information used for " "authentication and authorization (AuthN/AuthZ). For example, a share server " "will be the client for an existing service such as LDAP, Kerberos, or " "Microsoft Active Directory."
msgstr "" #: ../shared_file_systems_security_services.rst:12 msgid "You can associate a share with one to three security service types:" msgstr "" #: ../shared_file_systems_security_services.rst:14 msgid "``ldap``: LDAP." msgstr "" #: ../shared_file_systems_security_services.rst:16 msgid "``kerberos``: Kerberos." msgstr "" #: ../shared_file_systems_security_services.rst:18 msgid "``active_directory``: Microsoft Active Directory." msgstr "" #: ../shared_file_systems_security_services.rst:20 msgid "You can configure a security service with these options:" msgstr "" #: ../shared_file_systems_security_services.rst:22 msgid "A DNS IP address." msgstr "" #: ../shared_file_systems_security_services.rst:24 msgid "An IP address or host name." msgstr "" #: ../shared_file_systems_security_services.rst:26 msgid "A domain." msgstr "" #: ../shared_file_systems_security_services.rst:28 msgid "A user or group name." msgstr "" #: ../shared_file_systems_security_services.rst:30 msgid "The password for the user, if you specify a user name." msgstr "" #: ../shared_file_systems_security_services.rst:32 msgid "" "You can add the security service to the :ref:`share network " "`." msgstr "" #: ../shared_file_systems_security_services.rst:35 msgid "" "To create a security service, specify the security service type, a " "description of the security service, the DNS IP address used inside the " "tenant's network, the security service IP address or host name, the domain, " "the security service user or group used by the tenant, and a password for " "the user. The security service name is optional."
msgstr "" #: ../shared_file_systems_security_services.rst:41 msgid "Create an ``ldap`` security service:" msgstr "" #: ../shared_file_systems_security_services.rst:64 msgid "To create a ``kerberos`` security service, run:" msgstr "" #: ../shared_file_systems_security_services.rst:87 msgid "" "To see the list of created security services, use :command:`manila security-" "service-list`:" msgstr "" #: ../shared_file_systems_security_services.rst:100 msgid "" "You can add a security service to an existing :ref:`share network " "`, which is not yet used (a ``share " "network`` not associated with a share)." msgstr "" #: ../shared_file_systems_security_services.rst:104 msgid "" "Add a security service to the share network with ``share-network-security-" "service-add``, specifying the share network and security service. The " "command returns information about the security service. You can view the new " "attributes and ``share_networks`` using the associated share network ID." msgstr "" #: ../shared_file_systems_security_services.rst:134 msgid "" "It is possible to see the list of security services associated with a given " "share network. List security services for the ``share_net2`` share network " "with:" msgstr "" #: ../shared_file_systems_security_services.rst:147 msgid "" "You can also dissociate a security service from the share network and " "confirm that the security service now has an empty list of share networks:" msgstr "" #: ../shared_file_systems_security_services.rst:175 msgid "" "The Shared File Systems service allows you to update a security service " "field using the :command:`manila security-service-update` command with " "optional arguments such as :option:`--dns-ip`, :option:`--server`, :option:" "`--domain`, :option:`--user`, :option:`--password`, :option:`--name`, or :" "option:`--description`."
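As a sketch, creating an ``ldap`` security service and attaching it to a share network might look like the following console session. All names, addresses, and the ``share_net2`` network are placeholders, and the command output is omitted:

```console
# Create an LDAP security service (values are placeholders).
$ manila security-service-create ldap --dns-ip 10.0.0.10 \
    --server 10.254.0.3 --domain my_domain --user my_user \
    --password my_password --name my_ldap_security_service

# Attach it to an unused share network, then list the network's
# associated security services.
$ manila share-network-security-service-add share_net2 \
    my_ldap_security_service
$ manila share-network-security-service-list share_net2
```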
msgstr "" #: ../shared_file_systems_security_services.rst:181 msgid "" "To remove a security service not associated with any share networks, run:" msgstr "" #: ../shared_file_systems_services_manage.rst:5 msgid "Manage share services" msgstr "" #: ../shared_file_systems_services_manage.rst:7 msgid "" "The Shared File Systems service provides an API that allows you to manage " "running share services (`Share services API `_). Using the :command:`manila service-list` " "command, it is possible to get a list of all kinds of running services. To " "select only share services, you can pick items whose ``binary`` field equals " "``manila-share``. Also, you can enable or disable share services using raw " "API requests. Disabling means that share services are excluded from the " "scheduler cycle and new shares will not be placed on the disabled back end. " "However, shares from this service stay available." msgstr "" #: ../shared_file_systems_share_management.rst:5 msgid "Share management" msgstr "" #: ../shared_file_systems_share_management.rst:7 msgid "" "A share is a remote, mountable file system. A share can be mounted on and " "accessed from several hosts by several users at a time." msgstr "" #: ../shared_file_systems_share_management.rst:10 msgid "" "You can create a share and associate it with a network, list shares, and " "show information for, update, and delete a specified share. You can also " "create snapshots of shares. To create a snapshot, you specify the ID of the " "share that you want to snapshot." msgstr "" #: ../shared_file_systems_share_management.rst:15 msgid "Shares are based on one of the supported Shared File Systems protocols:" msgstr "" #: ../shared_file_systems_share_management.rst:17 msgid "*NFS*. Network File System (NFS)." msgstr "" #: ../shared_file_systems_share_management.rst:18 msgid "*CIFS*. Common Internet File System (CIFS)." msgstr "" #: ../shared_file_systems_share_management.rst:19 msgid "*GLUSTERFS*. 
Gluster file system (GlusterFS)." msgstr "" #: ../shared_file_systems_share_management.rst:20 msgid "*HDFS*. Hadoop Distributed File System (HDFS)." msgstr "" #: ../shared_file_systems_share_management.rst:21 msgid "*CEPHFS*. Ceph File System (CephFS)." msgstr "" #: ../shared_file_systems_share_management.rst:23 msgid "" "The Shared File Systems service provides a set of drivers that enable you to " "use various network file storage devices, instead of the base " "implementation. That is the real purpose of the Shared File Systems service " "in production." msgstr "" #: ../shared_file_systems_share_networks.rst:7 msgid "" "A share network is an entity that encapsulates interaction with the " "OpenStack Networking service. If the share driver that you selected runs in " "a mode requiring Networking service interaction, specify the share network " "when creating a new share." msgstr "" #: ../shared_file_systems_share_networks.rst:13 msgid "How to create a share network" msgstr "" #: ../shared_file_systems_share_networks.rst:15 msgid "To list networks in a tenant, run:" msgstr "" #: ../shared_file_systems_share_networks.rst:29 msgid "" "A share network stores network information that share servers can use when " "shares are hosted on them. You can associate a share with a single share " "network. When you create or update a share, you can optionally specify the " "ID of a share network through which instances can access the share." msgstr "" #: ../shared_file_systems_share_networks.rst:34 msgid "" "When you create a share network, you can specify only one type of network:" msgstr "" #: ../shared_file_systems_share_networks.rst:36 msgid "" "OpenStack Networking (neutron). Specify a network ID and subnet ID. In this " "case ``manila.network.neutron.neutron_network_plugin.NeutronNetworkPlugin`` " "will be used." msgstr "" #: ../shared_file_systems_share_networks.rst:40 msgid "" "Legacy networking (nova-network). Specify a network ID. 
In this case " "``manila.network.nova_network_plugin.NovaNetworkPlugin`` will be used." msgstr "" #: ../shared_file_systems_share_networks.rst:44 msgid "" "For more information about supported plug-ins for share networks, see :ref:" "`shared_file_systems_network_plugins`." msgstr "" #: ../shared_file_systems_share_networks.rst:47 msgid "A share network has these attributes:" msgstr "" #: ../shared_file_systems_share_networks.rst:49 msgid "" "The IP block in Classless Inter-Domain Routing (CIDR) notation from which to " "allocate the network." msgstr "" #: ../shared_file_systems_share_networks.rst:52 msgid "The IP version of the network." msgstr "" #: ../shared_file_systems_share_networks.rst:54 msgid "The network type, which is `vlan`, `vxlan`, `gre`, or `flat`." msgstr "" #: ../shared_file_systems_share_networks.rst:56 msgid "" "If the network uses segmentation, a segmentation identifier. For example, " "VLAN, VXLAN, and GRE networks use segmentation." msgstr "" #: ../shared_file_systems_share_networks.rst:59 msgid "To create a share network with a private network and subnetwork, run:" msgstr "" #: ../shared_file_systems_share_networks.rst:83 msgid "" "The ``segmentation_id``, ``cidr``, ``ip_version``, and ``network_type`` " "share network attributes are automatically set to the values determined by " "the network provider." msgstr "" #: ../shared_file_systems_share_networks.rst:87 msgid "To check the network list, run:" msgstr "" #: ../shared_file_systems_share_networks.rst:98 msgid "" "If you configured the generic driver with ``driver_handles_share_servers = " "True`` (with the share servers) and already had previous operations in the " "Shared File Systems service, you can see ``manila_service_network`` in the " "neutron list of networks. This network was created by the generic driver for " "internal use."
msgstr "" #: ../shared_file_systems_share_networks.rst:117 msgid "" "You can also see detailed information about the share network, including the " "``network_type`` and ``segmentation_id`` fields:" msgstr "" #: ../shared_file_systems_share_networks.rst:141 msgid "" "You can also add and remove security services from the share network. For " "more details, see :ref:`shared_file_systems_security_services`." msgstr "" #: ../shared_file_systems_share_replication.rst:5 msgid "Share replication" msgstr "" #: ../shared_file_systems_share_replication.rst:8 msgid "" "Replication of data has a number of use cases in the cloud. One use case is " "high availability of the data in a shared file system, used, for example, to " "support a production database. Another use case is ensuring data protection, " "that is, being prepared for a disaster by having a replication location that " "will be ready to back up your primary data source." msgstr "" #: ../shared_file_systems_share_replication.rst:14 msgid "" "The Shared File Systems service supports user-facing APIs that allow users " "to create shares that support replication, add and remove share replicas, " "and manage their snapshots and access rules. Three replication types are " "currently supported, and they vary in the semantics associated with the " "primary share and the secondary copies." msgstr "" #: ../shared_file_systems_share_replication.rst:22 msgid "" "**Share replication** is an **experimental** Shared File Systems API in the " "Mitaka release. Contributors can change or remove the experimental part of " "the Shared File Systems API in further releases without maintaining backward " "compatibility. Experimental APIs have an ``X-OpenStack-Manila-API-" "Experimental: true`` header in their HTTP requests."
msgstr "" #: ../shared_file_systems_share_replication.rst:30 msgid "Replication types supported" msgstr "" #: ../shared_file_systems_share_replication.rst:32 msgid "" "Before using share replication, make sure the Shared File Systems driver " "that you are running supports this feature. You can check it in the ``manila-" "scheduler`` service reports. The ``replication_type`` capability reported " "can have one of the following values:" msgstr "" #: ../shared_file_systems_share_replication.rst:38 msgid "" "The driver supports creating ``writable`` share replicas. All share replicas " "can be accorded read/write access and are synchronously mirrored." msgstr "" #: ../shared_file_systems_share_replication.rst:38 msgid "writable" msgstr "" #: ../shared_file_systems_share_replication.rst:41 msgid "" "The driver supports creating ``read-only`` share replicas. All secondary " "share replicas can be accorded read access. Only the primary (or ``active`` " "share replica) can be written to." msgstr "" #: ../shared_file_systems_share_replication.rst:42 msgid "readable" msgstr "" #: ../shared_file_systems_share_replication.rst:45 msgid "" "The driver supports creating ``dr`` (abbreviated from Disaster Recovery) " "share replicas. A secondary share replica is inaccessible until after a " "``promotion``." msgstr "" #: ../shared_file_systems_share_replication.rst:46 msgid "dr" msgstr "" #: ../shared_file_systems_share_replication.rst:49 msgid "The driver does not support Share Replication." msgstr "" #: ../shared_file_systems_share_replication.rst:50 msgid "None" msgstr "" #: ../shared_file_systems_share_replication.rst:54 msgid "" "The term ``active`` share replica refers to the ``primary`` share. In the " "``writable`` style of replication, all share replicas are ``active``, and " "there is no distinction of a ``primary`` share. 
In ``readable`` and " "``dr`` styles of replication, a ``secondary`` share replica may be referred " "to as ``passive``, ``non-active``, or simply ``replica``." msgstr "" #: ../shared_file_systems_share_replication.rst:64 msgid "" "Two new configuration options have been introduced to support Share " "Replication." msgstr "" #: ../shared_file_systems_share_replication.rst:68 msgid "" "Specify this option in the ``DEFAULT`` section of your ``manila.conf``. The " "Shared File Systems service requests periodic update of the `replica_state` " "of all ``non-active`` share replicas. The update occurs at the interval " "specified by this option. If it is not specified, it defaults to 300 " "seconds." msgstr "" #: ../shared_file_systems_share_replication.rst:72 msgid "replica_state_update_interval" msgstr "" #: ../shared_file_systems_share_replication.rst:75 msgid "" "Specify this option in the backend stanza when using a multi-backend style " "configuration. The value can be any ASCII string. Two backends that can " "replicate between each other would have the same ``replication_domain``. " "This comes from the premise that the Shared File Systems service expects " "Share Replication to be performed between symmetric backends. This option is " "*required* for using the Share Replication feature." msgstr "" #: ../shared_file_systems_share_replication.rst:81 msgid "replication_domain" msgstr "" #: ../shared_file_systems_share_replication.rst:84 msgid "Health of a share replica" msgstr "" #: ../shared_file_systems_share_replication.rst:86 msgid "" "Apart from the ``status`` attribute, share replicas have the " "``replica_state`` attribute to denote the state of data replication on the " "storage backend. The ``primary`` share replica will have its " "`replica_state` attribute set to `active`. 
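The two configuration options described above might appear in ``manila.conf`` roughly as follows. This is a sketch: the backend stanza names and the domain string are illustrative assumptions; only the option names and the documented 300-second default come from the text.

```ini
[DEFAULT]
# Interval, in seconds, between replica_state updates of non-active
# replicas (300 is the documented default).
replica_state_update_interval = 300

# Hypothetical backend stanzas; two backends that replicate between each
# other must share the same replication_domain value.
[zfs_backend_az1]
replication_domain = replication_domain_1

[zfs_backend_az2]
replication_domain = replication_domain_1
```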
The ``secondary`` share replicas " "may have one of the following as their ``replica_state``:" msgstr "" #: ../shared_file_systems_share_replication.rst:93 msgid "" "The share replica is up to date with the ``active`` share replica (possibly " "within a backend-specific ``recovery point objective``)." msgstr "" #: ../shared_file_systems_share_replication.rst:93 msgid "in_sync" msgstr "" #: ../shared_file_systems_share_replication.rst:96 msgid "" "The share replica is out of date (all new share replicas start out in this " "``replica_state``)." msgstr "" #: ../shared_file_systems_share_replication.rst:96 msgid "out_of_sync" msgstr "" #: ../shared_file_systems_share_replication.rst:99 msgid "" "The scheduler failed to schedule this share replica, or a potentially " "irrecoverable error occurred while updating data for this replica." msgstr "" #: ../shared_file_systems_share_replication.rst:104 msgid "Promotion or failover" msgstr "" #: ../shared_file_systems_share_replication.rst:106 msgid "" "For ``readable`` and ``dr`` types of replication, we refer to the task of " "switching a `non-active` share replica with the ``active`` replica as " "`promotion`. For the ``writable`` style of replication, promotion does not " "make sense since all share replicas are ``active`` (or writable) at all " "times." msgstr "" #: ../shared_file_systems_share_replication.rst:112 msgid "" "The `status` attribute of the non-active replica being promoted will be set " "to ``replication_change`` during its promotion. This has been classified as " "a ``busy`` state and thus API interactions with the share are restricted " "while one of its share replicas is in this state."
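The state vocabulary above can be summarized in a small sketch. This models only the documented values (``active``, ``in_sync``, ``out_of_sync``, ``error``, and the ``replication_change`` busy status); the helper names and data shapes are illustrative, not part of the manila API.

```python
# Illustrative model of the documented replica states; not manila code.
REPLICA_STATES = {"active", "in_sync", "out_of_sync", "error"}

def new_replica_state():
    """All new share replicas start out as out_of_sync."""
    return "out_of_sync"

def share_is_busy(replica_statuses):
    """A share is 'busy' while any of its replicas is mid-promotion,
    i.e. has its status set to replication_change."""
    return "replication_change" in replica_statuses
```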
msgstr "" #: ../shared_file_systems_share_replication.rst:119 msgid "Share replication workflows" msgstr "" #: ../shared_file_systems_share_replication.rst:121 msgid "" "The following examples have been implemented with the ZFSonLinux driver that " "is a reference implementation in the Shared File Systems service. It " "operates in ``driver_handles_share_servers=False`` mode and supports the " "``readable`` type of replication. In the example, we assume a configuration " "of two Availability Zones (configuration option: " "``storage_availability_zone``), called `availability_zone_1` and " "`availability_zone_2`." msgstr "" #: ../shared_file_systems_share_replication.rst:128 msgid "" "Multiple availability zones are not necessary to use the replication " "feature. However, the use of an availability zone as a ``failure domain`` is " "encouraged." msgstr "" #: ../shared_file_systems_share_replication.rst:131 msgid "" "Pay attention to the network configuration for the ZFS driver. Here, we " "assume a configuration of ``zfs_service_ip`` and ``zfs_share_export_ip`` " "from two separate networks. The service network is reachable from the host " "where the ``manila-share`` service is running. The share export IP is from a " "network that allows user access." msgstr "" #: ../shared_file_systems_share_replication.rst:137 msgid "" "See `Configuring the ZFSonLinux driver `_ for " "information on how to set up the ZFSonLinux driver." msgstr "" #: ../shared_file_systems_share_replication.rst:142 msgid "Creating a share that supports replication" msgstr "" #: ../shared_file_systems_share_replication.rst:144 msgid "" "Create a new share type and specify the `replication_type` as an extra-spec " "within the share-type being used." msgstr "" #: ../shared_file_systems_share_replication.rst:148 msgid "" "Use the :command:`manila type-create` command to create a new share type. " "Specify the name and the value for the extra-spec " "``driver_handles_share_servers``." 
msgstr "" #: ../shared_file_systems_share_replication.rst:166 msgid "" "Use the :command:`manila type-key` command to set an extra-spec to the share " "type." msgstr "" #: ../shared_file_systems_share_replication.rst:174 msgid "" "This command has no output. To verify the extra-spec, use the :command:" "`manila extra-specs-list` command and specify the share type's name or ID as " "a parameter." msgstr "" #: ../shared_file_systems_share_replication.rst:178 msgid "Create a share with the share type" msgstr "" #: ../shared_file_systems_share_replication.rst:180 msgid "" "Use the :command:`manila create` command to create a share. Specify the " "share protocol, size and the availability zone." msgstr "" #: ../shared_file_systems_share_replication.rst:215 msgid "" "Use the :command:`manila show` command to retrieve details of the share. " "Specify the share ID or name as a parameter." msgstr "" #: ../shared_file_systems_share_replication.rst:265 msgid "" "When you create a share that supports replication, an ``active`` replica is " "created for you. You can verify this with the :command:`manila share-replica-" "list` command." msgstr "" #: ../shared_file_systems_share_replication.rst:271 msgid "Creating and promoting share replicas" msgstr "" #: ../shared_file_systems_share_replication.rst:273 msgid "Create a share replica" msgstr "" #: ../shared_file_systems_share_replication.rst:275 msgid "" "Use the :command:`manila share-replica-create` command to create a share " "replica. Specify the share ID or name as a parameter. You may optionally " "provide the `availability_zone` and `share_network_id`. In the example " "below, `share_network_id` is not used since the ZFSonLinux driver does not " "support it." 
msgstr "" #: ../shared_file_systems_share_replication.rst:299 msgid "See details of the newly created share replica" msgstr "" #: ../shared_file_systems_share_replication.rst:301 msgid "" "Use the :command:`manila share-replica-show` command to see details of the " "newly created share replica. Specify the share replica's ID as a parameter." msgstr "" #: ../shared_file_systems_share_replication.rst:323 msgid "See all replicas of the share" msgstr "" #: ../shared_file_systems_share_replication.rst:325 msgid "" "Use the :command:`manila share-replica-list` command to see all the replicas " "of the share. Specify the share ID or name as an optional parameter." msgstr "" #: ../shared_file_systems_share_replication.rst:338 msgid "Promote the secondary share replica to be the new active replica" msgstr "" #: ../shared_file_systems_share_replication.rst:340 msgid "" "Use the :command:`manila share-replica-promote` command to promote a non-" "active share replica to become the ``active`` replica. Specify the non-" "active replica's ID as a parameter." msgstr "" #: ../shared_file_systems_share_replication.rst:349 #: ../shared_file_systems_share_replication.rst:514 #: ../shared_file_systems_share_replication.rst:538 #: ../shared_file_systems_share_replication.rst:552 #: ../shared_file_systems_share_replication.rst:580 #: ../shared_file_systems_share_replication.rst:597 msgid "This command has no output." msgstr "" #: ../shared_file_systems_share_replication.rst:351 msgid "" "The promotion may take time. During the promotion, the ``replica_state`` " "attribute of the share replica being promoted will be set to " "``replication_change``." msgstr "" #: ../shared_file_systems_share_replication.rst:365 msgid "" "Once the promotion is complete, the ``replica_state`` will be set to " "``active``." 
msgstr "" #: ../shared_file_systems_share_replication.rst:380 msgid "Access rules" msgstr "" #: ../shared_file_systems_share_replication.rst:382 msgid "Create an IP access rule for the share" msgstr "" #: ../shared_file_systems_share_replication.rst:384 msgid "" "Use the :command:`manila access-allow` command to add an access rule. " "Specify the share ID or name, protocol and the target as parameters." msgstr "" #: ../shared_file_systems_share_replication.rst:402 msgid "" "Access rules are not meant to be different across the replicas of the share. " "However, as per the type of replication, drivers may choose to modify the " "access level prescribed. In the above example, even though read/write access " "was requested for the share, the driver will provide read-only access to the " "non-active replica to the same target, because of the semantics of the " "replication type: ``readable``. However, the target will have read/write " "access to the (currently) non-active replica when it is promoted to become " "the ``active`` replica." msgstr "" #: ../shared_file_systems_share_replication.rst:411 msgid "" "The :command:`manila access-deny` command can be used to remove a previously " "applied access rule." msgstr "" #: ../shared_file_systems_share_replication.rst:414 msgid "List the export locations of the share" msgstr "" #: ../shared_file_systems_share_replication.rst:416 msgid "" "Use the :command:`manila share-export-locations-list` command to list the " "export locations of a share." msgstr "" #: ../shared_file_systems_share_replication.rst:431 msgid "" "Identify the export location corresponding to the share replica on the user " "accessible network and you may mount it on the target node." 
msgstr "" #: ../shared_file_systems_share_replication.rst:435 msgid "" "As an administrator, you can list the export locations for a particular " "share replica by using the :command:`manila share-instance-export-location-" "list` command and specifying the share replica's ID as a parameter." msgstr "" #: ../shared_file_systems_share_replication.rst:444 msgid "Create a snapshot of the share" msgstr "" #: ../shared_file_systems_share_replication.rst:446 msgid "" "Use the :command:`manila snapshot-create` command to create a snapshot of " "the share. Specify the share ID or name as a parameter." msgstr "" #: ../shared_file_systems_share_replication.rst:468 msgid "Show the details of the snapshot" msgstr "" #: ../shared_file_systems_share_replication.rst:470 msgid "" "Use the :command:`manila snapshot-show` command to view details of a " "snapshot. Specify the snapshot ID or name as a parameter." msgstr "" #: ../shared_file_systems_share_replication.rst:492 msgid "" "The ``status`` attribute of a snapshot will transition from ``creating`` to " "``available`` only when it is present on all the share replicas that have " "their ``replica_state`` attribute set to ``active`` or ``in_sync``." msgstr "" #: ../shared_file_systems_share_replication.rst:496 msgid "" "Likewise, the ``replica_state`` attribute of a share replica will transition " "from ``out_of_sync`` to ``in_sync`` only when all ``available`` snapshots " "are present on it." msgstr "" #: ../shared_file_systems_share_replication.rst:502 msgid "Planned failovers" msgstr "" #: ../shared_file_systems_share_replication.rst:504 msgid "" "As an administrator, you can use the :command:`manila share-replica-resync` " "command to attempt to sync data between ``active`` and ``non-active`` share " "replicas of a share before promotion. This will ensure that share replicas " "have the most up-to-date data and their relationships can be safely switched."
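The snapshot-state rule above can be expressed as a small predicate. This is a reading of the documented behavior, not manila's implementation; the function name and data shapes are assumptions.

```python
# Sketch of the documented rule: a snapshot transitions from 'creating'
# to 'available' only once it is present on every replica whose
# replica_state is 'active' or 'in_sync'.
def snapshot_status(replicas, replicas_with_snapshot):
    """replicas: {replica_id: replica_state};
    replicas_with_snapshot: set of replica ids holding the snapshot."""
    required = {rid for rid, state in replicas.items()
                if state in ("active", "in_sync")}
    return "available" if required <= replicas_with_snapshot else "creating"
```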
msgstr "" #: ../shared_file_systems_share_replication.rst:518 msgid "Updating attributes" msgstr "" #: ../shared_file_systems_share_replication.rst:519 msgid "" "If an error occurs while updating data or replication relationships (during " "a ``promotion``), the Shared File Systems service may not be able to " "determine the consistency or health of a share replica. It may require " "administrator intervention to make any necessary fixes on the storage " "backend. In such a situation, state correction within the Shared File " "Systems service is possible." msgstr "" #: ../shared_file_systems_share_replication.rst:525 msgid "As an administrator, you can:" msgstr "" #: ../shared_file_systems_share_replication.rst:527 msgid "Reset the ``status`` attribute of a share replica" msgstr "" #: ../shared_file_systems_share_replication.rst:529 msgid "" "Use the :command:`manila share-replica-reset-state` command to reset the " "``status`` attribute. Specify the share replica's ID as a parameter and use " "the ``--state`` option to specify the intended state." msgstr "" #: ../shared_file_systems_share_replication.rst:541 msgid "Reset the ``replica_state`` attribute" msgstr "" #: ../shared_file_systems_share_replication.rst:543 msgid "" "Use the :command:`manila share-replica-reset-replica-state` command to reset " "the ``replica_state`` attribute. Specify the share replica's ID and use the " "``--state`` option to specify the intended state." msgstr "" #: ../shared_file_systems_share_replication.rst:554 msgid "Force delete a specified share replica in any state" msgstr "" #: ../shared_file_systems_share_replication.rst:556 msgid "" "Use the :command:`manila share-replica-delete` command with the ``--force`` " "option to remove the share replica, regardless of the state it is in." msgstr "" #: ../shared_file_systems_share_replication.rst:582 msgid "" "Use the ``policy.json`` file to grant permissions for these actions to other " "roles."
msgstr "" #: ../shared_file_systems_share_replication.rst:587 msgid "Deleting share replicas" msgstr "" #: ../shared_file_systems_share_replication.rst:589 msgid "" "Use the :command:`manila share-replica-delete` command with the share " "replica's ID to delete a share replica." msgstr "" #: ../shared_file_systems_share_replication.rst:600 msgid "" "You cannot delete the last ``active`` replica with this command. You should " "use the :command:`manila delete` command to remove the share." msgstr "" #: ../shared_file_systems_share_resize.rst:5 msgid "Resize share" msgstr "" #: ../shared_file_systems_share_resize.rst:7 msgid "" "To change file share size, use the :command:`manila extend` command and the :" "command:`manila shrink` command. For most drivers, this is a safe operation. " "If you want to be sure that your data is safe, you can back up the share by " "creating a snapshot of it." msgstr "" #: ../shared_file_systems_share_resize.rst:12 msgid "" "You can extend and shrink the share with the :command:`manila extend` and :" "command:`manila shrink` commands, respectively, specifying the share and a " "new size that does not exceed the quota. For details, see :ref:`Quotas " "and Limits `. You also cannot shrink a share " "to 0 or to a size greater than its current size." msgstr "" #: ../shared_file_systems_share_resize.rst:18 msgid "" "While extending, the share has an ``extending`` status. This means that the " "request to increase the share size was issued successfully." msgstr "" #: ../shared_file_systems_share_resize.rst:21 msgid "To extend the share and check the result, run:" msgstr "" #: ../shared_file_systems_share_resize.rst:66 msgid "" "While shrinking, the share has a ``shrinking`` status. This means that the " "request to decrease the share size was issued successfully. 
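The documented resize constraints (the new size must not exceed the quota, and a shrink cannot go to 0 or above the current size) can be sketched as a validation helper. The function name and error strings are illustrative assumptions, not manila code.

```python
# Illustrative validation of the documented extend/shrink constraints.
def validate_resize(current_gb, new_gb, quota_gb):
    """Return the transitional status the share would show while resizing."""
    if new_gb > quota_gb:
        raise ValueError("new size exceeds the quota")
    if new_gb <= 0:
        raise ValueError("cannot shrink a share to 0")
    if new_gb == current_gb:
        raise ValueError("new size equals the current size")
    # A request above the current size is an extend; below it, a shrink.
    return "extending" if new_gb > current_gb else "shrinking"
```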
To shrink the share and " "check the result, run:" msgstr "" #: ../shared_file_systems_share_types.rst:5 msgid "Share types" msgstr "" #: ../shared_file_systems_share_types.rst:7 msgid "" "A share type enables you to filter or choose back ends before you create a " "share and to set data for the share driver. A share type behaves in the same " "way as a Block Storage volume type behaves." msgstr "" #: ../shared_file_systems_share_types.rst:11 msgid "" "In the Shared File Systems configuration file ``manila.conf``, the " "administrator can set the share type used by default for the share creation " "and then create a default share type." msgstr "" #: ../shared_file_systems_share_types.rst:15 msgid "To create a share type, use :command:`manila type-create` command as:" msgstr "" #: ../shared_file_systems_share_types.rst:23 msgid "" "where the ``name`` is the share type name, ``--is_public`` defines the level " "of the visibility for the share type, ``snapshot_support`` and " "``spec_driver_handles_share_servers`` are the extra specifications used to " "filter back ends. Administrators can create share types with these extra " "specifications for the back ends filtering:" msgstr "" #: ../shared_file_systems_share_types.rst:29 msgid "" "``driver_handles_share_servers``. Required. Defines the driver mode for " "share server lifecycle management. Valid values are ``true``/``1`` and " "``false``/``0``. Set to True when the share driver can manage, or handle, " "the share server lifecycle. Set to False when an administrator, rather than " "a share driver, manages the bare metal storage with some net interface " "instead of the presence of the share servers." msgstr "" #: ../shared_file_systems_share_types.rst:38 msgid "" "``snapshot_support``. Filters back ends by whether they do or do not support " "share snapshots. Default is ``True``. Set to True to find back ends that " "support share snapshots. Set to False to find back ends that do not support " "share snapshots." 
msgstr "" #: ../shared_file_systems_share_types.rst:45 msgid "" "The extra specifications set in the share types are used in :ref:" "`shared_file_systems_scheduling`." msgstr "" #: ../shared_file_systems_share_types.rst:48 msgid "" "Administrators can also set additional extra specifications for a share type " "for the following purposes:" msgstr "" #: ../shared_file_systems_share_types.rst:51 msgid "" "*Filter back ends*. Unqualified extra specifications written in this format: " "``extra_spec=value``. For example, **netapp_raid_type=raid4**." msgstr "" #: ../shared_file_systems_share_types.rst:54 msgid "" "*Set data for the driver*. Qualified extra specifications are always written " "with a prefix and a colon, except for the special ``capabilities`` " "prefix, in this format: ``vendor:extra_spec=value``. For example, **netapp:" "thin_provisioned=true**." msgstr "" #: ../shared_file_systems_share_types.rst:59 msgid "" "The scheduler uses the special capabilities prefix for filtering. The " "scheduler can only create a share on a back end that reports capabilities " "matching the un-scoped extra-spec keys for the share type. For details, see " "`Capabilities and Extra-Specs `_." msgstr "" #: ../shared_file_systems_share_types.rst:65 msgid "" "Each driver implementation determines which extra specification keys it " "uses. For details, see the documentation for the driver." msgstr "" #: ../shared_file_systems_share_types.rst:68 msgid "" "An administrator can use the ``policy.json`` file to grant permissions for " "share type creation with extra specifications to other roles." msgstr "" #: ../shared_file_systems_share_types.rst:71 msgid "" "You set a share type to private or public and :ref:`manage the " "access` to the private share types. By default, a share " "type is created as publicly accessible. Set :option:`--is_public` to " "``False`` to make the share type private."
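The extra-spec conventions above (a required ``driver_handles_share_servers`` spec, unqualified specs for back-end filtering, and ``vendor:``-scoped specs that carry data to the driver) can be sketched as a small classifier. The helper names are illustrative; only the spec names and the colon convention come from the text.

```python
# Illustrative check of the documented share-type extra-spec conventions;
# not part of the manila API.
def classify_extra_spec(key):
    """Qualified specs carry a 'prefix:' and set data for the driver;
    unqualified specs (no colon) are used for back-end filtering."""
    return "qualified" if ":" in key else "unqualified"

def validate_share_type(extra_specs):
    """driver_handles_share_servers is documented as required."""
    if "driver_handles_share_servers" not in extra_specs:
        raise ValueError("driver_handles_share_servers is required")
    return True
```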
msgstr "" #: ../shared_file_systems_share_types.rst:77 msgid "Share type operations" msgstr "" #: ../shared_file_systems_share_types.rst:79 msgid "" "To create a new share type you need to specify the name of the new share " "type. You also require an extra spec ``driver_handles_share_servers``. The " "new share type can also be public." msgstr "" #: ../shared_file_systems_share_types.rst:94 msgid "" "You can set or unset extra specifications for a share type using **manila " "type-key set ** command. Since it is up to each " "driver what extra specification keys it uses, see the documentation for the " "specified driver." msgstr "" #: ../shared_file_systems_share_types.rst:103 msgid "" "It is also possible to view a list of current share types and extra " "specifications:" msgstr "" #: ../shared_file_systems_share_types.rst:117 msgid "" "Use :command:`manila type-key unset ` to unset an extra " "specification." msgstr "" #: ../shared_file_systems_share_types.rst:120 msgid "" "The public or private share type can be deleted with the :command:`manila " "type-delete ` command." msgstr "" #: ../shared_file_systems_share_types.rst:126 msgid "Share type access" msgstr "" #: ../shared_file_systems_share_types.rst:128 msgid "" "You can manage access to a private share type for different projects. " "Administrators can provide access, remove access, and retrieve information " "about access for a specified private share." msgstr "" #: ../shared_file_systems_share_types.rst:132 msgid "Create a private type:" msgstr "" #: ../shared_file_systems_share_types.rst:145 msgid "" "If you run :command:`manila type-list` only public share types appear. To " "see private share types, run :command:`manila type-list` with :option:`--" "all` optional argument." 
msgstr "" #: ../shared_file_systems_share_types.rst:149 msgid "" "Grant access to the created private type for the demo and alt_demo projects " "by providing their IDs:" msgstr "" #: ../shared_file_systems_share_types.rst:157 msgid "" "To view information about access for a private share type, ``my_type1``:" msgstr "" #: ../shared_file_systems_share_types.rst:169 msgid "" "After granting access to the share, the target project can see the share " "type in the list, and create private shares." msgstr "" #: ../shared_file_systems_share_types.rst:173 msgid "" "To deny access for a specified project, use the :command:`manila type-access-" "remove ` command." msgstr "" #: ../shared_file_systems_snapshots.rst:5 msgid "Share snapshots" msgstr "" #: ../shared_file_systems_snapshots.rst:7 msgid "" "The Shared File Systems service provides a snapshot mechanism to help users " "restore data by running the :command:`manila snapshot-create` command." msgstr "" #: ../shared_file_systems_snapshots.rst:10 msgid "" "To export a snapshot, create a share from it, then mount the new share to an " "instance. Copy files from the attached share into the archive." msgstr "" #: ../shared_file_systems_snapshots.rst:13 msgid "" "To import a snapshot, create a new share with an appropriate size, attach it " "to an instance, and then copy a file from the archive to the attached file " "system." msgstr "" #: ../shared_file_systems_snapshots.rst:19 msgid "You cannot delete a share while it has saved dependent snapshots." msgstr "" #: ../shared_file_systems_snapshots.rst:21 msgid "Create a snapshot from the share:" msgstr "" #: ../shared_file_systems_snapshots.rst:40 msgid "Update the snapshot name or description if needed:" msgstr "" #: ../shared_file_systems_snapshots.rst:46 msgid "Check that the status of the snapshot is ``available``:" msgstr "" #: ../shared_file_systems_snapshots.rst:65 msgid "" "To restore your data from a snapshot, use :command:`manila create` with the :" "option:`--snapshot-id` key. 
This creates a new share from an existing snapshot. " "Create a share from a snapshot and check whether it is available:" msgstr "" #: ../shared_file_systems_snapshots.rst:129 msgid "" "You can soft-delete a snapshot using :command:`manila snapshot-delete " "`. If a snapshot is in busy state, and during the " "delete an ``error_deleting`` status appeared, administrator can force-delete " "it or explicitly reset the state." msgstr "" #: ../shared_file_systems_snapshots.rst:134 msgid "" "Use :command:`snapshot-reset-state [--state ] ` to update " "the state of a snapshot explicitly. A valid value of a status are " "``available``, ``error``, ``creating``, ``deleting``, ``error_deleting``. If " "no state is provided, the ``available`` state will be used." msgstr "" #: ../shared_file_systems_snapshots.rst:139 msgid "" "Use :command:`manila snapshot-force-delete ` to force-delete a " "specified share snapshot in any state." msgstr "" #: ../shared_file_systems_troubleshoot.rst:5 msgid "Troubleshoot Shared File Systems service" msgstr "" #: ../shared_file_systems_troubleshoot.rst:8 msgid "Failures in Share File Systems service during a share creation" msgstr "" #: ../shared_file_systems_troubleshoot.rst:13 msgid "New shares can enter ``error`` state during the creation process." msgstr "" #: ../shared_file_systems_troubleshoot.rst:18 msgid "" "Make sure, that share services are running in debug mode. If the debug mode " "is not set, you will not get any tips from logs how to fix your issue." msgstr "" #: ../shared_file_systems_troubleshoot.rst:21 msgid "" "Find what share service holds a specified share. To do that, run command :" "command:`manila show ` and find a share host in the " "output. Host uniquely identifies what share service holds the broken share." msgstr "" #: ../shared_file_systems_troubleshoot.rst:25 msgid "" "Look thought logs of this share service. Usually, it can be found at ``/etc/" "var/log/manila-share.log``. 
This log should contain a traceback with " "extra information to help you find the origin of the issue." msgstr "" #: ../shared_file_systems_troubleshoot.rst:30 msgid "No valid host was found" msgstr "" #: ../shared_file_systems_troubleshoot.rst:35 msgid "" "If a share type contains invalid extra specs, the scheduler will not be able " "to locate a valid host for the shares." msgstr "" #: ../shared_file_systems_troubleshoot.rst:41 msgid "" "To diagnose this issue, make sure that the scheduler service is running in " "debug mode. Try to create a new share and look for the message ``Failed to " "schedule create_share: No valid host was found.`` in ``/etc/var/log/manila-" "scheduler.log``." msgstr "" #: ../shared_file_systems_troubleshoot.rst:46 msgid "" "To solve this issue, look carefully through the list of extra specs in the " "share type, and the list of capabilities reported by the share services. " "Make sure that the extra specs are specified correctly." msgstr "" #: ../shared_file_systems_troubleshoot.rst:51 msgid "Created share is unreachable" msgstr "" #: ../shared_file_systems_troubleshoot.rst:56 msgid "By default, a new share does not have any active access rules." msgstr "" #: ../shared_file_systems_troubleshoot.rst:61 msgid "" "To provide access to a new share, you need to create an appropriate access " "rule with the right value. The value defines the access." msgstr "" #: ../shared_file_systems_troubleshoot.rst:66 msgid "Service becomes unavailable after upgrade" msgstr "" #: ../shared_file_systems_troubleshoot.rst:71 msgid "" "After upgrading the Shared File Systems service from version v1 to version " "v2.x, you must update the service endpoint in the OpenStack Identity " "service. Otherwise, the service may become unavailable."
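Scanning the scheduler log for the failure message described above can be sketched in a few lines. The sample log excerpt below is fabricated for illustration; only the error text and the log path come from the guide.

```python
# Sketch: detect the documented scheduler failure in a log excerpt.
# A real deployment would read /etc/var/log/manila-scheduler.log
# (the path given in this guide); the sample text here is fabricated.
SCHEDULER_ERROR = "Failed to schedule create_share: No valid host was found."

sample_log = """\
2016-05-04 06:21:00 DEBUG manila.scheduler starting scheduling round
2016-05-04 06:21:01 ERROR manila.scheduler Failed to schedule create_share: No valid host was found.
"""

def found_no_valid_host(log_text):
    """Return True when the no-valid-host error appears in the log."""
    return any(SCHEDULER_ERROR in line for line in log_text.splitlines())
```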
msgstr "" #: ../shared_file_systems_troubleshoot.rst:78 msgid "" "To get the service type related to the Shared File Systems service, run:" msgstr "" #: ../shared_file_systems_troubleshoot.rst:86 msgid "" "You will get the endpoints expected from running the Shared File Systems " "service." msgstr "" #: ../shared_file_systems_troubleshoot.rst:89 msgid "" "Make sure that these endpoints are updated. Otherwise, delete the outdated " "endpoints and create new ones." msgstr "" #: ../shared_file_systems_troubleshoot.rst:93 msgid "Failures during management of internal resources" msgstr "" #: ../shared_file_systems_troubleshoot.rst:98 msgid "" "The Shared File Systems service manages internal resources effectively. " "Administrators may need to manually adjust internal resources to handle " "failures." msgstr "" #: ../shared_file_systems_troubleshoot.rst:105 msgid "" "Some drivers in the Shared File Systems service can create service entities, " "like servers and networks. If necessary, you can log in to the ``service`` " "tenant and take manual control of these resources." msgstr "" #: ../support-compute.rst:8 msgid "Troubleshoot Compute" msgstr "" #: ../support-compute.rst:10 msgid "" "Common problems for Compute typically involve misconfigured networking or " "credentials that are not sourced properly in the environment. Also, most " "flat networking configurations do not enable :command:`ping` or :command:" "`ssh` from a compute node to the instances that run on that node. Another " "common problem is trying to run 32-bit images on a 64-bit compute node. This " "section shows you how to troubleshoot Compute." msgstr "" #: ../support-compute.rst:22 msgid "Compute service logging" msgstr "" #: ../support-compute.rst:24 msgid "" "Compute stores a log file for each service in ``/var/log/nova``. For " "example, ``nova-compute.log`` is the log for the ``nova-compute`` service. 
" "You can set the following options to format log strings for the ``nova.log`` " "module in the ``nova.conf`` file:" msgstr "" #: ../support-compute.rst:30 msgid "``logging_context_format_string``" msgstr "" #: ../support-compute.rst:32 msgid "``logging_default_format_string``" msgstr "" #: ../support-compute.rst:34 msgid "" "If the log level is set to ``debug``, you can also specify " "``logging_debug_format_suffix`` to append extra formatting. For information " "about what variables are available for the formatter see http://docs.python." "org/library/logging.html#formatter-objects." msgstr "" #: ../support-compute.rst:39 msgid "" "You have two logging options for OpenStack Compute based on configuration " "settings. In ``nova.conf``, include the ``logfile`` option to enable " "logging. Alternatively you can set ``use_syslog = 1`` so that the nova " "daemon logs to syslog." msgstr "" #: ../support-compute.rst:48 msgid "Guru Meditation reports" msgstr "" #: ../support-compute.rst:50 msgid "" "A Guru Meditation report is sent by the Compute service upon receipt of the " "``SIGUSR2`` signal (``SIGUSR1`` before Mitaka). This report is a general-" "purpose error report that includes details about the current state of the " "service. The error report is sent to ``stderr``." msgstr "" #: ../support-compute.rst:55 msgid "" "For example, if you redirect error output to ``nova-api-err.log`` using :" "command:`nova-api 2>/var/log/nova/nova-api-err.log`, resulting in the " "process ID 8675, you can then run:" msgstr "" #: ../support-compute.rst:63 msgid "" "This command triggers the Guru Meditation report to be printed to ``/var/log/" "nova/nova-api-err.log``." msgstr "" #: ../support-compute.rst:66 msgid "The report has the following sections:" msgstr "" #: ../support-compute.rst:68 msgid "" "Package: Displays information about the package to which the process " "belongs, including version information." 
msgstr "" #: ../support-compute.rst:71 msgid "" "Threads: Displays stack traces and thread IDs for each of the threads within " "the process." msgstr "" #: ../support-compute.rst:74 msgid "" "Green Threads: Displays stack traces for each of the green threads within " "the process (green threads do not have thread IDs)." msgstr "" #: ../support-compute.rst:77 msgid "" "Configuration: Lists all configuration options currently accessible through " "the CONF object for the current process." msgstr "" #: ../support-compute.rst:80 msgid "" "For more information, see `Guru Meditation Reports `_." msgstr "" #: ../support-compute.rst:86 msgid "Common errors and fixes for Compute" msgstr "" #: ../support-compute.rst:88 msgid "" "The `ask.openstack.org `_ site offers a place to " "ask and answer questions, and you can also mark questions as frequently " "asked questions. This section describes some errors people have posted " "previously. Bugs are constantly being fixed, so online resources are a great " "way to get the most up-to-date errors and fixes." msgstr "" #: ../support-compute.rst:95 msgid "Credential errors, 401, and 403 forbidden errors" msgstr "" #: ../support-compute.rst:100 msgid "Missing credentials cause a ``403 forbidden`` error." msgstr "" #: ../support-compute.rst:105 msgid "To resolve this issue, use one of these methods:" msgstr "" #: ../support-compute.rst:108 msgid "" "Gets the ``novarc`` file from the project ZIP file, saves existing " "credentials in case of override, and manually sources the ``novarc`` file." msgstr "" #: ../support-compute.rst:110 msgid "Manual method" msgstr "" #: ../support-compute.rst:113 msgid "Generates ``novarc`` from the project ZIP file and sources it for you." msgstr "" #: ../support-compute.rst:113 msgid "Script method" msgstr "" #: ../support-compute.rst:115 msgid "" "When you run ``nova-api`` the first time, it generates the certificate " "authority information, including ``openssl.cnf``. 
If you start the CA " "services before this, you might not be able to create your ZIP file. Restart " "the services. When your CA information is available, create your ZIP file." msgstr "" #: ../support-compute.rst:121 msgid "" "Also, check your HTTP proxy settings to see whether they cause problems with " "``novarc`` creation." msgstr "" #: ../support-compute.rst:125 msgid "Instance errors" msgstr "" #: ../support-compute.rst:130 msgid "" "Sometimes a particular instance shows ``pending`` or you cannot SSH to it. " "Sometimes the image itself is the problem. For example, when you use flat " "manager networking, you do not have a DHCP server and certain images do not " "support interface injection; you cannot connect to them." msgstr "" #: ../support-compute.rst:139 msgid "" "To fix instance errors use an image that does support this method, such as " "Ubuntu, which obtains an IP address correctly with FlatManager network " "settings." msgstr "" #: ../support-compute.rst:143 msgid "" "To troubleshoot other possible problems with an instance, such as an " "instance that stays in a spawning state, check the directory for the " "particular instance under ``/var/lib/nova/instances`` on the ``nova-" "compute`` host and make sure that these files are present:" msgstr "" #: ../support-compute.rst:148 msgid "``libvirt.xml``" msgstr "" #: ../support-compute.rst:149 msgid "``disk``" msgstr "" #: ../support-compute.rst:150 msgid "``disk-raw``" msgstr "" #: ../support-compute.rst:151 msgid "``kernel``" msgstr "" #: ../support-compute.rst:152 msgid "``ramdisk``" msgstr "" #: ../support-compute.rst:153 msgid "``console.log``, after the instance starts." msgstr "" #: ../support-compute.rst:155 msgid "" "If any files are missing, empty, or very small, the ``nova-compute`` service " "did not successfully download the images from the Image service." msgstr "" #: ../support-compute.rst:158 msgid "" "Also check ``nova-compute.log`` for exceptions. 
Sometimes they do not appear " "in the console output." msgstr "" #: ../support-compute.rst:161 msgid "" "Next, check the log file for the instance in the ``/var/log/libvirt/qemu`` " "directory to see if it exists and has any useful error messages in it." msgstr "" #: ../support-compute.rst:164 msgid "" "Finally, from the ``/var/lib/nova/instances`` directory for the instance, " "see if this command returns an error:" msgstr "" #: ../support-compute.rst:172 msgid "Empty log output for Linux instances" msgstr "" #: ../support-compute.rst:177 msgid "" "You can view the log output of running instances from either the :guilabel:" "`Log` tab of the dashboard or the output of :command:`nova console-log`. In " "some cases, the log output of a running Linux instance will be empty or only " "display a single character (for example, the `?` character)." msgstr "" #: ../support-compute.rst:183 msgid "" "This occurs when the Compute service attempts to retrieve the log output of " "the instance via a serial console while the instance itself is not " "configured to send output to the console." msgstr "" #: ../support-compute.rst:190 msgid "" "To rectify this, append the following parameters to kernel arguments " "specified in the instance's boot loader:" msgstr "" #: ../support-compute.rst:197 msgid "" "Upon rebooting, the instance will be configured to send output to the " "Compute service." msgstr "" #: ../support-compute.rst:204 msgid "Reset the state of an instance" msgstr "" #: ../support-compute.rst:209 msgid "Instances can remain in an intermediate state, such as ``deleting``." msgstr "" #: ../support-compute.rst:214 msgid "" "You can use the :command:`nova reset-state` command to manually reset the " "state of an instance to an error state. You can then delete the instance. " "For example:" msgstr "" #: ../support-compute.rst:223 msgid "" "You can also use the :option:`--active` parameter to force the instance back " "to an active state instead of an error state. 
For example:" msgstr "" #: ../support-compute.rst:234 msgid "Injection problems" msgstr "" #: ../support-compute.rst:239 msgid "" "Instances may boot slowly, or do not boot. File injection can cause this " "problem." msgstr "" #: ../support-compute.rst:245 msgid "To disable injection in libvirt, set the following in ``nova.conf``:" msgstr "" #: ../support-compute.rst:254 msgid "" "If you have not enabled the configuration drive and you want to make user-" "specified files available from the metadata server to improve " "performance and avoid boot failure if injection fails, you must disable " "injection." msgstr "" #: ../support-compute.rst:264 msgid "Disable live snapshotting" msgstr "" #: ../support-compute.rst:269 msgid "" "Administrators using libvirt version ``1.2.2`` may experience problems " "with live snapshot creation. Occasionally, libvirt version ``1.2.2`` fails " "to create live snapshots under the load of creating concurrent snapshots." msgstr "" #: ../support-compute.rst:276 msgid "" "To disable libvirt live snapshotting until the problem is " "resolved, configure the ``disable_libvirt_livesnapshot`` option. You can " "turn off the live snapshotting mechanism by setting its value to ``True`` " "in the ``[workarounds]`` section of the ``nova.conf`` file:" msgstr "" #: ../telemetry-alarms.rst:5 msgid "Alarms" msgstr "" #: ../telemetry-alarms.rst:7 msgid "" "Alarms provide user-oriented Monitoring-as-a-Service for resources running " "on OpenStack. This type of monitoring ensures you can automatically scale in " "or out a group of instances through the Orchestration service, but you can " "also use alarms for general-purpose awareness of your cloud resources' " "health." msgstr "" #: ../telemetry-alarms.rst:13 msgid "These alarms follow a tri-state model:" msgstr "" #: ../telemetry-alarms.rst:16 msgid "The rule governing the alarm has been evaluated as ``False``."
msgstr "" #: ../telemetry-alarms.rst:16 msgid "ok" msgstr "" #: ../telemetry-alarms.rst:19 msgid "The rule governing the alarm has been evaluated as ``True``." msgstr "" #: ../telemetry-alarms.rst:19 msgid "alarm" msgstr "" #: ../telemetry-alarms.rst:22 msgid "" "There are not enough datapoints available in the evaluation periods to " "meaningfully determine the alarm state." msgstr "" #: ../telemetry-alarms.rst:23 msgid "insufficient data" msgstr "" #: ../telemetry-alarms.rst:26 msgid "Alarm definitions" msgstr "" #: ../telemetry-alarms.rst:28 msgid "" "The definition of an alarm provides the rules that govern when a state " "transition should occur, and the actions to be taken thereon. The nature of " "these rules depends on the alarm type." msgstr "" #: ../telemetry-alarms.rst:33 msgid "Threshold rule alarms" msgstr "" #: ../telemetry-alarms.rst:35 msgid "" "For conventional threshold-oriented alarms, state transitions are governed " "by:" msgstr "" #: ../telemetry-alarms.rst:38 msgid "" "A static threshold value with a comparison operator such as greater than or " "less than." msgstr "" #: ../telemetry-alarms.rst:41 msgid "A statistic selection to aggregate the data." msgstr "" #: ../telemetry-alarms.rst:43 msgid "" "A sliding time window to indicate how far back into the recent past you want " "to look." msgstr "" #: ../telemetry-alarms.rst:47 msgid "Combination rule alarms" msgstr "" #: ../telemetry-alarms.rst:49 msgid "" "The Telemetry service also supports the concept of a meta-alarm, which " "aggregates over the current state of a set of underlying basic alarms " "combined via a logical operator (AND or OR)." msgstr "" #: ../telemetry-alarms.rst:54 msgid "Alarm dimensioning" msgstr "" #: ../telemetry-alarms.rst:56 msgid "" "A key associated concept is the notion of *dimensioning*, which defines the " "set of matching meters that feed into an alarm evaluation.
Recall that " "meters are per-resource-instance, so in the simplest case an alarm might be " "defined over a particular meter applied to all resources visible to a " "particular user. More useful, however, would be the option to explicitly " "select which specific resources you are interested in alarming on." msgstr "" #: ../telemetry-alarms.rst:64 msgid "" "At one extreme, you might have narrowly dimensioned alarms where this " "selection would have only a single target (identified by resource ID). At " "the other extreme, you could have widely dimensioned alarms where this " "selection identifies many resources over which the statistic is aggregated. " "For example, all instances booted from a particular image, or all instances " "with matching user metadata (the latter is how the Orchestration service " "identifies autoscaling groups)." msgstr "" #: ../telemetry-alarms.rst:74 msgid "Alarm evaluation" msgstr "" #: ../telemetry-alarms.rst:76 msgid "" "Alarms are evaluated by the ``alarm-evaluator`` service on a periodic basis, " "defaulting to once every minute." msgstr "" #: ../telemetry-alarms.rst:80 msgid "Alarm actions" msgstr "" #: ../telemetry-alarms.rst:82 msgid "" "Any state transition of an individual alarm (to ``ok``, ``alarm``, or " "``insufficient data``) may have one or more actions associated with it. " "These actions effectively send a signal to a consumer that the state " "transition has occurred, and provide some additional context. This includes " "the new and previous states, with some reason data describing the " "disposition with respect to the threshold, the number of datapoints involved " "and the most recent of these. State transitions are detected by the ``alarm-" "evaluator``, whereas the ``alarm-notifier`` effects the actual notification " "action."
msgstr "" #: ../telemetry-alarms.rst:92 msgid "**Webhooks**" msgstr "" #: ../telemetry-alarms.rst:94 msgid "" "These are the *de facto* notification type used by Telemetry alarming and " "simply involve an HTTP POST request being sent to an endpoint, with a " "request body containing a description of the state transition encoded as a " "JSON fragment." msgstr "" #: ../telemetry-alarms.rst:99 msgid "**Log actions**" msgstr "" #: ../telemetry-alarms.rst:101 msgid "" "These are a lightweight alternative to webhooks, whereby the state " "transition is simply logged by the ``alarm-notifier``, and are intended " "primarily for testing purposes." msgstr "" #: ../telemetry-alarms.rst:106 msgid "Workload partitioning" msgstr "" #: ../telemetry-alarms.rst:108 msgid "" "The alarm evaluation process uses the same mechanism for workload " "partitioning as the central and compute agents. The `Tooz `_ library provides the coordination within the groups " "of service instances. For further information about this approach, see the " "section called :ref:`Support for HA deployment of the central and compute " "agent services `." msgstr "" #: ../telemetry-alarms.rst:116 msgid "" "To use this workload partitioning solution, set the ``evaluation_service`` " "option to ``default``. For more information, see the alarm section in the " "`OpenStack Configuration Reference `_." msgstr "" #: ../telemetry-alarms.rst:122 msgid "Using alarms" msgstr "" #: ../telemetry-alarms.rst:125 msgid "Alarm creation" msgstr "" #: ../telemetry-alarms.rst:127 msgid "" "An example of creating a threshold-oriented alarm, based on an upper bound " "on the CPU utilization for a particular instance:" msgstr "" #: ../telemetry-alarms.rst:140 msgid "" "This creates an alarm that will fire when the average CPU utilization for an " "individual instance exceeds 70% for three consecutive 10-minute periods. The " "notification in this case is simply a log message, though it could " "alternatively be a webhook URL."
msgstr "" #: ../telemetry-alarms.rst:147 msgid "" "Alarm names must be unique for the alarms associated with an individual " "project. Administrators can limit the maximum resulting actions for three " "different states, and the ability for a normal user to create ``log://`` and " "``test://`` notifiers is disabled. This prevents unintentional consumption " "of disk and memory resources by the Telemetry service." msgstr "" #: ../telemetry-alarms.rst:155 msgid "" "The sliding time window over which the alarm is evaluated is 30 minutes in " "this example. This window is not clamped to wall-clock time boundaries; " "rather, it is anchored on the current time for each evaluation cycle, and " "continually creeps forward as each evaluation cycle rolls around (by " "default, this occurs every minute)." msgstr "" #: ../telemetry-alarms.rst:161 msgid "" "The period length is set to 600s in this case to reflect the out-of-the-box " "default cadence for collection of the associated meter. This period matching " "illustrates an important general principle to keep in mind for alarms:" msgstr "" #: ../telemetry-alarms.rst:168 msgid "" "The alarm period should be a whole number multiple (1 or more) of the " "interval configured in the pipeline corresponding to the target meter." msgstr "" #: ../telemetry-alarms.rst:172 msgid "" "Otherwise the alarm will tend to flit in and out of the ``insufficient " "data`` state due to the mismatch between the actual frequency of datapoints " "in the metering store and the statistics queries used to compare against the " "alarm threshold. If a shorter alarm period is needed, then the corresponding " "interval should be adjusted in the ``pipeline.yaml`` file." msgstr "" #: ../telemetry-alarms.rst:179 msgid "" "Other notable alarm attributes that may be set on creation, or via a " "subsequent update, include:" msgstr "" #: ../telemetry-alarms.rst:183 msgid "The initial alarm state (defaults to ``insufficient data``)."
msgstr "" #: ../telemetry-alarms.rst:183 msgid "state" msgstr "" #: ../telemetry-alarms.rst:186 msgid "" "A free-text description of the alarm (defaults to a synopsis of the alarm " "rule)." msgstr "" #: ../telemetry-alarms.rst:187 msgid "description" msgstr "" #: ../telemetry-alarms.rst:190 msgid "" "True if evaluation and actioning is to be enabled for this alarm (defaults " "to ``True``)." msgstr "" #: ../telemetry-alarms.rst:194 msgid "" "True if actions should be repeatedly notified while the alarm remains in the " "target state (defaults to ``False``)." msgstr "" #: ../telemetry-alarms.rst:195 msgid "repeat-actions" msgstr "" #: ../telemetry-alarms.rst:198 msgid "An action to invoke when the alarm state transitions to ``ok``." msgstr "" #: ../telemetry-alarms.rst:198 msgid "ok-action" msgstr "" #: ../telemetry-alarms.rst:201 msgid "" "An action to invoke when the alarm state transitions to ``insufficient " "data``." msgstr "" #: ../telemetry-alarms.rst:202 msgid "insufficient-data-action" msgstr "" #: ../telemetry-alarms.rst:205 msgid "" "Used to restrict evaluation of the alarm to certain times of the day or days " "of the week (expressed as ``cron`` expression with an optional timezone)." msgstr "" #: ../telemetry-alarms.rst:207 msgid "time-constraint" msgstr "" #: ../telemetry-alarms.rst:209 msgid "" "An example of creating a combination alarm, based on the combined state of " "two underlying alarms:" msgstr "" #: ../telemetry-alarms.rst:220 msgid "" "This creates an alarm that will fire when either one of two underlying " "alarms transition into the alarm state. The notification in this case is a " "webhook call. Any number of underlying alarms can be combined in this way, " "using either ``and`` or ``or``." 
msgstr "" #: ../telemetry-alarms.rst:226 msgid "Alarm retrieval" msgstr "" #: ../telemetry-alarms.rst:228 msgid "" "You can display all your alarms via (some attributes are omitted for " "brevity):" msgstr "" #: ../telemetry-alarms.rst:240 msgid "" "In this case, the state is reported as ``insufficient data``, which could " "indicate that:" msgstr "" #: ../telemetry-alarms.rst:243 msgid "" "meters have not yet been gathered about this instance over the evaluation " "window into the recent past (for example, a brand-new instance)" msgstr "" #: ../telemetry-alarms.rst:247 msgid "" "*or*, that the identified instance is not visible to the user/tenant owning " "the alarm" msgstr "" #: ../telemetry-alarms.rst:250 msgid "" "*or*, simply that an alarm evaluation cycle hasn't kicked off since the " "alarm was created (by default, alarms are evaluated once per minute)." msgstr "" #: ../telemetry-alarms.rst:256 msgid "" "The visibility of alarms depends on the role and project associated with the " "user issuing the query:" msgstr "" #: ../telemetry-alarms.rst:259 msgid "admin users see *all* alarms, regardless of the owner" msgstr "" #: ../telemetry-alarms.rst:261 msgid "" "non-admin users see only the alarms associated with their project (as per " "the normal tenant segregation in OpenStack)" msgstr "" #: ../telemetry-alarms.rst:265 msgid "Alarm update" msgstr "" #: ../telemetry-alarms.rst:267 msgid "" "Once the state of the alarm has settled down, we might decide that we set " "that bar too low with 70%, in which case the threshold (or most any other " "alarm attribute) can be updated as follows:" msgstr "" #: ../telemetry-alarms.rst:275 msgid "" "The change will take effect from the next evaluation cycle, which by default " "occurs every minute."
msgstr "" #: ../telemetry-alarms.rst:278 msgid "" "Most alarm attributes can be changed in this way, but there is also a " "convenient short-cut for getting and setting the alarm state:" msgstr "" #: ../telemetry-alarms.rst:286 msgid "" "Over time the state of the alarm may change often, especially if the " "threshold is chosen to be close to the trending value of the statistic. You " "can follow the history of an alarm over its lifecycle via the audit API:" msgstr "" #: ../telemetry-alarms.rst:306 msgid "Alarm deletion" msgstr "" #: ../telemetry-alarms.rst:308 msgid "" "An alarm that is no longer required can be disabled so that it is no longer " "actively evaluated:" msgstr "" #: ../telemetry-alarms.rst:315 msgid "or even deleted permanently (an irreversible step):" msgstr "" #: ../telemetry-alarms.rst:323 msgid "By default, alarm history is retained for deleted alarms." msgstr "" #: ../telemetry-best-practices.rst:2 msgid "Telemetry best practices" msgstr "" #: ../telemetry-best-practices.rst:4 msgid "" "The following are some suggested best practices to follow when deploying and " "configuring the Telemetry service. The best practices are divided into data " "collection and storage." msgstr "" # #-#-#-#-# telemetry-best-practices.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-best-practices.rst:9 ../telemetry-data-collection.rst:5 msgid "Data collection" msgstr "" #: ../telemetry-best-practices.rst:11 msgid "" "The Telemetry service collects a continuously growing set of data. Not all " "the data will be relevant for an administrator to monitor." msgstr "" #: ../telemetry-best-practices.rst:14 msgid "" "Based on your needs, you can edit the ``pipeline.yaml`` configuration file " "to include a selected number of meters while disregarding the rest." msgstr "" #: ../telemetry-best-practices.rst:18 msgid "" "By default, Telemetry service polls the service APIs every 10 minutes. 
You " "can change the polling interval on a per-meter basis by editing the " "``pipeline.yaml`` configuration file." msgstr "" #: ../telemetry-best-practices.rst:24 msgid "" "If the polling interval is too short, it will likely increase the amount of " "stored data and the load on the service APIs." msgstr "" #: ../telemetry-best-practices.rst:27 msgid "" "Expand the configuration to have greater control over different meter " "intervals." msgstr "" #: ../telemetry-best-practices.rst:32 msgid "For more information, see the :ref:`telemetry-pipeline-configuration`." msgstr "" #: ../telemetry-best-practices.rst:35 msgid "" "If you are using the Kilo version of Telemetry, you can delay or adjust " "polling requests by enabling jitter support. This adds a random delay on " "how the polling agents send requests to the service APIs. To enable jitter, " "set ``shuffle_time_before_polling_task`` in the ``ceilometer.conf`` " "configuration file to an integer greater than 0." msgstr "" #: ../telemetry-best-practices.rst:42 msgid "" "If you are using Juno or later releases, based on the number of resources " "that will be polled, you can add additional central and compute agents as " "necessary. The agents are designed to scale horizontally." msgstr "" #: ../telemetry-best-practices.rst:49 msgid "For more information, see :ref:`ha-deploy-services`." msgstr "" #: ../telemetry-best-practices.rst:51 msgid "" "If you are using Juno or later releases, use the ``notifier://`` publisher " "rather than ``rpc://`` as there is a certain level of overhead that comes " "with RPC." msgstr "" #: ../telemetry-best-practices.rst:57 msgid "" "For more information on RPC overhead, see `RPC overhead info `__." msgstr "" #: ../telemetry-best-practices.rst:62 msgid "Data storage" msgstr "" #: ../telemetry-best-practices.rst:64 msgid "" "We recommend that you avoid open-ended queries.
To get better " "performance, you can use reasonable time ranges and/or other query " "constraints for retrieving measurements." msgstr "" #: ../telemetry-best-practices.rst:68 msgid "" "For example, this open-ended query might return an unpredictable amount of " "data:" msgstr "" #: ../telemetry-best-practices.rst:75 msgid "" "Whereas this well-formed query returns a more reasonable amount of data, " "hence better performance:" msgstr "" #: ../telemetry-best-practices.rst:84 msgid "" "As of the Liberty release, the number of items returned will be restricted " "to the value defined by ``default_api_return_limit`` in the ``ceilometer." "conf`` configuration file. Alternatively, the value can be set per query by " "passing the ``limit`` option in the request." msgstr "" #: ../telemetry-best-practices.rst:89 msgid "" "You can install the API behind ``mod_wsgi``, as it provides more settings to " "tweak, like ``threads`` and ``processes`` in the case of ``WSGIDaemon``." msgstr "" #: ../telemetry-best-practices.rst:95 msgid "" "For more information on how to configure ``mod_wsgi``, see the `Telemetry " "Install Documentation `__." msgstr "" #: ../telemetry-best-practices.rst:99 msgid "" "The collection service provided by the Telemetry project is not intended to " "be an archival service. Set a Time to Live (TTL) value to expire data and " "minimize the database size. If you would like to keep your data for a longer " "time period, you may consider storing it in a data warehouse outside of " "Telemetry." msgstr "" #: ../telemetry-best-practices.rst:107 msgid "" "For more information on how to set the TTL, see :ref:`telemetry-storing-" "samples`." msgstr "" #: ../telemetry-best-practices.rst:110 msgid "" "We recommend that you do not use the SQLAlchemy back end prior to the Juno " "release, as it previously contained extraneous relationships to handle " "deprecated data models. This resulted in extremely poor query performance.
msgstr "" #: ../telemetry-best-practices.rst:115 msgid "" "We recommend that you do not run MongoDB on the same node as the controller. " "Keep it on a separate node optimized for fast storage for better " "performance. Also, it is advisable for the MongoDB node to have a lot of " "memory." msgstr "" #: ../telemetry-best-practices.rst:122 msgid "" "For more information on how much memory you need, see `MongoDB FAQ `__." msgstr "" #: ../telemetry-best-practices.rst:125 msgid "" "Use replica sets in MongoDB. Replica sets provide high availability through " "automatic failover. If your primary node fails, MongoDB will elect a " "secondary node to replace the primary node, and your cluster will remain " "functional." msgstr "" #: ../telemetry-best-practices.rst:130 msgid "" "For more information on replica sets, see the `MongoDB replica sets docs " "`__." msgstr "" #: ../telemetry-best-practices.rst:133 msgid "" "Use sharding in MongoDB. Sharding helps store data records across " "multiple machines and is MongoDB's approach to meeting the demands of data " "growth." msgstr "" #: ../telemetry-best-practices.rst:137 msgid "" "For more information on sharding, see the `MongoDB sharding docs `__." msgstr "" #: ../telemetry-data-collection.rst:7 msgid "" "The main responsibility of Telemetry in OpenStack is to collect information " "about the system that can be used by billing systems or interpreted by " "analytic tooling. The original focus, regarding the collected data, was " "on the counters that can be used for billing, but the range is continuously " "growing." msgstr "" #: ../telemetry-data-collection.rst:13 msgid "" "Collected data can be stored in the form of samples or events in the " "supported databases, listed in :ref:`telemetry-supported-databases`." msgstr "" #: ../telemetry-data-collection.rst:16 msgid "" "Samples can have various sources depending on the needs and configuration of " "Telemetry, which requires multiple methods to collect data.
msgstr "" #: ../telemetry-data-collection.rst:20 msgid "The available data collection mechanisms are:" msgstr "" #: ../telemetry-data-collection.rst:23 msgid "" "Processing notifications from other OpenStack services, by consuming " "messages from the configured message queue system." msgstr "" #: ../telemetry-data-collection.rst:27 msgid "" "Retrieving information directly from the hypervisor or from the host machine " "using SNMP, or by using the APIs of other OpenStack services." msgstr "" #: ../telemetry-data-collection.rst:29 ../telemetry-data-collection.rst:214 msgid "Polling" msgstr "" #: ../telemetry-data-collection.rst:32 msgid "Pushing samples via the RESTful API of Telemetry." msgstr "" #: ../telemetry-data-collection.rst:32 msgid "RESTful API" msgstr "" #: ../telemetry-data-collection.rst:36 msgid "" "All the services send notifications about the executed operations or system " "state in OpenStack. Several notifications carry information that can be " "metered, like the CPU time of a VM instance created by the OpenStack Compute " "service." msgstr "" #: ../telemetry-data-collection.rst:41 msgid "" "The Telemetry service has a separate agent that is responsible for consuming " "notifications, namely the notification agent. This component is responsible " "for consuming from the message bus and transforming notifications into " "events and measurement samples. Beginning in the Liberty release, the " "notification agent is responsible for all data processing such as " "transformations and publishing. After processing, the data is sent via AMQP " "to the collector service or any external service, which is responsible for " "persisting the data into the configured database back end." msgstr "" #: ../telemetry-data-collection.rst:51 msgid "" "The different OpenStack services emit several notifications about the " "various types of events that happen in the system during normal operation.
" "Not all these notifications are consumed by the Telemetry service, as the " "intention is only to capture the billable events and notifications that can " "be used for monitoring or profiling purposes. The notification agent filters " "by the event type that is contained in each notification message. The " "following table contains the event types for each OpenStack service that are " "transformed into samples by Telemetry." msgstr "" #: ../telemetry-data-collection.rst:61 msgid "Event types" msgstr "" # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-data-collection.rst:61 ../telemetry-data-collection.rst:1114 #: ../telemetry-measurements.rst:100 ../telemetry-measurements.rst:434 #: ../telemetry-measurements.rst:489 ../telemetry-measurements.rst:533 #: ../telemetry-measurements.rst:595 ../telemetry-measurements.rst:671 #: ../telemetry-measurements.rst:705 ../telemetry-measurements.rst:771 #: ../telemetry-measurements.rst:821 ../telemetry-measurements.rst:859 #: ../telemetry-measurements.rst:931 ../telemetry-measurements.rst:995 #: ../telemetry-measurements.rst:1082 ../telemetry-measurements.rst:1162 #: ../telemetry-measurements.rst:1257 ../telemetry-measurements.rst:1323 #: ../telemetry-measurements.rst:1374 ../telemetry-measurements.rst:1401 #: ../telemetry-measurements.rst:1425 ../telemetry-measurements.rst:1446 msgid "Note" msgstr "" #: ../telemetry-data-collection.rst:61 msgid "OpenStack service" msgstr "" #: ../telemetry-data-collection.rst:63 msgid "" "For a more detailed list of Compute notifications, please check the `System " "Usage Data wiki page `__.
msgstr "" # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-data-collection.rst:63 ../telemetry-measurements.rst:95 msgid "OpenStack Compute" msgstr "" #: ../telemetry-data-collection.rst:63 msgid "scheduler.run\\_insta\\ nce.scheduled" msgstr "" #: ../telemetry-data-collection.rst:66 msgid "scheduler.select\\_\\ destinations" msgstr "" #: ../telemetry-data-collection.rst:69 msgid "compute.instance.\\*" msgstr "" # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-data-collection.rst:71 ../telemetry-measurements.rst:472 msgid "Bare metal service" msgstr "" #: ../telemetry-data-collection.rst:71 msgid "hardware.ipmi.\\*" msgstr "" # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-data-collection.rst:73 ../telemetry-measurements.rst:666 msgid "OpenStack Image service" msgstr "" #: ../telemetry-data-collection.rst:73 msgid "" "The required configuration for Image service can be found in `Configure the " "Image service for Telemetry section `__ section in the OpenStack " "Installation Guide" msgstr "" # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-data-collection.rst:73 ../telemetry-measurements.rst:683 msgid "image.update" msgstr "" # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-data-collection.rst:75 ../telemetry-measurements.rst:686 msgid "image.upload" msgstr "" # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot 
(Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-data-collection.rst:77 ../telemetry-measurements.rst:689 msgid "image.delete" msgstr "" #: ../telemetry-data-collection.rst:79 msgid "image.send" msgstr "" # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-data-collection.rst:83 ../telemetry-data-collection.rst:282 #: ../telemetry-measurements.rst:926 msgid "OpenStack Networking" msgstr "" #: ../telemetry-data-collection.rst:83 msgid "floatingip.create.end" msgstr "" #: ../telemetry-data-collection.rst:85 msgid "floatingip.update.\\*" msgstr "" #: ../telemetry-data-collection.rst:87 msgid "floatingip.exists" msgstr "" #: ../telemetry-data-collection.rst:89 msgid "network.create.end" msgstr "" #: ../telemetry-data-collection.rst:91 msgid "network.update.\\*" msgstr "" #: ../telemetry-data-collection.rst:93 msgid "network.exists" msgstr "" #: ../telemetry-data-collection.rst:95 msgid "port.create.end" msgstr "" #: ../telemetry-data-collection.rst:97 msgid "port.update.\\*" msgstr "" #: ../telemetry-data-collection.rst:99 msgid "port.exists" msgstr "" #: ../telemetry-data-collection.rst:101 msgid "router.create.end" msgstr "" #: ../telemetry-data-collection.rst:103 msgid "router.update.\\*" msgstr "" #: ../telemetry-data-collection.rst:105 msgid "router.exists" msgstr "" #: ../telemetry-data-collection.rst:107 msgid "subnet.create.end" msgstr "" #: ../telemetry-data-collection.rst:109 msgid "subnet.update.\\*" msgstr "" #: ../telemetry-data-collection.rst:111 msgid "subnet.exists" msgstr "" #: ../telemetry-data-collection.rst:113 msgid "l3.meter" msgstr "" # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-data-collection.rst:115 ../telemetry-measurements.rst:1370 msgid "Orchestration service" msgstr "" #: 
../telemetry-data-collection.rst:115 msgid "orchestration.stack\\ .create.end" msgstr "" #: ../telemetry-data-collection.rst:118 msgid "orchestration.stack\\ .update.end" msgstr "" #: ../telemetry-data-collection.rst:121 msgid "orchestration.stack\\ .delete.end" msgstr "" #: ../telemetry-data-collection.rst:124 msgid "orchestration.stack\\ .resume.end" msgstr "" #: ../telemetry-data-collection.rst:127 msgid "orchestration.stack\\ .suspend.end" msgstr "" # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-data-collection.rst:130 ../telemetry-data-collection.rst:286 #: ../telemetry-measurements.rst:700 msgid "OpenStack Block Storage" msgstr "" #: ../telemetry-data-collection.rst:130 msgid "" "The required configuration for Block Storage service can be found in the " "`Add the Block Storage service agent for Telemetry section `__ " "section in the OpenStack Installation Guide." 
msgstr "" #: ../telemetry-data-collection.rst:130 msgid "volume.exists" msgstr "" #: ../telemetry-data-collection.rst:132 msgid "volume.create.\\*" msgstr "" #: ../telemetry-data-collection.rst:134 msgid "volume.delete.\\*" msgstr "" #: ../telemetry-data-collection.rst:136 msgid "volume.update.\\*" msgstr "" #: ../telemetry-data-collection.rst:138 msgid "volume.resize.\\*" msgstr "" #: ../telemetry-data-collection.rst:140 msgid "volume.attach.\\*" msgstr "" #: ../telemetry-data-collection.rst:142 msgid "volume.detach.\\*" msgstr "" #: ../telemetry-data-collection.rst:144 msgid "snapshot.exists" msgstr "" #: ../telemetry-data-collection.rst:146 msgid "snapshot.create.\\*" msgstr "" #: ../telemetry-data-collection.rst:148 msgid "snapshot.delete.\\*" msgstr "" #: ../telemetry-data-collection.rst:150 msgid "snapshot.update.\\*" msgstr "" #: ../telemetry-data-collection.rst:152 msgid "volume.backup.create.\\ \\*" msgstr "" #: ../telemetry-data-collection.rst:155 msgid "volume.backup.delete.\\ \\*" msgstr "" #: ../telemetry-data-collection.rst:158 msgid "volume.backup.restore.\\ \\*" msgstr "" #: ../telemetry-data-collection.rst:164 msgid "" "Some services require additional configuration to emit the notifications " "using the correct control exchange on the message queue and so forth. These " "configuration needs are referred to in the above table for each OpenStack " "service that needs it." msgstr "" #: ../telemetry-data-collection.rst:169 msgid "" "Specific notifications from the Compute service are important for " "administrators and users. Configuring ``nova_notifications`` in the ``nova." "conf`` file allows administrators to respond to events rapidly. For more " "information on configuring notifications for the compute service, see " "`Telemetry services `__ in the OpenStack Installation Guide."
msgstr "" #: ../telemetry-data-collection.rst:180 msgid "" "Prior to the Kilo release, when the ``store_events`` option was set to " "``True`` in ``ceilometer.conf``, the notification agent needed database " "access in order to work properly." msgstr "" #: ../telemetry-data-collection.rst:185 msgid "Middleware for the OpenStack Object Storage service" msgstr "" #: ../telemetry-data-collection.rst:187 msgid "" "A subset of Object Store statistics requires additional middleware to be " "installed behind the proxy of Object Store. This additional component emits " "notifications containing data-flow-oriented meters, namely the ``storage." "objects.(incoming|outgoing).bytes`` values. These meters are " "listed in :ref:`telemetry-object-storage-meter`, marked with " "``notification`` as origin." msgstr "" #: ../telemetry-data-collection.rst:194 msgid "" "The instructions on how to install this middleware can be found in the " "`Configure the Object Storage service for Telemetry `__ section in the " "OpenStack Installation Guide." msgstr "" #: ../telemetry-data-collection.rst:200 msgid "Telemetry middleware" msgstr "" #: ../telemetry-data-collection.rst:202 msgid "" "Telemetry provides the capability of counting the HTTP requests and " "responses for each API endpoint in OpenStack. This is achieved by storing a " "sample for each event marked as ``audit.http.request``, ``audit.http." "response``, ``http.request`` or ``http.response``." msgstr "" #: ../telemetry-data-collection.rst:207 msgid "" "It is recommended that these notifications be consumed as events rather than " "samples to better index the appropriate values and avoid massive load on the " "Metering database. If preferred, Telemetry can consume these events as " "samples if the services are configured to emit ``http.*`` notifications." msgstr "" #: ../telemetry-data-collection.rst:216 msgid "" "The Telemetry service is intended to store a complex picture of the " "infrastructure.
This goal requires more information than is " "provided by the events and notifications published by each service. Some " "information is not emitted directly, like resource usage of the VM instances." msgstr "" #: ../telemetry-data-collection.rst:222 msgid "" "Therefore, Telemetry uses another method to gather this data by polling the " "infrastructure, including the APIs of the different OpenStack services and " "other assets, like hypervisors. The latter case requires closer interaction " "with the compute hosts. To solve this issue, Telemetry uses an agent-based " "architecture to fulfill the data collection requirements." msgstr "" #: ../telemetry-data-collection.rst:229 msgid "" "There are three types of agents supporting the polling mechanism: the " "``compute agent``, the ``central agent``, and the ``IPMI agent``. Under the " "hood, all the types of polling agents are the same ``ceilometer-polling`` " "agent, except that they load different polling plug-ins (pollsters) from " "different namespaces to gather data. The following subsections give further " "information regarding the architectural and configuration details of these " "components." msgstr "" #: ../telemetry-data-collection.rst:237 msgid "Running :command:`ceilometer-agent-compute` is exactly the same as:" msgstr "" #: ../telemetry-data-collection.rst:243 msgid "Running :command:`ceilometer-agent-central` is exactly the same as:" msgstr "" #: ../telemetry-data-collection.rst:249 msgid "Running :command:`ceilometer-agent-ipmi` is exactly the same as:" msgstr "" #: ../telemetry-data-collection.rst:255 msgid "" "In addition to loading all the polling plug-ins registered in the specified " "namespaces, the ``ceilometer-polling`` agent can also specify the polling " "plug-ins to be loaded by using the ``pollster-list`` option:" msgstr "" #: ../telemetry-data-collection.rst:266 msgid "HA deployment is NOT supported if the ``pollster-list`` option is used."
msgstr "" #: ../telemetry-data-collection.rst:271 msgid "The ``ceilometer-polling`` service is available since the Kilo release." msgstr "" #: ../telemetry-data-collection.rst:274 msgid "Central agent" msgstr "" #: ../telemetry-data-collection.rst:276 msgid "" "This agent is responsible for polling public REST APIs to retrieve " "additional information on OpenStack resources not already surfaced via " "notifications, and also for polling hardware resources over SNMP." msgstr "" #: ../telemetry-data-collection.rst:280 msgid "The following services can be polled with this agent:" msgstr "" # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-data-collection.rst:284 ../telemetry-measurements.rst:766 msgid "OpenStack Object Storage" msgstr "" #: ../telemetry-data-collection.rst:288 msgid "Hardware resources via SNMP" msgstr "" #: ../telemetry-data-collection.rst:290 msgid "" "Energy consumption meters via the `Kwapi `__ " "framework" msgstr "" #: ../telemetry-data-collection.rst:293 msgid "" "To install and configure this service, use the `Add the Telemetry service " "`__ " "section in the OpenStack Installation Guide." msgstr "" #: ../telemetry-data-collection.rst:297 msgid "" "The central agent does not need a direct database connection. The samples " "collected by this agent are sent via AMQP to the notification agent to be " "processed." msgstr "" #: ../telemetry-data-collection.rst:303 msgid "" "Prior to the Liberty release, data from the polling agents was processed " "locally and published accordingly rather than by the notification agent." msgstr "" #: ../telemetry-data-collection.rst:307 msgid "Compute agent" msgstr "" #: ../telemetry-data-collection.rst:309 msgid "" "This agent is responsible for collecting resource usage data of VM instances " "on individual compute nodes within an OpenStack deployment.
This mechanism " "requires closer interaction with the hypervisor; therefore, the related " "meters are collected by a separate agent type, which is placed on the host " "machines to retrieve this information locally." msgstr "" #: ../telemetry-data-collection.rst:316 msgid "" "A compute agent instance has to be installed on each and every compute node; " "installation instructions can be found in the `Install the Compute agent for " "Telemetry `__ section in the OpenStack Installation Guide." msgstr "" #: ../telemetry-data-collection.rst:322 msgid "" "Just like the central agent, this component also does not need a direct " "database connection. The samples are sent via AMQP to the notification agent." msgstr "" #: ../telemetry-data-collection.rst:325 msgid "" "The list of supported hypervisors can be found in :ref:`telemetry-supported-" "hypervisors`. The compute agent uses the API of the hypervisor installed on " "the compute hosts. Therefore, the supported meters may differ for each " "virtualization back end, as each inspection tool provides a different set of " "meters." msgstr "" #: ../telemetry-data-collection.rst:331 msgid "" "The list of collected meters can be found in :ref:`telemetry-compute-" "meters`. The support column shows which meters are available for each " "hypervisor supported by the Telemetry service." msgstr "" #: ../telemetry-data-collection.rst:337 msgid "Telemetry supports Libvirt, which hides the underlying hypervisor." msgstr "" #: ../telemetry-data-collection.rst:342 msgid "IPMI agent" msgstr "" #: ../telemetry-data-collection.rst:344 msgid "" "This agent is responsible for collecting IPMI sensor data and Intel Node " "Manager data on individual compute nodes within an OpenStack deployment. " "This agent requires an IPMI-capable node with the ipmitool utility " "installed, which is commonly used for IPMI control on various Linux " "distributions."
msgstr "" #: ../telemetry-data-collection.rst:349 msgid "" "An IPMI agent instance can be installed on each and every compute node " "with IPMI support, except when the node is managed by the Bare metal service " "and the ``conductor.send_sensor_data`` option is set to ``true`` in the Bare " "metal service. It does no harm to install this agent on a compute node " "without IPMI or Intel Node Manager support, as the agent checks for the " "hardware and, if none is available, returns empty data. It is suggested that " "you install the IPMI agent only on an IPMI-capable node for performance " "reasons." msgstr "" #: ../telemetry-data-collection.rst:358 msgid "" "Just like the central agent, this component also does not need direct " "database access. The samples are sent via AMQP to the notification agent." msgstr "" #: ../telemetry-data-collection.rst:361 msgid "" "The list of collected meters can be found in :ref:`telemetry-bare-metal-" "service`." msgstr "" #: ../telemetry-data-collection.rst:366 msgid "" "Do not deploy both the IPMI agent and the Bare metal service on one compute " "node. If ``conductor.send_sensor_data`` is set, this misconfiguration causes " "duplicated IPMI sensor samples." msgstr "" #: ../telemetry-data-collection.rst:374 msgid "Support for HA deployment" msgstr "" #: ../telemetry-data-collection.rst:375 msgid "" "Both the polling agents and notification agents can run in an HA deployment, " "which means that multiple instances of these services can run in parallel " "with workload partitioning among these running instances." msgstr "" #: ../telemetry-data-collection.rst:379 msgid "" "The `Tooz `__ library provides " "coordination within groups of service instances. It provides an API over " "several back ends that can be used for building distributed applications."
msgstr "" #: ../telemetry-data-collection.rst:384 msgid "" "Tooz supports `various drivers `__, including the following back end solutions:" msgstr "" #: ../telemetry-data-collection.rst:388 msgid "" "`Zookeeper `__. Recommended solution by the " "Tooz project." msgstr "" #: ../telemetry-data-collection.rst:391 msgid "`Redis `__. Recommended solution by the Tooz project." msgstr "" #: ../telemetry-data-collection.rst:394 msgid "`Memcached `__. Recommended for testing." msgstr "" #: ../telemetry-data-collection.rst:396 msgid "" "You must configure a supported Tooz driver for the HA deployment of the " "Telemetry services." msgstr "" #: ../telemetry-data-collection.rst:399 msgid "" "For information about the required configuration options that have to be set " "in the ``ceilometer.conf`` configuration file for both the central and " "compute agents, see the `Coordination section `__ in " "the OpenStack Configuration Reference." msgstr "" #: ../telemetry-data-collection.rst:406 msgid "Notification agent HA deployment" msgstr "" #: ../telemetry-data-collection.rst:408 msgid "" "In the Kilo release, workload partitioning support was added to the " "notification agent. This is particularly useful as the pipeline processing " "is now handled exclusively by the notification agent, which may result in a " "larger load." msgstr "" #: ../telemetry-data-collection.rst:413 msgid "" "To enable workload partitioning by the notification agent, the " "``backend_url`` option must be set in the ``ceilometer.conf`` configuration " "file. Additionally, ``workload_partitioning`` should be enabled in the " "`Notification section `__ in the OpenStack " "Configuration Reference." msgstr "" #: ../telemetry-data-collection.rst:420 msgid "" "In Liberty, the notification agent creates multiple queues to divide the " "workload across all active agents. The number of queues can be controlled by " "the ``pipeline_processing_queues`` option in the ``ceilometer.conf`` " "configuration file.
A larger value will result in better distribution of " "tasks but will also require more memory and longer startup time. It is " "recommended to have a value approximately three times the number of active " "notification agents. At a minimum, the value should be equal to the number " "of active agents." msgstr "" #: ../telemetry-data-collection.rst:430 msgid "Polling agent HA deployment" msgstr "" #: ../telemetry-data-collection.rst:434 msgid "" "Without the ``backend_url`` option set, only one instance each of the " "central and compute agent service is able to run and function correctly." msgstr "" #: ../telemetry-data-collection.rst:438 msgid "" "The availability check of the instances is provided by heartbeat messages. " "When the connection with an instance is lost, the workload will be " "reassigned among the remaining instances in the next polling cycle." msgstr "" #: ../telemetry-data-collection.rst:445 msgid "" "``Memcached`` uses a ``timeout`` value, which should always be set to a " "value that is higher than the ``heartbeat`` value set for Telemetry." msgstr "" #: ../telemetry-data-collection.rst:449 msgid "" "For backward compatibility and to support existing deployments, the central " "agent configuration also supports using different configuration files for " "groups of service instances of this type that are running in parallel. To " "enable this configuration, set a value for the " "``partitioning_group_prefix`` option in the `Central section `__ in the OpenStack Configuration " "Reference." msgstr "" #: ../telemetry-data-collection.rst:459 msgid "" "For each sub-group of the central agent pool with the same " "``partitioning_group_prefix``, a disjoint subset of meters must be polled; " "otherwise samples may be missing or duplicated. The list of meters to poll " "can be set in the ``/etc/ceilometer/pipeline.yaml`` configuration file. For " "more information about pipelines, see :ref:`data-collection-and-processing`."
msgstr "" #: ../telemetry-data-collection.rst:466 msgid "" "To enable the compute agent to run multiple instances simultaneously with " "workload partitioning, the ``workload_partitioning`` option has to be set to " "``True`` under the `Compute section `__ in the " "``ceilometer.conf`` configuration file." msgstr "" #: ../telemetry-data-collection.rst:474 msgid "Send samples to Telemetry" msgstr "" #: ../telemetry-data-collection.rst:476 msgid "" "While most parts of the data collection in the Telemetry service are " "automated, Telemetry provides the possibility to submit samples via the REST " "API to allow users to send custom samples into this service." msgstr "" #: ../telemetry-data-collection.rst:480 msgid "" "This option makes it possible to send any kind of samples without having to " "write extra code or make configuration changes." msgstr "" #: ../telemetry-data-collection.rst:483 msgid "" "The samples that can be sent to Telemetry are not limited to the actual " "existing meters. There is a possibility to provide data for any new, " "customer-defined counter by filling out all the required fields of the POST " "request." msgstr "" #: ../telemetry-data-collection.rst:488 msgid "" "If the sample corresponds to an existing meter, then fields like ``meter-" "type`` and the meter name should be matched accordingly." msgstr "" #: ../telemetry-data-collection.rst:491 msgid "" "The required fields for sending a sample using the command-line client are:" msgstr "" #: ../telemetry-data-collection.rst:494 msgid "ID of the corresponding resource. (:option:`--resource-id`)" msgstr "" #: ../telemetry-data-collection.rst:496 msgid "Name of meter. (:option:`--meter-name`)" msgstr "" #: ../telemetry-data-collection.rst:498 msgid "Type of meter.
(:option:`--meter-type`)" msgstr "" #: ../telemetry-data-collection.rst:500 msgid "Predefined meter types:" msgstr "" # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-data-collection.rst:502 ../telemetry-measurements.rst:37 #: ../telemetry-measurements.rst:104 ../telemetry-measurements.rst:108 #: ../telemetry-measurements.rst:112 ../telemetry-measurements.rst:116 #: ../telemetry-measurements.rst:125 ../telemetry-measurements.rst:129 #: ../telemetry-measurements.rst:136 ../telemetry-measurements.rst:143 #: ../telemetry-measurements.rst:150 ../telemetry-measurements.rst:157 #: ../telemetry-measurements.rst:161 ../telemetry-measurements.rst:164 #: ../telemetry-measurements.rst:171 ../telemetry-measurements.rst:179 #: ../telemetry-measurements.rst:187 ../telemetry-measurements.rst:196 #: ../telemetry-measurements.rst:203 ../telemetry-measurements.rst:208 #: ../telemetry-measurements.rst:213 ../telemetry-measurements.rst:219 #: ../telemetry-measurements.rst:224 ../telemetry-measurements.rst:229 #: ../telemetry-measurements.rst:239 ../telemetry-measurements.rst:248 #: ../telemetry-measurements.rst:257 ../telemetry-measurements.rst:266 #: ../telemetry-measurements.rst:271 ../telemetry-measurements.rst:276 #: ../telemetry-measurements.rst:281 ../telemetry-measurements.rst:286 #: ../telemetry-measurements.rst:293 ../telemetry-measurements.rst:299 #: ../telemetry-measurements.rst:304 ../telemetry-measurements.rst:307 #: ../telemetry-measurements.rst:310 ../telemetry-measurements.rst:314 #: ../telemetry-measurements.rst:317 ../telemetry-measurements.rst:321 #: ../telemetry-measurements.rst:327 ../telemetry-measurements.rst:332 #: ../telemetry-measurements.rst:337 ../telemetry-measurements.rst:343 #: ../telemetry-measurements.rst:351 ../telemetry-measurements.rst:438 #: ../telemetry-measurements.rst:453 ../telemetry-measurements.rst:456 #: 
../telemetry-measurements.rst:459 ../telemetry-measurements.rst:462 #: ../telemetry-measurements.rst:465 ../telemetry-measurements.rst:493 #: ../telemetry-measurements.rst:496 ../telemetry-measurements.rst:499 #: ../telemetry-measurements.rst:503 ../telemetry-measurements.rst:506 #: ../telemetry-measurements.rst:537 ../telemetry-measurements.rst:540 #: ../telemetry-measurements.rst:546 ../telemetry-measurements.rst:549 #: ../telemetry-measurements.rst:552 ../telemetry-measurements.rst:557 #: ../telemetry-measurements.rst:562 ../telemetry-measurements.rst:566 #: ../telemetry-measurements.rst:570 ../telemetry-measurements.rst:599 #: ../telemetry-measurements.rst:602 ../telemetry-measurements.rst:605 #: ../telemetry-measurements.rst:608 ../telemetry-measurements.rst:611 #: ../telemetry-measurements.rst:614 ../telemetry-measurements.rst:617 #: ../telemetry-measurements.rst:620 ../telemetry-measurements.rst:623 #: ../telemetry-measurements.rst:626 ../telemetry-measurements.rst:629 #: ../telemetry-measurements.rst:661 ../telemetry-measurements.rst:675 #: ../telemetry-measurements.rst:679 ../telemetry-measurements.rst:709 #: ../telemetry-measurements.rst:712 ../telemetry-measurements.rst:717 #: ../telemetry-measurements.rst:720 ../telemetry-measurements.rst:775 #: ../telemetry-measurements.rst:778 ../telemetry-measurements.rst:781 #: ../telemetry-measurements.rst:795 ../telemetry-measurements.rst:798 #: ../telemetry-measurements.rst:825 ../telemetry-measurements.rst:827 #: ../telemetry-measurements.rst:830 ../telemetry-measurements.rst:833 #: ../telemetry-measurements.rst:838 ../telemetry-measurements.rst:841 #: ../telemetry-measurements.rst:935 ../telemetry-measurements.rst:945 #: ../telemetry-measurements.rst:955 ../telemetry-measurements.rst:964 #: ../telemetry-measurements.rst:974 ../telemetry-measurements.rst:999 #: ../telemetry-measurements.rst:1002 ../telemetry-measurements.rst:1043 #: ../telemetry-measurements.rst:1046 ../telemetry-measurements.rst:1049 #: 
../telemetry-measurements.rst:1052 ../telemetry-measurements.rst:1055 #: ../telemetry-measurements.rst:1057 ../telemetry-measurements.rst:1060 #: ../telemetry-measurements.rst:1086 ../telemetry-measurements.rst:1090 #: ../telemetry-measurements.rst:1094 ../telemetry-measurements.rst:1098 #: ../telemetry-measurements.rst:1106 ../telemetry-measurements.rst:1110 #: ../telemetry-measurements.rst:1114 ../telemetry-measurements.rst:1164 #: ../telemetry-measurements.rst:1168 ../telemetry-measurements.rst:1172 #: ../telemetry-measurements.rst:1176 ../telemetry-measurements.rst:1180 #: ../telemetry-measurements.rst:1188 ../telemetry-measurements.rst:1192 #: ../telemetry-measurements.rst:1196 ../telemetry-measurements.rst:1261 #: ../telemetry-measurements.rst:1265 ../telemetry-measurements.rst:1290 #: ../telemetry-measurements.rst:1304 ../telemetry-measurements.rst:1327 #: ../telemetry-measurements.rst:1331 ../telemetry-measurements.rst:1355 #: ../telemetry-measurements.rst:1429 ../telemetry-measurements.rst:1432 #: ../telemetry-measurements.rst:1435 ../telemetry-measurements.rst:1452 msgid "Gauge" msgstr "" # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-data-collection.rst:504 ../telemetry-data-collection.rst:679 #: ../telemetry-measurements.rst:35 ../telemetry-measurements.rst:358 #: ../telemetry-measurements.rst:683 ../telemetry-measurements.rst:686 #: ../telemetry-measurements.rst:689 ../telemetry-measurements.rst:692 #: ../telemetry-measurements.rst:695 ../telemetry-measurements.rst:725 #: ../telemetry-measurements.rst:728 ../telemetry-measurements.rst:731 #: ../telemetry-measurements.rst:735 ../telemetry-measurements.rst:738 #: ../telemetry-measurements.rst:742 ../telemetry-measurements.rst:746 #: ../telemetry-measurements.rst:749 ../telemetry-measurements.rst:752 #: ../telemetry-measurements.rst:755 ../telemetry-measurements.rst:758 #: 
../telemetry-measurements.rst:784 ../telemetry-measurements.rst:787 #: ../telemetry-measurements.rst:790 ../telemetry-measurements.rst:863 #: ../telemetry-measurements.rst:866 ../telemetry-measurements.rst:869 #: ../telemetry-measurements.rst:872 ../telemetry-measurements.rst:875 #: ../telemetry-measurements.rst:878 ../telemetry-measurements.rst:881 #: ../telemetry-measurements.rst:884 ../telemetry-measurements.rst:887 #: ../telemetry-measurements.rst:890 ../telemetry-measurements.rst:893 #: ../telemetry-measurements.rst:896 ../telemetry-measurements.rst:899 #: ../telemetry-measurements.rst:902 ../telemetry-measurements.rst:905 #: ../telemetry-measurements.rst:908 ../telemetry-measurements.rst:911 #: ../telemetry-measurements.rst:916 ../telemetry-measurements.rst:920 #: ../telemetry-measurements.rst:938 ../telemetry-measurements.rst:942 #: ../telemetry-measurements.rst:948 ../telemetry-measurements.rst:952 #: ../telemetry-measurements.rst:958 ../telemetry-measurements.rst:961 #: ../telemetry-measurements.rst:967 ../telemetry-measurements.rst:971 #: ../telemetry-measurements.rst:978 ../telemetry-measurements.rst:981 #: ../telemetry-measurements.rst:984 ../telemetry-measurements.rst:1120 #: ../telemetry-measurements.rst:1124 ../telemetry-measurements.rst:1128 #: ../telemetry-measurements.rst:1132 ../telemetry-measurements.rst:1136 #: ../telemetry-measurements.rst:1140 ../telemetry-measurements.rst:1144 #: ../telemetry-measurements.rst:1149 ../telemetry-measurements.rst:1200 #: ../telemetry-measurements.rst:1204 ../telemetry-measurements.rst:1208 #: ../telemetry-measurements.rst:1212 ../telemetry-measurements.rst:1216 #: ../telemetry-measurements.rst:1220 ../telemetry-measurements.rst:1224 #: ../telemetry-measurements.rst:1229 ../telemetry-measurements.rst:1234 #: ../telemetry-measurements.rst:1239 ../telemetry-measurements.rst:1272 #: ../telemetry-measurements.rst:1276 ../telemetry-measurements.rst:1280 #: ../telemetry-measurements.rst:1285 
../telemetry-measurements.rst:1294 #: ../telemetry-measurements.rst:1299 ../telemetry-measurements.rst:1308 #: ../telemetry-measurements.rst:1312 ../telemetry-measurements.rst:1337 #: ../telemetry-measurements.rst:1341 ../telemetry-measurements.rst:1345 #: ../telemetry-measurements.rst:1350 ../telemetry-measurements.rst:1359 #: ../telemetry-measurements.rst:1364 ../telemetry-measurements.rst:1378 #: ../telemetry-measurements.rst:1381 ../telemetry-measurements.rst:1384 #: ../telemetry-measurements.rst:1387 ../telemetry-measurements.rst:1390 #: ../telemetry-measurements.rst:1405 ../telemetry-measurements.rst:1410 #: ../telemetry-measurements.rst:1414 msgid "Delta" msgstr "" # #-#-#-#-# telemetry-data-collection.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# telemetry-measurements.pot (Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-data-collection.rst:506 ../telemetry-measurements.rst:33 #: ../telemetry-measurements.rst:1450 msgid "Cumulative" msgstr "" #: ../telemetry-data-collection.rst:508 msgid "Unit of meter. (:option:`--meter-unit`)" msgstr "" #: ../telemetry-data-collection.rst:510 msgid "Volume of sample. (:option:`--sample-volume`)" msgstr "" #: ../telemetry-data-collection.rst:512 msgid "" "To send samples to Telemetry using the command-line client, the following " "command should be invoked:" msgstr "" #: ../telemetry-data-collection.rst:538 msgid "Data collection and processing" msgstr "" #: ../telemetry-data-collection.rst:540 msgid "" "The mechanism by which data is collected and processed is called a pipeline. " "Pipelines, at the configuration level, describe a coupling between sources " "of data and the corresponding sinks for transformation and publication of " "data." msgstr "" #: ../telemetry-data-collection.rst:545 msgid "" "A source is a producer of data: ``samples`` or ``events``. In effect, it is " "a set of pollsters or notification handlers emitting datapoints for a set of " "matching meters and event types." 
msgstr "" #: ../telemetry-data-collection.rst:549 msgid "" "Each source configuration encapsulates name matching, polling interval " "determination, optional resource enumeration or discovery, and mapping to " "one or more sinks for publication." msgstr "" #: ../telemetry-data-collection.rst:553 msgid "" "Data gathered can be used for different purposes, which can impact how " "frequently it needs to be published. Typically, a meter published for " "billing purposes needs to be updated every 30 minutes, while the same meter " "may be needed for performance tuning every minute." msgstr "" #: ../telemetry-data-collection.rst:560 msgid "" "Rapid polling cadences should be avoided, as they result in a huge amount of " "data in a short time frame, which may negatively affect the performance of " "both Telemetry and the underlying database back end. We therefore strongly " "recommend you do not use small granularity values like 10 seconds." msgstr "" #: ../telemetry-data-collection.rst:566 msgid "" "A sink, on the other hand, is a consumer of data, providing logic for the " "transformation and publication of data emitted from related sources." msgstr "" #: ../telemetry-data-collection.rst:569 msgid "" "In effect, a sink describes a chain of handlers. The chain starts with zero " "or more transformers and ends with one or more publishers. The first " "transformer in the chain is passed data from the corresponding source, takes " "some action such as deriving rate of change, performing unit conversion, or " "aggregating, before passing the modified data to the next step that is " "described in :ref:`telemetry-publishers`." msgstr "" #: ../telemetry-data-collection.rst:579 msgid "Pipeline configuration" msgstr "" #: ../telemetry-data-collection.rst:580 msgid "" "Pipeline configuration, by default, is stored in separate configuration " "files, called ``pipeline.yaml`` and ``event_pipeline.yaml``, next to the " "``ceilometer.conf`` file.
The meter pipeline and event pipeline " "configuration files can be set by the ``pipeline_cfg_file`` and " "``event_pipeline_cfg_file`` options listed in the `Description of " "configuration options for api table `__ section in the " "OpenStack Configuration Reference respectively. Multiple pipelines can be " "defined in one pipeline configuration file." msgstr "" #: ../telemetry-data-collection.rst:590 msgid "The meter pipeline definition looks like:" msgstr "" #: ../telemetry-data-collection.rst:610 msgid "" "The interval parameter in the sources section should be defined in seconds. " "It determines the polling cadence of sample injection into the pipeline, " "where samples are produced under the direct control of an agent." msgstr "" #: ../telemetry-data-collection.rst:615 msgid "" "There are several ways to define the list of meters for a pipeline source. " "The list of valid meters can be found in :ref:`telemetry-measurements`. " "There is a possibility to define all the meters, or just included or " "excluded meters, with which a source should operate:" msgstr "" #: ../telemetry-data-collection.rst:620 msgid "" "To include all meters, use the ``*`` wildcard symbol. It is highly advisable " "to select only the meters that you intend on using to avoid flooding the " "metering database with unused data." msgstr "" #: ../telemetry-data-collection.rst:624 msgid "To define the list of meters, use either of the following:" msgstr "" #: ../telemetry-data-collection.rst:626 msgid "To define the list of included meters, use the ``meter_name`` syntax." msgstr "" #: ../telemetry-data-collection.rst:629 msgid "To define the list of excluded meters, use the ``!meter_name`` syntax." msgstr "" #: ../telemetry-data-collection.rst:632 msgid "" "For meters, which have variants identified by a complex name field, use the " "wildcard symbol to select all, for example, for ``instance:m1.tiny``, use " "``instance:\\*``." 
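As an illustration of the matching rules above, the following sketch re-implements them in Python. This is not ceilometer's actual matching code; the function name and the use of shell-style globbing are assumptions of this example:

```python
from fnmatch import fnmatch

def meter_matches(meter_name, meters):
    """Illustrative sketch of pipeline source meter matching.

    `meters` follows the rules described above: the '*' wildcard,
    plain names for inclusion, '!name' for exclusion, and patterns
    such as 'instance:*' for meters with complex name variants.
    """
    excluded = [m[1:] for m in meters if m.startswith('!')]
    included = [m for m in meters if not m.startswith('!')]
    # Exclusions always win over inclusions.
    if any(fnmatch(meter_name, pattern) for pattern in excluded):
        return False
    # A pure-exclusion list implicitly includes everything else.
    if not included:
        return True
    return any(fnmatch(meter_name, pattern) for pattern in included)

print(meter_matches('instance:m1.tiny', ['instance:*']))       # True
print(meter_matches('cpu', ['*', '!disk.read.bytes']))         # True
print(meter_matches('disk.read.bytes', ['!disk.read.bytes']))  # False
```

Note how the wildcard and exclusion forms compose, which is why the combinations listed below are the only valid ones.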
msgstr "" #: ../telemetry-data-collection.rst:638 msgid "" "Please be aware that we do not have any duplication check between pipelines " "and if you add a meter to multiple pipelines then it is assumed the " "duplication is intentional and may be stored multiple times according to the " "specified sinks." msgstr "" #: ../telemetry-data-collection.rst:643 msgid "The above definition methods can be used in the following combinations:" msgstr "" #: ../telemetry-data-collection.rst:645 msgid "Use only the wildcard symbol." msgstr "" #: ../telemetry-data-collection.rst:647 msgid "Use the list of included meters." msgstr "" #: ../telemetry-data-collection.rst:649 msgid "Use the list of excluded meters." msgstr "" #: ../telemetry-data-collection.rst:651 msgid "Use wildcard symbol with the list of excluded meters." msgstr "" #: ../telemetry-data-collection.rst:655 msgid "" "At least one of the above variations should be included in the meters " "section. Included and excluded meters cannot co-exist in the same pipeline. " "Wildcard and included meters cannot co-exist in the same pipeline definition " "section." msgstr "" #: ../telemetry-data-collection.rst:660 msgid "" "The optional resources section of a pipeline source allows a static list of " "resource URLs to be configured for polling." msgstr "" #: ../telemetry-data-collection.rst:663 msgid "" "The transformers section of a pipeline sink provides the possibility to add " "a list of transformer definitions. 
The available transformers are:" msgstr "" #: ../telemetry-data-collection.rst:667 msgid "Name of transformer" msgstr "" #: ../telemetry-data-collection.rst:667 msgid "Reference name for configuration" msgstr "" #: ../telemetry-data-collection.rst:669 msgid "Accumulator" msgstr "" #: ../telemetry-data-collection.rst:669 msgid "accumulator" msgstr "" #: ../telemetry-data-collection.rst:671 msgid "Aggregator" msgstr "" #: ../telemetry-data-collection.rst:671 msgid "aggregator" msgstr "" #: ../telemetry-data-collection.rst:673 msgid "Arithmetic" msgstr "" #: ../telemetry-data-collection.rst:673 msgid "arithmetic" msgstr "" #: ../telemetry-data-collection.rst:675 msgid "Rate of change" msgstr "" #: ../telemetry-data-collection.rst:675 msgid "rate\\_of\\_change" msgstr "" #: ../telemetry-data-collection.rst:677 msgid "Unit conversion" msgstr "" #: ../telemetry-data-collection.rst:677 msgid "unit\\_conversion" msgstr "" #: ../telemetry-data-collection.rst:679 msgid "delta" msgstr "" #: ../telemetry-data-collection.rst:682 msgid "" "The publishers section contains the list of publishers, where the samples " "data should be sent after the possible transformations." msgstr "" #: ../telemetry-data-collection.rst:685 msgid "Similarly, the event pipeline definition looks like:" msgstr "" #: ../telemetry-data-collection.rst:701 msgid "The event filter uses the same filtering logic as the meter pipeline." msgstr "" #: ../telemetry-data-collection.rst:706 msgid "Transformers" msgstr "" #: ../telemetry-data-collection.rst:708 msgid "The definition of transformers can contain the following fields:" msgstr "" #: ../telemetry-data-collection.rst:711 msgid "Name of the transformer." msgstr "" #: ../telemetry-data-collection.rst:714 msgid "Parameters of the transformer." 
msgstr "" #: ../telemetry-data-collection.rst:714 msgid "parameters" msgstr "" #: ../telemetry-data-collection.rst:716 msgid "" "The parameters section can contain transformer-specific fields, like source " "and target fields with different subfields in case of the rate of change, " "which depends on the implementation of the transformer." msgstr "" #: ../telemetry-data-collection.rst:720 msgid "" "In the case of the transformer that creates the ``cpu_util`` meter, the " "definition looks like:" msgstr "" #: ../telemetry-data-collection.rst:734 msgid "" "The rate of change transformer generates the ``cpu_util`` meter from the " "sample values of the ``cpu`` counter, which represents cumulative CPU " "time in nanoseconds. The transformer definition above defines a scale factor " "(for nanoseconds and multiple CPUs), which is applied before the " "transformation derives a sequence of gauge samples with unit ``%``, from " "sequential values of the ``cpu`` meter." msgstr "" #: ../telemetry-data-collection.rst:741 msgid "" "The definition for the disk I/O rate, which is also generated by the rate of " "change transformer:" msgstr "" #: ../telemetry-data-collection.rst:759 msgid "**Unit conversion transformer**" msgstr "" #: ../telemetry-data-collection.rst:761 msgid "" "Transformer to apply a unit conversion. It takes the volume of the meter and " "multiplies it with the given ``scale`` expression. It also supports " "``map_from`` and ``map_to``, like the rate of change transformer." msgstr "" #: ../telemetry-data-collection.rst:765 msgid "Sample configuration:" msgstr "" #: ../telemetry-data-collection.rst:777 msgid "With ``map_from`` and ``map_to``:" msgstr "" #: ../telemetry-data-collection.rst:793 msgid "**Aggregator transformer**" msgstr "" #: ../telemetry-data-collection.rst:795 msgid "" "A transformer that sums up the incoming samples until enough samples have " "come in or a timeout has been reached."
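The summing behavior of the aggregator transformer can be sketched as follows. This is an illustrative Python model, not the actual ceilometer implementation; the class name and the sample dictionaries are invented for this example, and only the size-based flush is modeled, not the timeout:

```python
class AggregatorSketch:
    """Sketch of the aggregator transformer semantics described above.

    Sums sample volumes and flushes once `size` samples have been
    aggregated.  The flushed sample keeps the first user_id ('first'
    policy) and discards resource_metadata ('drop' policy).
    """

    def __init__(self, size):
        self.size = size
        self.samples = []

    def handle(self, sample):
        self.samples.append(sample)
        if len(self.samples) >= self.size:
            return self.flush()
        return None  # still accumulating

    def flush(self):
        batch, self.samples = self.samples, []
        return {
            'volume': sum(s['volume'] for s in batch),
            'user_id': batch[0]['user_id'],  # 'first' policy
            'resource_metadata': None,       # 'drop' policy
        }

agg = AggregatorSketch(size=3)
agg.handle({'volume': 1, 'user_id': 'u1'})
agg.handle({'volume': 2, 'user_id': 'u2'})
out = agg.handle({'volume': 4, 'user_id': 'u1'})
print(out['volume'], out['user_id'])  # 7 u1
```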
msgstr "" #: ../telemetry-data-collection.rst:798 msgid "" "Timeout can be specified with the ``retention_time`` option. If we want to " "flush the aggregation after a set number of samples have been aggregated, we " "can specify the ``size`` parameter." msgstr "" #: ../telemetry-data-collection.rst:802 msgid "" "The volume of the created sample is the sum of the volumes of samples that " "came into the transformer. Samples can be aggregated by the attributes " "``project_id``, ``user_id`` and ``resource_metadata``. To aggregate by the " "chosen attributes, specify them in the configuration and set which value of " "the attribute to take for the new sample (``first`` to take the first " "sample's attribute, ``last`` to take the last sample's attribute, and " "``drop`` to discard the attribute)." msgstr "" #: ../telemetry-data-collection.rst:810 msgid "" "To aggregate 60s worth of samples by ``resource_metadata`` and keep the " "``resource_metadata`` of the latest received sample:" msgstr "" #: ../telemetry-data-collection.rst:821 msgid "" "To aggregate each 15 samples by ``user_id`` and ``resource_metadata`` and " "keep the ``user_id`` of the first received sample and drop the " "``resource_metadata``:" msgstr "" #: ../telemetry-data-collection.rst:834 msgid "**Accumulator transformer**" msgstr "" #: ../telemetry-data-collection.rst:836 msgid "" "This transformer simply caches the samples until enough samples have arrived " "and then flushes them all down the pipeline at once:" msgstr "" #: ../telemetry-data-collection.rst:846 msgid "**Multi meter arithmetic transformer**" msgstr "" #: ../telemetry-data-collection.rst:848 msgid "" "This transformer enables us to perform arithmetic calculations over one or " "more meters and/or their metadata, for example:" msgstr "" #: ../telemetry-data-collection.rst:855 msgid "" "A new sample is created with the properties described in the ``target`` " "section of the transformer's configuration. 
The sample's volume is the " "result of the provided expression. The calculation is performed on samples " "from the same resource." msgstr "" #: ../telemetry-data-collection.rst:862 msgid "The calculation is limited to meters with the same interval." msgstr "" #: ../telemetry-data-collection.rst:864 ../telemetry-data-collection.rst:901 msgid "Example configuration:" msgstr "" #: ../telemetry-data-collection.rst:877 msgid "" "To demonstrate the use of metadata, here is the implementation of a silly " "meter that shows average CPU time per core:" msgstr "" #: ../telemetry-data-collection.rst:893 msgid "" "Expression evaluation gracefully handles NaNs and exceptions. In such a " "case, it does not create a new sample but only logs a warning." msgstr "" #: ../telemetry-data-collection.rst:896 msgid "**Delta transformer**" msgstr "" #: ../telemetry-data-collection.rst:898 msgid "" "This transformer calculates the change between two sample datapoints of a " "resource. It can be configured to capture only the positive growth deltas." msgstr "" #: ../telemetry-data-collection.rst:915 msgid "Meter definitions" msgstr "" #: ../telemetry-data-collection.rst:916 msgid "" "The Telemetry service collects a subset of the meters by filtering " "notifications emitted by other OpenStack services. Starting with the Liberty " "release, you can find the meter definitions in a separate configuration " "file, called ``ceilometer/meter/data/meter.yaml``. This enables operators/" "administrators to add new meters to the Telemetry project by updating the " "``meter.yaml`` file without any need for additional code changes." msgstr "" #: ../telemetry-data-collection.rst:925 msgid "" "The ``meter.yaml`` file should be modified with care. Unless intended, do " "not remove any existing meter definitions from the file. Also, the collected " "meters can differ in some cases from what is referenced in the documentation."
msgstr "" #: ../telemetry-data-collection.rst:930 msgid "A standard meter definition looks like:" msgstr "" #: ../telemetry-data-collection.rst:944 msgid "" "The definition above shows a simple meter definition with some fields, from " "which ``name``, ``event_type``, ``type``, ``unit``, and ``volume`` are " "required. If there is a match on the event type, samples are generated for " "the meter." msgstr "" #: ../telemetry-data-collection.rst:949 msgid "" "If you take a look at the ``meter.yaml`` file, it contains the sample " "definitions for all the meters that Telemetry is collecting from " "notifications. The value of each field is specified by using json path in " "order to find the right value from the notification message. In order to be " "able to specify the right field, you need to be aware of the format of the " "consumed notification. The values that need to be searched in the " "notification message are set with a json path starting with ``$.`` For " "instance, if you need the ``size`` information from the payload, you can " "define it like ``$.payload.size``." msgstr "" #: ../telemetry-data-collection.rst:959 msgid "" "A notification message may contain multiple meters. You can use ``*`` in the " "meter definition to capture all the meters and generate samples " "respectively. You can use wildcards as shown in the following example:" msgstr "" #: ../telemetry-data-collection.rst:976 msgid "" "In the above example, the ``name`` field is a json path that matches a list " "of meter names defined in the notification message." msgstr "" #: ../telemetry-data-collection.rst:979 msgid "" "You can even use complex operations on json paths. In the following example, " "the ``volume`` and ``resource_id`` fields perform arithmetic and string " "concatenation:" msgstr "" #: ../telemetry-data-collection.rst:994 msgid "" "You can use the ``timedelta`` plug-in to evaluate the difference in seconds " "between two ``datetime`` fields from one notification."
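What the ``timedelta`` plug-in computes can be sketched as follows. This is illustrative only; the helper name and the ISO 8601 timestamp format are assumptions of this sketch, not the plug-in's code:

```python
from datetime import datetime

def timedelta_seconds(start, end, fmt='%Y-%m-%dT%H:%M:%S'):
    """Difference in seconds between two datetime strings, as the
    timedelta plug-in is described to do for two datetime fields of
    one notification.  The timestamp format is an assumption."""
    t0 = datetime.strptime(start, fmt)
    t1 = datetime.strptime(end, fmt)
    return (t1 - t0).total_seconds()

# 5 minutes 30 seconds between the two timestamps:
print(timedelta_seconds('2016-05-04T06:21:00', '2016-05-04T06:26:30'))  # 330.0
```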
msgstr "" #: ../telemetry-data-collection.rst:1011 msgid "" "You will find some existence meters in the ``meter.yaml``. These meters have " "a ``volume`` as ``1`` and are at the bottom of the yaml file with a note " "suggesting that these will be removed in the Mitaka release." msgstr "" #: ../telemetry-data-collection.rst:1015 msgid "For example, the meter definition for existence meters is as follows:" msgstr "" #: ../telemetry-data-collection.rst:1031 msgid "" "These meters are not loaded by default. To load these meters, flip the " "``disable_non_metric_meters`` option in the ``ceilometer.conf`` file." msgstr "" #: ../telemetry-data-collection.rst:1036 msgid "Block Storage audit script setup to get notifications" msgstr "" #: ../telemetry-data-collection.rst:1038 msgid "" "If you want to collect OpenStack Block Storage notifications on demand, you " "can use :command:`cinder-volume-usage-audit` from OpenStack Block Storage. " "This script becomes available when you install OpenStack Block Storage, so " "you can use it without any specific settings and you don't need to " "authenticate to access the data. To use it, you must run this command in the " "following format:" msgstr "" #: ../telemetry-data-collection.rst:1050 msgid "" "This script outputs what volumes or snapshots were created, deleted, or " "existed in a given period of time and some information about these volumes " "or snapshots. Information about the existence and size of volumes and " "snapshots is stored in the Telemetry service. This data is also stored as an " "event, which is the recommended usage as it provides better indexing of data."
msgstr "" #: ../telemetry-data-collection.rst:1057 msgid "" "Using this script via cron you can get notifications periodically, for " "example, every 5 minutes::" msgstr "" #: ../telemetry-data-collection.rst:1065 msgid "Storing samples" msgstr "" #: ../telemetry-data-collection.rst:1067 msgid "" "The Telemetry service has a separate service that is responsible for " "persisting the data that comes from the pollsters or is received as " "notifications. The data can be stored in a file or a database back end, for " "which the list of supported databases can be found in :ref:`telemetry-" "supported-databases`. The data can also be sent to an external data store by " "using an HTTP dispatcher." msgstr "" #: ../telemetry-data-collection.rst:1074 msgid "" "The ``ceilometer-collector`` service receives the data as messages from the " "message bus of the configured AMQP service. It sends these datapoints " "without any modification to the configured target. The service has to run on " "a host machine from which it has access to the configured dispatcher." msgstr "" #: ../telemetry-data-collection.rst:1082 msgid "Multiple dispatchers can be configured for Telemetry at one time." msgstr "" #: ../telemetry-data-collection.rst:1084 msgid "" "Multiple ``ceilometer-collector`` processes can be run at a time. It is also " "supported to start multiple worker threads per collector process. The " "``collector_workers`` configuration option has to be modified in the " "`Collector section `__ of the ``ceilometer.conf`` " "configuration file." msgstr "" #: ../telemetry-data-collection.rst:1092 msgid "Database dispatcher" msgstr "" #: ../telemetry-data-collection.rst:1094 msgid "" "When the database dispatcher is configured as data store, you have the " "option to set a ``time_to_live`` option (ttl) for samples. By default the " "time to live value for samples is set to -1, which means that they are kept " "in the database forever." 
msgstr "" #: ../telemetry-data-collection.rst:1099 msgid "" "The time to live value is specified in seconds. Each sample has a time " "stamp, and the ``ttl`` value indicates that a sample will be deleted from " "the database when the number of seconds has elapsed since that sample " "reading was stamped. For example, if the time to live is set to 600, all " "samples older than 600 seconds will be purged from the database." msgstr "" #: ../telemetry-data-collection.rst:1106 msgid "" "Certain databases support native TTL expiration. In cases where this is not " "possible, the ``ceilometer-expirer`` command-line script can be used for " "this purpose. You can run it in a cron job, which helps to keep your " "database in a consistent state." msgstr "" #: ../telemetry-data-collection.rst:1111 msgid "The level of support differs depending on the configured back end:" msgstr "" #: ../telemetry-data-collection.rst:1114 msgid "TTL value support" msgstr "" #: ../telemetry-data-collection.rst:1116 msgid "MongoDB" msgstr "" #: ../telemetry-data-collection.rst:1116 msgid "" "MongoDB has native TTL support for deleting samples that are older than the " "configured ttl value." msgstr "" #: ../telemetry-data-collection.rst:1120 msgid "SQL-based back ends" msgstr "" #: ../telemetry-data-collection.rst:1120 msgid "" "``ceilometer-expirer`` has to be used for deleting samples and their related " "data from the database." msgstr "" #: ../telemetry-data-collection.rst:1124 msgid "HBase" msgstr "" #: ../telemetry-data-collection.rst:1124 msgid "" "Telemetry's HBase support does not include native TTL nor ``ceilometer-" "expirer`` support." msgstr "" #: ../telemetry-data-collection.rst:1128 msgid "DB2 NoSQL" msgstr "" #: ../telemetry-data-collection.rst:1128 msgid "DB2 NoSQL does not have native TTL nor ``ceilometer-expirer`` support."
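The expiration rule described above can be sketched as a simple check. This is an illustrative model, not the logic of ``ceilometer-expirer`` itself; the function name is invented for this example:

```python
import time

def is_expired(sample_timestamp, ttl, now=None):
    """Sketch of the ttl semantics: a sample is purged once `ttl`
    seconds have elapsed since its timestamp.  A ttl of -1 (the
    default) keeps samples in the database forever."""
    if ttl == -1:
        return False
    now = time.time() if now is None else now
    return (now - sample_timestamp) > ttl

# With ttl=600, a sample stamped 700 s ago is purged; 500 s ago is kept.
print(is_expired(0, 600, now=700))   # True
print(is_expired(0, 600, now=500))   # False
print(is_expired(0, -1, now=10**9))  # False
```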
msgstr "" #: ../telemetry-data-collection.rst:1134 msgid "HTTP dispatcher" msgstr "" #: ../telemetry-data-collection.rst:1136 msgid "" "The Telemetry service supports sending samples to an external HTTP target. " "The samples are sent without any modification. To set this option as the " "collector's target, the ``dispatcher`` has to be changed to ``http`` in the " "``ceilometer.conf`` configuration file. For the list of options that you " "need to set, see the `dispatcher_http section `__ " "in the OpenStack Configuration Reference." msgstr "" #: ../telemetry-data-collection.rst:1145 msgid "File dispatcher" msgstr "" #: ../telemetry-data-collection.rst:1147 msgid "" "You can store samples in a file by setting the ``dispatcher`` option in the " "``ceilometer.conf`` file. For the list of configuration options, see the " "`dispatcher_file section `__ in the OpenStack " "Configuration Reference." msgstr "" #: ../telemetry-data-collection.rst:1154 msgid "Gnocchi dispatcher" msgstr "" #: ../telemetry-data-collection.rst:1156 msgid "" "The Telemetry service supports sending the metering data to the Gnocchi " "back end through the gnocchi dispatcher. To set this option as the target, " "change the ``dispatcher`` to ``gnocchi`` in the ``ceilometer.conf`` " "configuration file." msgstr "" #: ../telemetry-data-collection.rst:1161 msgid "" "For the list of options that you need to set, see the `dispatcher_gnocchi " "section `__ in the OpenStack Configuration " "Reference." msgstr "" #: ../telemetry-data-retrieval.rst:3 msgid "Data retrieval" msgstr "" #: ../telemetry-data-retrieval.rst:5 msgid "" "The Telemetry service offers several mechanisms from which the persisted " "data can be accessed. As described in :ref:`telemetry-system-architecture` " "and in :ref:`telemetry-data-collection`, the collected information can be " "stored in one or more database back ends, which are hidden by the Telemetry " "RESTful API."
msgstr "" #: ../telemetry-data-retrieval.rst:12 msgid "" "It is highly recommended not to access the database directly and read or " "modify any data in it. The API layer hides all the changes in the actual " "database schema and provides a standard interface to expose the samples, " "alarms and so forth." msgstr "" #: ../telemetry-data-retrieval.rst:18 msgid "Telemetry v2 API" msgstr "" #: ../telemetry-data-retrieval.rst:20 msgid "" "The Telemetry service provides a RESTful API, from which the collected " "samples and all the related information can be retrieved, like the list of " "meters, alarm definitions and so forth." msgstr "" #: ../telemetry-data-retrieval.rst:24 msgid "" "The Telemetry API URL can be retrieved from the service catalog provided by " "OpenStack Identity, which is populated during the installation process. The " "API access needs a valid token and proper permission to retrieve data, as " "described in :ref:`telemetry-users-roles-tenants`." msgstr "" #: ../telemetry-data-retrieval.rst:29 msgid "" "Further information about the available API endpoints can be found in the " "`Telemetry API Reference `__." msgstr "" #: ../telemetry-data-retrieval.rst:34 msgid "Query" msgstr "" #: ../telemetry-data-retrieval.rst:36 msgid "" "The API provides some additional functionalities, like querying the " "collected data set. For the samples and alarms API endpoints, both simple " "and complex query styles are available, whereas for the other endpoints only " "simple queries are supported." msgstr "" #: ../telemetry-data-retrieval.rst:41 msgid "" "After validating the query parameters, the processing is done on the " "database side in the case of most database back ends in order to achieve " "better performance." 
msgstr "" #: ../telemetry-data-retrieval.rst:45 msgid "**Simple query**" msgstr "" #: ../telemetry-data-retrieval.rst:47 msgid "" "Many of the API endpoints accept a query filter argument, which should be a " "list of data structures that consist of the following items:" msgstr "" #: ../telemetry-data-retrieval.rst:50 msgid "``field``" msgstr "" #: ../telemetry-data-retrieval.rst:52 msgid "``op``" msgstr "" #: ../telemetry-data-retrieval.rst:54 msgid "``value``" msgstr "" #: ../telemetry-data-retrieval.rst:56 msgid "``type``" msgstr "" #: ../telemetry-data-retrieval.rst:58 msgid "" "Regardless of the endpoint on which the filter is applied, it will always " "target the fields of the `Sample type `__." msgstr "" #: ../telemetry-data-retrieval.rst:62 msgid "" "Several fields of the API endpoints accept shorter names than the ones " "defined in the reference. The API will do the transformation internally and " "return the output with the fields that are listed in the `API reference " "`__. The " "fields are the following:" msgstr "" #: ../telemetry-data-retrieval.rst:68 msgid "``project_id``: project" msgstr "" #: ../telemetry-data-retrieval.rst:70 msgid "``resource_id``: resource" msgstr "" #: ../telemetry-data-retrieval.rst:72 msgid "``user_id``: user" msgstr "" #: ../telemetry-data-retrieval.rst:74 msgid "" "When a filter argument contains multiple constraints of the above form, a " "logical ``AND`` relation between them is implied." msgstr "" #: ../telemetry-data-retrieval.rst:79 msgid "**Complex query**" msgstr "" #: ../telemetry-data-retrieval.rst:81 msgid "" "The filter expressions of the complex query feature operate on the fields of " "``Sample``, ``Alarm`` and ``AlarmChange`` types. 
The following comparison " "operators are supported:" msgstr "" #: ../telemetry-data-retrieval.rst:85 msgid "``=``" msgstr "" #: ../telemetry-data-retrieval.rst:87 msgid "``!=``" msgstr "" #: ../telemetry-data-retrieval.rst:89 msgid "``<``" msgstr "" #: ../telemetry-data-retrieval.rst:91 msgid "``<=``" msgstr "" #: ../telemetry-data-retrieval.rst:93 msgid "``>``" msgstr "" #: ../telemetry-data-retrieval.rst:95 msgid "``>=``" msgstr "" #: ../telemetry-data-retrieval.rst:97 msgid "The following logical operators can be used:" msgstr "" #: ../telemetry-data-retrieval.rst:99 msgid "``and``" msgstr "" #: ../telemetry-data-retrieval.rst:101 msgid "``or``" msgstr "" #: ../telemetry-data-retrieval.rst:103 msgid "``not``" msgstr "" #: ../telemetry-data-retrieval.rst:107 msgid "" "The ``not`` operator has different behavior in MongoDB and in the SQLAlchemy-" "based database engines. If the ``not`` operator is applied on a non existent " "metadata field then the result depends on the database engine. In case of " "MongoDB, it will return every sample as the ``not`` operator is evaluated " "true for every sample where the given field does not exist. On the other " "hand the SQL-based database engine will return an empty result because of " "the underlying ``join`` operation." msgstr "" #: ../telemetry-data-retrieval.rst:116 msgid "" "Complex query supports specifying a list of ``orderby`` expressions. This " "means that the result of the query can be ordered based on the field names " "provided in this list. When multiple keys are defined for the ordering, " "these will be applied sequentially in the order of the specification. The " "second expression will be applied on the groups for which the values of the " "first expression are the same. The ordering can be ascending or descending." msgstr "" #: ../telemetry-data-retrieval.rst:124 msgid "The number of returned items can be bounded using the ``limit`` option." 
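As a sketch of how such a complex query might be assembled client-side, the following builds the ``filter``, ``orderby`` and ``limit`` fields with the operators described above. The exact request encoding is an assumption of this example; consult the Telemetry v2 Web API Reference for the authoritative format:

```python
import json

# Hypothetical complex-query request body: samples with
# counter_name 'cpu' and positive volume, newest first, at most six.
# Field names follow the Sample type; the JSON-string encoding of
# filter/orderby is an assumption of this sketch.
query = {
    "filter": json.dumps(
        {"and": [{"=": {"counter_name": "cpu"}},
                 {">": {"counter_volume": 0}}]}),
    "orderby": json.dumps([{"timestamp": "desc"}]),
    "limit": 6,
}
print(query["limit"])  # 6
```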
msgstr "" #: ../telemetry-data-retrieval.rst:126 msgid "The ``filter``, ``orderby`` and ``limit`` fields are optional." msgstr "" #: ../telemetry-data-retrieval.rst:130 msgid "" "As opposed to the simple query, complex query is available via a separate " "API endpoint. For more information see the `Telemetry v2 Web API Reference " "`__." msgstr "" #: ../telemetry-data-retrieval.rst:135 msgid "Statistics" msgstr "" #: ../telemetry-data-retrieval.rst:137 msgid "" "The sample data can be used in various ways for several purposes, like " "billing or profiling. In external systems the data is often used in the form " "of aggregated statistics. The Telemetry API provides several built-in " "functions to make some basic calculations available without any additional " "coding." msgstr "" #: ../telemetry-data-retrieval.rst:143 msgid "Telemetry supports the following statistics and aggregation functions:" msgstr "" #: ../telemetry-data-retrieval.rst:146 msgid "Average of the sample volumes over each period." msgstr "" #: ../telemetry-data-retrieval.rst:146 msgid "``avg``" msgstr "" #: ../telemetry-data-retrieval.rst:149 msgid "" "Count of distinct values in each period identified by a key specified as the " "parameter of this aggregate function. The supported parameter values are:" msgstr "" #: ../telemetry-data-retrieval.rst:155 msgid "``resource_id``" msgstr "" #: ../telemetry-data-retrieval.rst:157 msgid "``cardinality``" msgstr "" #: ../telemetry-data-retrieval.rst:161 msgid "The ``aggregate.param`` option is required." msgstr "" #: ../telemetry-data-retrieval.rst:164 msgid "Number of samples in each period." msgstr "" #: ../telemetry-data-retrieval.rst:164 msgid "``count``" msgstr "" #: ../telemetry-data-retrieval.rst:167 msgid "Maximum of the sample volumes in each period." msgstr "" #: ../telemetry-data-retrieval.rst:167 msgid "``max``" msgstr "" #: ../telemetry-data-retrieval.rst:170 msgid "Minimum of the sample volumes in each period." 
msgstr "" #: ../telemetry-data-retrieval.rst:170 msgid "``min``" msgstr "" #: ../telemetry-data-retrieval.rst:173 msgid "Standard deviation of the sample volumes in each period." msgstr "" #: ../telemetry-data-retrieval.rst:173 msgid "``stddev``" msgstr "" #: ../telemetry-data-retrieval.rst:176 msgid "Sum of the sample volumes over each period." msgstr "" #: ../telemetry-data-retrieval.rst:176 msgid "``sum``" msgstr "" #: ../telemetry-data-retrieval.rst:178 msgid "" "The simple query and the statistics functionality can be used together in a " "single API request." msgstr "" #: ../telemetry-data-retrieval.rst:182 msgid "Telemetry command-line client and SDK" msgstr "" #: ../telemetry-data-retrieval.rst:184 msgid "" "The Telemetry service provides a command-line client, which provides access " "to the collected data, as well as to the alarm definition and retrieval " "options. The client uses the Telemetry RESTful API in order to execute the " "requested operations." msgstr "" #: ../telemetry-data-retrieval.rst:189 msgid "" "To be able to use the :command:`ceilometer` command, the python-" "ceilometerclient package needs to be installed and configured properly. For " "details about the installation process, see the `Telemetry chapter `__ in the " "OpenStack Installation Guide." msgstr "" #: ../telemetry-data-retrieval.rst:197 msgid "" "The Telemetry service captures the user-visible resource usage data. " "Therefore, the database will not contain any data without the existence of " "these resources, like VM images in the OpenStack Image service." msgstr "" #: ../telemetry-data-retrieval.rst:202 msgid "" "Similarly to other OpenStack command-line clients, the ``ceilometer`` client " "uses OpenStack Identity for authentication. The proper credentials and :" "option:`--auth_url` parameter have to be defined via command line parameters " "or environment variables."
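The statistics functions listed above can be illustrated with a small client-side sketch. This is illustrative only, not how the API computes them server-side; whether Telemetry uses the population or sample standard deviation is not stated here, so ``pstdev`` is an assumption of this example:

```python
import statistics

def period_stats(volumes):
    """Sketch of the built-in aggregation functions, applied to the
    sample volumes of one period (cardinality is omitted, as it
    operates on attribute values rather than volumes)."""
    return {
        'avg': statistics.mean(volumes),
        'count': len(volumes),
        'max': max(volumes),
        'min': min(volumes),
        'stddev': statistics.pstdev(volumes),  # population stddev assumed
        'sum': sum(volumes),
    }

print(period_stats([2, 4, 6])['avg'])  # 4
print(period_stats([2, 4, 6])['sum'])  # 12
```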
msgstr "" #: ../telemetry-data-retrieval.rst:207 msgid "" "This section provides some examples without the aim of completeness. These " "commands can be used, for instance, for validating an installation of " "Telemetry." msgstr "" #: ../telemetry-data-retrieval.rst:211 msgid "" "To retrieve the list of collected meters, the following command should be " "used:" msgstr "" #: ../telemetry-data-retrieval.rst:231 msgid "" "The :command:`ceilometer` command was run with ``admin`` rights, which means " "that all the data is accessible in the database. For more information about " "access rights, see :ref:`telemetry-users-roles-tenants`. As can be seen in " "the above example, there are two VM instances existing in the system, as " "there are VM instance related meters at the top of the result list. The " "existence of these meters does not indicate that these instances are running " "at the time of the request. The result contains the currently collected " "meters per resource, in an ascending order based on the name of the meter." msgstr "" #: ../telemetry-data-retrieval.rst:240 msgid "" "Samples are collected for each meter that is present in the list of meters, " "except in case of instances that are not running or deleted from the " "OpenStack Compute database. If an instance no longer exists and there is a " "``time_to_live`` value set in the ``ceilometer.conf`` configuration file, " "then a group of samples are deleted in each expiration cycle. When the last " "sample is deleted for a meter, the database can be cleaned up by running " "``ceilometer-expirer`` and the meter will not be present in the list above " "anymore. For more information about the expiration procedure, see :ref:" "`telemetry-storing-samples`." msgstr "" #: ../telemetry-data-retrieval.rst:250 msgid "" "The Telemetry API supports simple query on the meter endpoint. 
The query " "functionality has the following syntax:" msgstr "" #: ../telemetry-data-retrieval.rst:257 msgid "" "The following command needs to be invoked to request the meters of one VM " "instance:" msgstr "" #: ../telemetry-data-retrieval.rst:284 msgid "" "As it was described above, the whole set of samples can be retrieved that " "are stored for a meter or filtering the result set by using one of the " "available query types. The request for all the samples of the ``cpu`` meter " "without any additional filtering looks like the following:" msgstr "" #: ../telemetry-data-retrieval.rst:304 msgid "" "The result set of the request contains the samples for both instances " "ordered by the timestamp field in the default descending order." msgstr "" #: ../telemetry-data-retrieval.rst:307 msgid "" "The simple query makes it possible to retrieve only a subset of the " "collected samples. The following command can be executed to request the " "``cpu`` samples of only one of the VM instances:" msgstr "" #: ../telemetry-data-retrieval.rst:327 msgid "" "As it can be seen on the output above, the result set contains samples for " "only one instance of the two." msgstr "" #: ../telemetry-data-retrieval.rst:330 msgid "" "The :command:`ceilometer query-samples` command is used to execute rich " "queries. This command accepts the following parameters:" msgstr "" #: ../telemetry-data-retrieval.rst:334 msgid "" "Contains the filter expression for the query in the form of: ``{complex_op: " "[{simple_op: {field_name: value}}]}``." msgstr "" #: ../telemetry-data-retrieval.rst:335 msgid "``--filter``" msgstr "" #: ../telemetry-data-retrieval.rst:338 msgid "" "Contains the list of ``orderby`` expressions in the form of: ``[{field_name: " "direction}, {field_name: direction}]``." msgstr "" #: ../telemetry-data-retrieval.rst:339 msgid "``--orderby``" msgstr "" #: ../telemetry-data-retrieval.rst:342 msgid "Specifies the maximum number of samples to return." 
msgstr "" #: ../telemetry-data-retrieval.rst:342 msgid "``--limit``" msgstr "" #: ../telemetry-data-retrieval.rst:344 msgid "" "For more information about complex queries, see :ref:`Complex query `." msgstr "" #: ../telemetry-data-retrieval.rst:347 msgid "" "As the complex query functionality provides the possibility of using complex " "operators, it is possible to retrieve a subset of samples for a given VM " "instance. To request the first six samples for the ``cpu`` and ``disk." "read.bytes`` meters, the following command should be invoked:" msgstr "" #: ../telemetry-data-retrieval.rst:368 msgid "" "Ceilometer also captures data as events, which represent the state of a " "resource. Refer to ``/telemetry-events`` for more information regarding " "events." msgstr "" #: ../telemetry-data-retrieval.rst:372 msgid "" "To retrieve a list of recent events that occurred in the system, the " "following command can be executed:" msgstr "" #: ../telemetry-data-retrieval.rst:406 msgid "" "In Liberty, the data returned corresponds to the role and user. Non-admin " "users will only see events that are scoped to them. Admin users will see " "all events related to the project they administer as well as all unscoped " "events." msgstr "" #: ../telemetry-data-retrieval.rst:411 msgid "" "Similar to querying meters, additional filter parameters can be given to " "retrieve specific events:" msgstr "" #: ../telemetry-data-retrieval.rst:447 msgid "" "As of the Liberty release, the number of items returned will be restricted " "to the value defined by ``default_api_return_limit`` in the ``ceilometer." "conf`` configuration file. Alternatively, the value can be set per query by " "passing the ``limit`` option in the request." 
msgstr "" #: ../telemetry-data-retrieval.rst:454 msgid "Telemetry Python bindings" msgstr "" #: ../telemetry-data-retrieval.rst:456 msgid "" "The command-line client library provides Python bindings that make it " "possible to use the Telemetry Python API directly from Python programs." msgstr "" #: ../telemetry-data-retrieval.rst:459 msgid "" "The first step in setting up the client is to create a client instance with " "the proper credentials:" msgstr "" #: ../telemetry-data-retrieval.rst:467 msgid "" "The ``VERSION`` parameter can be ``1`` or ``2``, specifying the API version " "to be used." msgstr "" #: ../telemetry-data-retrieval.rst:470 msgid "The method calls look like the following:" msgstr "" #: ../telemetry-data-retrieval.rst:480 msgid "" "For further details about the python-ceilometerclient package, see the " "`Python bindings to the OpenStack Ceilometer API `__ reference." msgstr "" #: ../telemetry-data-retrieval.rst:488 msgid "Publishers" msgstr "" #: ../telemetry-data-retrieval.rst:490 msgid "" "The Telemetry service provides several transport methods to forward the data " "collected to the ``ceilometer-collector`` service or to an external system. " "The consumers of this data are widely different: monitoring systems, for " "which data loss is acceptable, and billing systems, which require reliable " "data transportation. Telemetry provides methods to fulfill the requirements " "of both kinds of systems, as described below." msgstr "" #: ../telemetry-data-retrieval.rst:498 msgid "" "The publisher component makes it possible to persist the data into storage " "through the message bus or to send it to one or more external consumers. One " "chain can contain multiple publishers." 
msgstr "" #: ../telemetry-data-retrieval.rst:502 msgid "" "To solve the above-mentioned problem, the notion of multi-publisher can be " "configured for each datapoint within the Telemetry service, allowing the " "same technical meter or event to be published multiple times to multiple " "destinations, each potentially using a different transport." msgstr "" #: ../telemetry-data-retrieval.rst:507 msgid "" "Publishers can be specified in the ``publishers`` section for each pipeline " "(for further details about pipelines see :ref:`data-collection-and-" "processing`) that is defined in the `pipeline.yaml `__ file." msgstr "" #: ../telemetry-data-retrieval.rst:514 msgid "The following publisher types are supported:" msgstr "" #: ../telemetry-data-retrieval.rst:517 msgid "" "It can be specified in the form of ``notifier://?" "option1=value1&option2=value2``. It emits data over AMQP using oslo." "messaging. This is the recommended method of publishing." msgstr "" #: ../telemetry-data-retrieval.rst:520 msgid "notifier" msgstr "" #: ../telemetry-data-retrieval.rst:523 msgid "" "It can be specified in the form of ``rpc://?option1=value1&option2=value2``. " "It emits metering data over lossy AMQP. This method is synchronous and may " "experience performance issues. This publisher is deprecated in Liberty in " "favor of the notifier publisher." msgstr "" #: ../telemetry-data-retrieval.rst:527 msgid "rpc" msgstr "" #: ../telemetry-data-retrieval.rst:530 msgid "" "It can be specified in the form of ``udp://<host>:<port>/``. It emits " "metering data over UDP." msgstr "" #: ../telemetry-data-retrieval.rst:531 msgid "udp" msgstr "" #: ../telemetry-data-retrieval.rst:534 msgid "" "It can be specified in the form of ``file://path?" "option1=value1&option2=value2``. This publisher records metering data into a " "file." 
msgstr "" #: ../telemetry-data-retrieval.rst:536 msgid "file" msgstr "" #: ../telemetry-data-retrieval.rst:540 msgid "" "If a file name and location are not specified, this publisher does not log " "any meters; instead, it logs a warning message in the configured log file " "for Telemetry." msgstr "" #: ../telemetry-data-retrieval.rst:545 msgid "" "It can be specified in the form of: ``kafka://kafka_broker_ip:" "kafka_broker_port?topic=kafka_topic&option1=value1``." msgstr "" #: ../telemetry-data-retrieval.rst:549 msgid "This publisher sends metering data to a Kafka broker." msgstr "" #: ../telemetry-data-retrieval.rst:549 msgid "kafka" msgstr "" #: ../telemetry-data-retrieval.rst:553 msgid "" "If the topic parameter is missing, this publisher sends metering data under " "the topic name ``ceilometer``. When the port number is not specified, this " "publisher uses 9092 as the broker's port." msgstr "" #: ../telemetry-data-retrieval.rst:558 msgid "" "The following options are available for ``rpc`` and ``notifier``. The " "``policy`` option can also be used by the ``kafka`` publisher:" msgstr "" #: ../telemetry-data-retrieval.rst:562 msgid "" "Its value is 1. It is used for publishing the samples on an additional " "``metering_topic.sample_name`` topic queue besides the default " "``metering_topic`` queue." msgstr "" #: ../telemetry-data-retrieval.rst:564 msgid "``per_meter_topic``" msgstr "" #: ../telemetry-data-retrieval.rst:567 msgid "" "It is used for configuring the behavior for the case when the publisher " "fails to send the samples. The possible predefined values are the following:" msgstr "" #: ../telemetry-data-retrieval.rst:572 msgid "Used for waiting and blocking until the samples have been sent." msgstr "" #: ../telemetry-data-retrieval.rst:572 msgid "default" msgstr "" #: ../telemetry-data-retrieval.rst:575 msgid "Used for dropping the samples that failed to be sent." 
msgstr "" #: ../telemetry-data-retrieval.rst:575 msgid "drop" msgstr "" #: ../telemetry-data-retrieval.rst:578 msgid "" "Used for creating an in-memory queue and retrying to send the samples on the " "queue on the next samples publishing period (the queue length can be " "configured with ``max_queue_length``, where 1024 is the default value)." msgstr "" #: ../telemetry-data-retrieval.rst:581 msgid "``policy``" msgstr "" #: ../telemetry-data-retrieval.rst:581 msgid "queue" msgstr "" #: ../telemetry-data-retrieval.rst:583 msgid "" "The following option is additionally available for the ``notifier`` " "publisher:" msgstr "" #: ../telemetry-data-retrieval.rst:586 msgid "" "The topic name of queue to publish to. Setting this will override the " "default topic defined by ``metering_topic`` and ``event_topic`` options. " "This option can be used to support multiple consumers. Support for this " "feature was added in Kilo." msgstr "" #: ../telemetry-data-retrieval.rst:589 msgid "``topic``" msgstr "" #: ../telemetry-data-retrieval.rst:591 msgid "The following options are available for the ``file`` publisher:" msgstr "" #: ../telemetry-data-retrieval.rst:594 msgid "" "When this option is greater than zero, it will cause a rollover. When the " "size is about to be exceeded, the file is closed and a new file is silently " "opened for output. If its value is zero, rollover never occurs." msgstr "" #: ../telemetry-data-retrieval.rst:597 msgid "``max_bytes``" msgstr "" #: ../telemetry-data-retrieval.rst:600 msgid "" "If this value is non-zero, an extension will be appended to the filename of " "the old log, as '.1', '.2', and so forth until the specified value is " "reached. The file that is written and contains the newest data is always the " "one that is specified without any extensions." 
msgstr "" #: ../telemetry-data-retrieval.rst:604 msgid "``backup_count``" msgstr "" #: ../telemetry-data-retrieval.rst:606 msgid "" "The default publisher is ``notifier``, without any additional options " "specified. A sample ``publishers`` section in the ``/etc/ceilometer/pipeline." "yaml`` looks like the following:" msgstr "" #: ../telemetry-events.rst:3 msgid "Events" msgstr "" #: ../telemetry-events.rst:5 msgid "" "In addition to meters, the Telemetry service collects events triggered " "within an OpenStack environment. This section provides a brief summary of " "the events format in the Telemetry service." msgstr "" #: ../telemetry-events.rst:9 msgid "" "While a sample represents a single, numeric datapoint within a time-series, " "an event is a broader concept that represents the state of a resource at a " "point in time. The state may be described using various data types including " "non-numeric data such as an instance's flavor. In general, events represent " "any action made in the OpenStack system." msgstr "" #: ../telemetry-events.rst:16 msgid "Event configuration" msgstr "" #: ../telemetry-events.rst:18 msgid "" "To enable the creation and storage of events in the Telemetry service " "``store_events`` option needs to be set to ``True``. For further " "configuration options, see the event section in the `OpenStack Configuration " "Reference `__." msgstr "" #: ../telemetry-events.rst:25 msgid "" "It is advisable to set ``disable_non_metric_meters`` to ``True`` when " "enabling events in the Telemetry service. The Telemetry service historically " "represented events as metering data, which may create duplication of data if " "both events and non-metric meters are enabled." 
msgstr "" #: ../telemetry-events.rst:32 msgid "Event structure" msgstr "" #: ../telemetry-events.rst:34 msgid "" "Events captured by the Telemetry service are represented by five key " "attributes:" msgstr "" #: ../telemetry-events.rst:38 msgid "" "A dotted string defining what event occurred, such as ``\"compute.instance." "resize.start\"``." msgstr "" #: ../telemetry-events.rst:39 ../telemetry-events.rst:128 msgid "event\\_type" msgstr "" #: ../telemetry-events.rst:42 msgid "A UUID for the event." msgstr "" #: ../telemetry-events.rst:42 msgid "message\\_id" msgstr "" #: ../telemetry-events.rst:45 msgid "A timestamp of when the event occurred in the system." msgstr "" #: ../telemetry-events.rst:45 msgid "generated" msgstr "" #: ../telemetry-events.rst:48 msgid "" "A flat mapping of key-value pairs that describe the event. The event's " "traits contain most of the details of the event. Traits are typed, and can " "be strings, integers, floats, or datetimes." msgstr "" #: ../telemetry-events.rst:50 ../telemetry-events.rst:132 msgid "traits" msgstr "" #: ../telemetry-events.rst:53 msgid "" "Mainly for auditing purposes, the full event message can be stored " "(unindexed) for future evaluation." msgstr "" # #-#-#-#-# telemetry-events.pot (Administrator Guide 0.9) #-#-#-#-# # #-#-#-#-# ts-eql-volume-size.pot (Administrator Guide 0.9) #-#-#-#-# #: ../telemetry-events.rst:54 ../ts-eql-volume-size.rst:112 msgid "raw" msgstr "" #: ../telemetry-events.rst:57 msgid "Event indexing" msgstr "" #: ../telemetry-events.rst:58 msgid "" "The general philosophy of notifications in OpenStack is to emit any and all " "data someone might need, and let the consumer filter out what they are not " "interested in. In order to make processing simpler and more efficient, the " "notifications are stored and processed within Ceilometer as events. The " "notification payload, which can be an arbitrarily complex JSON data " "structure, is converted to a flat set of key-value pairs. 
This conversion is " "specified by a config file." msgstr "" #: ../telemetry-events.rst:68 msgid "" "The event format is meant for efficient processing and querying. Storage of " "complete notifications for auditing purposes can be enabled by configuring " "``store_raw`` option." msgstr "" #: ../telemetry-events.rst:73 msgid "Event conversion" msgstr "" #: ../telemetry-events.rst:74 msgid "" "The conversion from notifications to events is driven by a configuration " "file defined by the ``definitions_cfg_file`` in the ``ceilometer.conf`` " "configuration file." msgstr "" #: ../telemetry-events.rst:78 msgid "" "This includes descriptions of how to map fields in the notification body to " "Traits, and optional plug-ins for doing any programmatic translations " "(splitting a string, forcing case)." msgstr "" #: ../telemetry-events.rst:82 msgid "" "The mapping of notifications to events is defined per event\\_type, which " "can be wildcarded. Traits are added to events if the corresponding fields in " "the notification exist and are non-null." msgstr "" #: ../telemetry-events.rst:88 msgid "" "The default definition file included with the Telemetry service contains a " "list of known notifications and useful traits. The mappings provided can be " "modified to include more or less data according to user requirements." msgstr "" #: ../telemetry-events.rst:93 msgid "" "If the definitions file is not present, a warning will be logged, but an " "empty set of definitions will be assumed. By default, any notifications that " "do not have a corresponding event definition in the definitions file will be " "converted to events with a set of minimal traits. This can be changed by " "setting the option ``drop_unmatched_notifications`` in the ``ceilometer." "conf`` file. If this is set to ``True``, any unmapped notifications will be " "dropped." 
msgstr "" #: ../telemetry-events.rst:101 msgid "" "The basic set of traits (all are TEXT type) that will be added to all events " "if the notification has the relevant data are: service (notification's " "publisher), tenant\\_id, and request\\_id. These do not have to be specified " "in the event definition; they are automatically added, but their definitions " "can be overridden for a given event\\_type." msgstr "" #: ../telemetry-events.rst:108 msgid "Event definitions format" msgstr "" #: ../telemetry-events.rst:110 msgid "" "The event definitions file is in YAML format. It consists of a list of event " "definitions, which are mappings. Order is significant; the list of " "definitions is scanned in reverse order to find a definition which matches " "the notification's event\\_type. That definition will be used to generate " "the event. The reverse ordering is done because it is common to want to have " "a more general wildcarded definition (such as ``compute.instance.*``) with a " "set of traits common to all of those events, followed by a few more specific " "event definitions that have all of the above traits, plus a few more." msgstr "" #: ../telemetry-events.rst:120 msgid "Each event definition is a mapping with two keys:" msgstr "" #: ../telemetry-events.rst:123 msgid "" "This is a list (or a string, which will be taken as a one-element list) of " "event\\_types this definition will handle. These can be wildcarded with Unix " "shell glob syntax. An exclusion listing (starting with a ``!``) will exclude " "any types listed from matching. If only exclusions are listed, the " "definition will match anything not matching the exclusions." msgstr "" #: ../telemetry-events.rst:131 msgid "" "This is a mapping; the keys are the trait names, and the values are trait " "definitions." 
msgstr "" #: ../telemetry-events.rst:134 msgid "Each trait definition is a mapping with the following keys:" msgstr "" #: ../telemetry-events.rst:137 msgid "" "A path specification for the field(s) in the notification you wish to " "extract for this trait. Specifications can be written to match multiple " "possible fields. By default the value will be the first such field. The " "paths can be specified with a dot syntax (``payload.host``). Square bracket " "syntax (``payload[host]``) is also supported. In either case, if the key for " "the field you are looking for contains special characters, like ``.``, it " "will need to be quoted (with double or single quotes): ``payload." "image_meta.'org.openstack__1__architecture'``. The syntax used for the field " "specification is a variant of `JSONPath `__" msgstr "" #: ../telemetry-events.rst:147 msgid "fields" msgstr "" #: ../telemetry-events.rst:150 msgid "" "(Optional) The data type for this trait. Valid options are: ``text``, " "``int``, ``float``, and ``datetime``. Defaults to ``text`` if not specified." msgstr "" #: ../telemetry-events.rst:152 msgid "type" msgstr "" #: ../telemetry-events.rst:155 msgid "" "(Optional) Used to execute simple programmatic conversions on the value in a " "notification field." msgstr "" #: ../telemetry-events.rst:155 msgid "plugin" msgstr "" #: ../telemetry-measurements.rst:5 msgid "Measurements" msgstr "" #: ../telemetry-measurements.rst:7 msgid "" "The Telemetry service collects meters within an OpenStack deployment. This " "section provides a brief summary of the format and origin of meters and " "also contains the list of available meters." msgstr "" #: ../telemetry-measurements.rst:11 msgid "" "Telemetry collects meters by polling the infrastructure elements and also by " "consuming the notifications emitted by other OpenStack services. For more " "information about the polling mechanism and notifications see :ref:" "`telemetry-data-collection`. 
There are several meters that are collected both " "by polling and by consuming notifications. The origin for each meter is " "listed in the tables below." msgstr "" #: ../telemetry-measurements.rst:20 msgid "" "You may need to configure Telemetry or other OpenStack services in order to " "be able to collect all the samples you need. For further information about " "configuration requirements see the `Telemetry chapter `__ in the OpenStack " "Installation Guide. Also check the `Telemetry manual installation `__ description." msgstr "" #: ../telemetry-measurements.rst:28 msgid "Telemetry uses the following meter types:" msgstr "" #: ../telemetry-measurements.rst:33 msgid "Increasing over time (instance hours)" msgstr "" #: ../telemetry-measurements.rst:35 msgid "Changing over time (bandwidth)" msgstr "" #: ../telemetry-measurements.rst:37 msgid "" "Discrete items (floating IPs, image uploads) and fluctuating values (disk I/" "O)" msgstr "" #: ../telemetry-measurements.rst:43 msgid "" "Telemetry provides the possibility to store metadata for samples. This " "metadata can be extended for OpenStack Compute and OpenStack Object Storage." msgstr "" #: ../telemetry-measurements.rst:47 msgid "" "In order to add additional metadata information to OpenStack Compute, you " "have two options to choose from. The first one is to specify them when you " "boot up a new instance. The additional information will be stored with the " "sample in the form of ``resource_metadata.user_metadata.*``. The new field " "should be defined by using the prefix ``metering.``. The modified boot " "command looks like the following:" msgstr "" #: ../telemetry-measurements.rst:58 msgid "" "The other option is to set the ``reserved_metadata_keys`` option to the list " "of metadata keys that you would like to be included in ``resource_metadata`` " "of the instance related samples that are collected for OpenStack Compute. " "This option is included in the ``DEFAULT`` section of the ``ceilometer." "conf`` configuration file." 
msgstr "" #: ../telemetry-measurements.rst:64 msgid "" "You might also specify headers whose values will be stored along with the " "sample data of OpenStack Object Storage. The additional information is also " "stored under ``resource_metadata``. The format of the new field is " "``resource_metadata.http_header_$name``, where ``$name`` is the name of the " "header with ``-`` replaced by ``_``." msgstr "" #: ../telemetry-measurements.rst:70 msgid "" "For specifying the new header, you need to set ``metadata_headers`` option " "under the ``[filter:ceilometer]`` section in ``proxy-server.conf`` under the " "``swift`` folder. You can use this additional data for instance to " "distinguish external and internal users." msgstr "" #: ../telemetry-measurements.rst:75 msgid "" "Measurements are grouped by services which are polled by Telemetry or emit " "notifications that this service consumes." msgstr "" #: ../telemetry-measurements.rst:80 msgid "" "The Telemetry service supports storing notifications as events. This " "functionality was added later, therefore the list of meters still contains " "existence type and other event related items. The proper way of using " "Telemetry is to configure it to use the event store and turn off the " "collection of the event related meters. For further information about events " "see `Events section `__ in the Telemetry documentation. For further information about how " "to turn on and off meters see :ref:`telemetry-pipeline-configuration`. " "Please also note that currently no migration is available to move the " "already existing event type samples to the event store." 
msgstr "" #: ../telemetry-measurements.rst:97 msgid "The following meters are collected for OpenStack Compute:" msgstr "" #: ../telemetry-measurements.rst:100 ../telemetry-measurements.rst:434 #: ../telemetry-measurements.rst:489 ../telemetry-measurements.rst:533 #: ../telemetry-measurements.rst:595 ../telemetry-measurements.rst:671 #: ../telemetry-measurements.rst:705 ../telemetry-measurements.rst:771 #: ../telemetry-measurements.rst:821 ../telemetry-measurements.rst:859 #: ../telemetry-measurements.rst:931 ../telemetry-measurements.rst:995 #: ../telemetry-measurements.rst:1082 ../telemetry-measurements.rst:1162 #: ../telemetry-measurements.rst:1257 ../telemetry-measurements.rst:1323 #: ../telemetry-measurements.rst:1374 ../telemetry-measurements.rst:1401 #: ../telemetry-measurements.rst:1425 ../telemetry-measurements.rst:1446 msgid "Origin" msgstr "" #: ../telemetry-measurements.rst:100 msgid "Support" msgstr "" #: ../telemetry-measurements.rst:100 ../telemetry-measurements.rst:434 #: ../telemetry-measurements.rst:489 ../telemetry-measurements.rst:533 #: ../telemetry-measurements.rst:595 ../telemetry-measurements.rst:671 #: ../telemetry-measurements.rst:705 ../telemetry-measurements.rst:771 #: ../telemetry-measurements.rst:821 ../telemetry-measurements.rst:859 #: ../telemetry-measurements.rst:931 ../telemetry-measurements.rst:995 #: ../telemetry-measurements.rst:1082 ../telemetry-measurements.rst:1162 #: ../telemetry-measurements.rst:1257 ../telemetry-measurements.rst:1323 #: ../telemetry-measurements.rst:1374 ../telemetry-measurements.rst:1401 #: ../telemetry-measurements.rst:1425 ../telemetry-measurements.rst:1446 msgid "Unit" msgstr "" #: ../telemetry-measurements.rst:102 ../telemetry-measurements.rst:436 #: ../telemetry-measurements.rst:673 ../telemetry-measurements.rst:707 #: ../telemetry-measurements.rst:773 ../telemetry-measurements.rst:933 #: ../telemetry-measurements.rst:997 ../telemetry-measurements.rst:1376 #: ../telemetry-measurements.rst:1448 msgid 
"**Meters added in the Icehouse release or earlier**" msgstr "" #: ../telemetry-measurements.rst:104 ../telemetry-measurements.rst:203 msgid "Existence of instance" msgstr "" #: ../telemetry-measurements.rst:104 ../telemetry-measurements.rst:108 #: ../telemetry-measurements.rst:125 ../telemetry-measurements.rst:136 #: ../telemetry-measurements.rst:143 ../telemetry-measurements.rst:150 #: ../telemetry-measurements.rst:157 ../telemetry-measurements.rst:171 #: ../telemetry-measurements.rst:179 ../telemetry-measurements.rst:187 #: ../telemetry-measurements.rst:196 ../telemetry-measurements.rst:239 #: ../telemetry-measurements.rst:248 ../telemetry-measurements.rst:257 #: ../telemetry-measurements.rst:266 msgid "Libvirt, Hyper-V, vSphere" msgstr "" #: ../telemetry-measurements.rst:104 ../telemetry-measurements.rst:108 #: ../telemetry-measurements.rst:203 ../telemetry-measurements.rst:208 #: ../telemetry-measurements.rst:351 msgid "Notific\\ ation, Pollster" msgstr "" #: ../telemetry-measurements.rst:104 ../telemetry-measurements.rst:108 msgid "inst\\ ance" msgstr "" #: ../telemetry-measurements.rst:104 ../telemetry-measurements.rst:203 msgid "instance" msgstr "" #: ../telemetry-measurements.rst:104 ../telemetry-measurements.rst:108 #: ../telemetry-measurements.rst:112 ../telemetry-measurements.rst:116 #: ../telemetry-measurements.rst:122 ../telemetry-measurements.rst:125 #: ../telemetry-measurements.rst:129 ../telemetry-measurements.rst:133 #: ../telemetry-measurements.rst:136 ../telemetry-measurements.rst:140 #: ../telemetry-measurements.rst:143 ../telemetry-measurements.rst:147 #: ../telemetry-measurements.rst:150 ../telemetry-measurements.rst:154 #: ../telemetry-measurements.rst:157 ../telemetry-measurements.rst:161 #: ../telemetry-measurements.rst:164 ../telemetry-measurements.rst:203 #: ../telemetry-measurements.rst:208 ../telemetry-measurements.rst:213 #: ../telemetry-measurements.rst:219 ../telemetry-measurements.rst:224 #: ../telemetry-measurements.rst:229 
../telemetry-measurements.rst:293 #: ../telemetry-measurements.rst:299 ../telemetry-measurements.rst:304 #: ../telemetry-measurements.rst:307 ../telemetry-measurements.rst:317 #: ../telemetry-measurements.rst:321 ../telemetry-measurements.rst:327 #: ../telemetry-measurements.rst:351 ../telemetry-measurements.rst:358 msgid "instance ID" msgstr "" #: ../telemetry-measurements.rst:108 ../telemetry-measurements.rst:208 #: ../telemetry-measurements.rst:351 msgid "Existence of instance (OpenStack types)" msgstr "" #: ../telemetry-measurements.rst:108 msgid "instance:\\ " msgstr "" #: ../telemetry-measurements.rst:112 ../telemetry-measurements.rst:122 #: ../telemetry-measurements.rst:129 ../telemetry-measurements.rst:133 #: ../telemetry-measurements.rst:140 ../telemetry-measurements.rst:147 #: ../telemetry-measurements.rst:154 ../telemetry-measurements.rst:161 #: ../telemetry-measurements.rst:164 ../telemetry-measurements.rst:167 #: ../telemetry-measurements.rst:175 ../telemetry-measurements.rst:183 #: ../telemetry-measurements.rst:192 ../telemetry-measurements.rst:235 #: ../telemetry-measurements.rst:244 ../telemetry-measurements.rst:253 #: ../telemetry-measurements.rst:262 ../telemetry-measurements.rst:358 msgid "Libvirt, Hyper-V" msgstr "" #: ../telemetry-measurements.rst:112 ../telemetry-measurements.rst:116 #: ../telemetry-measurements.rst:213 ../telemetry-measurements.rst:293 #: ../telemetry-measurements.rst:299 msgid "MB" msgstr "" #: ../telemetry-measurements.rst:112 ../telemetry-measurements.rst:129 #: ../telemetry-measurements.rst:161 ../telemetry-measurements.rst:164 #: ../telemetry-measurements.rst:784 ../telemetry-measurements.rst:787 #: ../telemetry-measurements.rst:790 msgid "Notific\\ ation" msgstr "" #: ../telemetry-measurements.rst:112 msgid "Volume of RAM allocated to the instance" msgstr "" #: ../telemetry-measurements.rst:112 msgid "memory" msgstr "" #: ../telemetry-measurements.rst:116 ../telemetry-measurements.rst:122 #: 
../telemetry-measurements.rst:125 ../telemetry-measurements.rst:133 #: ../telemetry-measurements.rst:136 ../telemetry-measurements.rst:140 #: ../telemetry-measurements.rst:143 ../telemetry-measurements.rst:147 #: ../telemetry-measurements.rst:150 ../telemetry-measurements.rst:154 #: ../telemetry-measurements.rst:157 ../telemetry-measurements.rst:167 #: ../telemetry-measurements.rst:171 ../telemetry-measurements.rst:175 #: ../telemetry-measurements.rst:179 ../telemetry-measurements.rst:183 #: ../telemetry-measurements.rst:187 ../telemetry-measurements.rst:192 #: ../telemetry-measurements.rst:196 ../telemetry-measurements.rst:213 #: ../telemetry-measurements.rst:219 ../telemetry-measurements.rst:224 #: ../telemetry-measurements.rst:229 ../telemetry-measurements.rst:235 #: ../telemetry-measurements.rst:239 ../telemetry-measurements.rst:244 #: ../telemetry-measurements.rst:248 ../telemetry-measurements.rst:253 #: ../telemetry-measurements.rst:257 ../telemetry-measurements.rst:262 #: ../telemetry-measurements.rst:266 ../telemetry-measurements.rst:271 #: ../telemetry-measurements.rst:276 ../telemetry-measurements.rst:281 #: ../telemetry-measurements.rst:286 ../telemetry-measurements.rst:293 #: ../telemetry-measurements.rst:299 ../telemetry-measurements.rst:304 #: ../telemetry-measurements.rst:307 ../telemetry-measurements.rst:310 #: ../telemetry-measurements.rst:314 ../telemetry-measurements.rst:317 #: ../telemetry-measurements.rst:321 ../telemetry-measurements.rst:327 #: ../telemetry-measurements.rst:332 ../telemetry-measurements.rst:337 #: ../telemetry-measurements.rst:343 ../telemetry-measurements.rst:358 #: ../telemetry-measurements.rst:537 ../telemetry-measurements.rst:540 #: ../telemetry-measurements.rst:546 ../telemetry-measurements.rst:549 #: ../telemetry-measurements.rst:552 ../telemetry-measurements.rst:557 #: ../telemetry-measurements.rst:562 ../telemetry-measurements.rst:566 #: ../telemetry-measurements.rst:570 ../telemetry-measurements.rst:599 #: 
../telemetry-measurements.rst:602 ../telemetry-measurements.rst:605 #: ../telemetry-measurements.rst:608 ../telemetry-measurements.rst:611 #: ../telemetry-measurements.rst:614 ../telemetry-measurements.rst:617 #: ../telemetry-measurements.rst:620 ../telemetry-measurements.rst:623 #: ../telemetry-measurements.rst:626 ../telemetry-measurements.rst:629 #: ../telemetry-measurements.rst:632 ../telemetry-measurements.rst:636 #: ../telemetry-measurements.rst:639 ../telemetry-measurements.rst:643 #: ../telemetry-measurements.rst:647 ../telemetry-measurements.rst:651 #: ../telemetry-measurements.rst:656 ../telemetry-measurements.rst:661 #: ../telemetry-measurements.rst:775 ../telemetry-measurements.rst:778 #: ../telemetry-measurements.rst:781 ../telemetry-measurements.rst:795 #: ../telemetry-measurements.rst:798 ../telemetry-measurements.rst:825 #: ../telemetry-measurements.rst:827 ../telemetry-measurements.rst:830 #: ../telemetry-measurements.rst:833 ../telemetry-measurements.rst:838 #: ../telemetry-measurements.rst:841 ../telemetry-measurements.rst:999 #: ../telemetry-measurements.rst:1002 ../telemetry-measurements.rst:1005 #: ../telemetry-measurements.rst:1008 ../telemetry-measurements.rst:1011 #: ../telemetry-measurements.rst:1014 ../telemetry-measurements.rst:1017 #: ../telemetry-measurements.rst:1020 ../telemetry-measurements.rst:1023 #: ../telemetry-measurements.rst:1026 ../telemetry-measurements.rst:1029 #: ../telemetry-measurements.rst:1033 ../telemetry-measurements.rst:1037 #: ../telemetry-measurements.rst:1040 ../telemetry-measurements.rst:1043 #: ../telemetry-measurements.rst:1046 ../telemetry-measurements.rst:1049 #: ../telemetry-measurements.rst:1052 ../telemetry-measurements.rst:1055 #: ../telemetry-measurements.rst:1057 ../telemetry-measurements.rst:1060 #: ../telemetry-measurements.rst:1064 ../telemetry-measurements.rst:1067 #: ../telemetry-measurements.rst:1102 ../telemetry-measurements.rst:1106 #: ../telemetry-measurements.rst:1110 
../telemetry-measurements.rst:1114 #: ../telemetry-measurements.rst:1184 ../telemetry-measurements.rst:1188 #: ../telemetry-measurements.rst:1192 ../telemetry-measurements.rst:1196 #: ../telemetry-measurements.rst:1450 ../telemetry-measurements.rst:1452 msgid "Pollster" msgstr "" #: ../telemetry-measurements.rst:116 ../telemetry-measurements.rst:213 msgid "" "Volume of RAM used by the instance from the amount of its allocated memory" msgstr "" #: ../telemetry-measurements.rst:116 ../telemetry-measurements.rst:213 #: ../telemetry-measurements.rst:293 msgid "memory.\\ usage" msgstr "" #: ../telemetry-measurements.rst:116 msgid "vSphere" msgstr "" #: ../telemetry-measurements.rst:122 msgid "CPU time used" msgstr "" #: ../telemetry-measurements.rst:122 ../telemetry-measurements.rst:147 #: ../telemetry-measurements.rst:154 ../telemetry-measurements.rst:167 #: ../telemetry-measurements.rst:175 ../telemetry-measurements.rst:183 #: ../telemetry-measurements.rst:192 ../telemetry-measurements.rst:235 #: ../telemetry-measurements.rst:244 ../telemetry-measurements.rst:253 #: ../telemetry-measurements.rst:262 ../telemetry-measurements.rst:441 #: ../telemetry-measurements.rst:444 ../telemetry-measurements.rst:447 #: ../telemetry-measurements.rst:450 msgid "Cumu\\ lative" msgstr "" #: ../telemetry-measurements.rst:122 ../telemetry-measurements.rst:358 #: ../telemetry-measurements.rst:441 ../telemetry-measurements.rst:444 #: ../telemetry-measurements.rst:447 ../telemetry-measurements.rst:450 #: ../telemetry-measurements.rst:1060 msgid "ns" msgstr "" #: ../telemetry-measurements.rst:125 ../telemetry-measurements.rst:219 #: ../telemetry-measurements.rst:453 ../telemetry-measurements.rst:456 #: ../telemetry-measurements.rst:459 ../telemetry-measurements.rst:462 #: ../telemetry-measurements.rst:465 ../telemetry-measurements.rst:562 #: ../telemetry-measurements.rst:566 ../telemetry-measurements.rst:570 #: ../telemetry-measurements.rst:661 msgid "%" msgstr "" #: 
../telemetry-measurements.rst:125 ../telemetry-measurements.rst:219 msgid "Average CPU utilization" msgstr "" #: ../telemetry-measurements.rst:125 ../telemetry-measurements.rst:219 msgid "cpu_util" msgstr "" #: ../telemetry-measurements.rst:129 msgid "Number of virtual CPUs allocated to the instance" msgstr "" #: ../telemetry-measurements.rst:129 msgid "vcpu" msgstr "" #: ../telemetry-measurements.rst:129 msgid "vcpus" msgstr "" #: ../telemetry-measurements.rst:133 ../telemetry-measurements.rst:140 #: ../telemetry-measurements.rst:632 ../telemetry-measurements.rst:636 #: ../telemetry-measurements.rst:639 ../telemetry-measurements.rst:643 #: ../telemetry-measurements.rst:647 ../telemetry-measurements.rst:651 #: ../telemetry-measurements.rst:656 msgid "Cumul\\ ative" msgstr "" #: ../telemetry-measurements.rst:133 ../telemetry-measurements.rst:235 msgid "Number of read requests" msgstr "" #: ../telemetry-measurements.rst:133 msgid "disk.read\\ .requests" msgstr "" #: ../telemetry-measurements.rst:133 ../telemetry-measurements.rst:140 #: ../telemetry-measurements.rst:235 ../telemetry-measurements.rst:244 msgid "req\\ uest" msgstr "" #: ../telemetry-measurements.rst:136 ../telemetry-measurements.rst:239 msgid "Average rate of read requests" msgstr "" #: ../telemetry-measurements.rst:136 msgid "disk.read\\ .requests\\ .rate" msgstr "" #: ../telemetry-measurements.rst:136 ../telemetry-measurements.rst:143 #: ../telemetry-measurements.rst:239 ../telemetry-measurements.rst:248 msgid "requ\\ est/s" msgstr "" #: ../telemetry-measurements.rst:140 ../telemetry-measurements.rst:244 msgid "Number of write requests" msgstr "" #: ../telemetry-measurements.rst:140 msgid "disk.writ\\ e.requests" msgstr "" #: ../telemetry-measurements.rst:143 ../telemetry-measurements.rst:248 msgid "Average rate of write requests" msgstr "" #: ../telemetry-measurements.rst:143 msgid "disk.writ\\ e.request\\ s.rate" msgstr "" #: ../telemetry-measurements.rst:147 ../telemetry-measurements.rst:154 #: 
../telemetry-measurements.rst:167 ../telemetry-measurements.rst:175 #: ../telemetry-measurements.rst:253 ../telemetry-measurements.rst:262 #: ../telemetry-measurements.rst:317 ../telemetry-measurements.rst:321 #: ../telemetry-measurements.rst:327 ../telemetry-measurements.rst:332 #: ../telemetry-measurements.rst:337 ../telemetry-measurements.rst:343 #: ../telemetry-measurements.rst:632 ../telemetry-measurements.rst:636 #: ../telemetry-measurements.rst:692 ../telemetry-measurements.rst:695 #: ../telemetry-measurements.rst:778 ../telemetry-measurements.rst:784 #: ../telemetry-measurements.rst:787 ../telemetry-measurements.rst:798 #: ../telemetry-measurements.rst:827 ../telemetry-measurements.rst:841 #: ../telemetry-measurements.rst:984 ../telemetry-measurements.rst:1011 #: ../telemetry-measurements.rst:1014 ../telemetry-measurements.rst:1067 #: ../telemetry-measurements.rst:1110 ../telemetry-measurements.rst:1114 #: ../telemetry-measurements.rst:1192 ../telemetry-measurements.rst:1196 msgid "B" msgstr "" #: ../telemetry-measurements.rst:147 ../telemetry-measurements.rst:253 msgid "Volume of reads" msgstr "" #: ../telemetry-measurements.rst:147 msgid "disk.read\\ .bytes" msgstr "" #: ../telemetry-measurements.rst:150 ../telemetry-measurements.rst:224 #: ../telemetry-measurements.rst:257 msgid "Average rate of reads" msgstr "" #: ../telemetry-measurements.rst:150 ../telemetry-measurements.rst:157 #: ../telemetry-measurements.rst:171 ../telemetry-measurements.rst:179 #: ../telemetry-measurements.rst:224 ../telemetry-measurements.rst:229 #: ../telemetry-measurements.rst:257 ../telemetry-measurements.rst:266 #: ../telemetry-measurements.rst:271 ../telemetry-measurements.rst:276 msgid "B/s" msgstr "" #: ../telemetry-measurements.rst:150 ../telemetry-measurements.rst:224 msgid "disk.read\\ .bytes.\\ rate" msgstr "" #: ../telemetry-measurements.rst:154 ../telemetry-measurements.rst:262 msgid "Volume of writes" msgstr "" #: ../telemetry-measurements.rst:154 msgid "disk.writ\\ 
e.bytes" msgstr "" #: ../telemetry-measurements.rst:157 ../telemetry-measurements.rst:229 #: ../telemetry-measurements.rst:266 msgid "Average rate of writes" msgstr "" #: ../telemetry-measurements.rst:157 msgid "disk.writ\\ e.bytes.\\ rate" msgstr "" #: ../telemetry-measurements.rst:161 ../telemetry-measurements.rst:164 #: ../telemetry-measurements.rst:712 ../telemetry-measurements.rst:720 msgid "GB" msgstr "" #: ../telemetry-measurements.rst:161 msgid "Size of root disk" msgstr "" #: ../telemetry-measurements.rst:161 msgid "disk.root\\ .size" msgstr "" #: ../telemetry-measurements.rst:164 msgid "Size of ephemeral disk" msgstr "" #: ../telemetry-measurements.rst:164 msgid "disk.ephe\\ meral.size" msgstr "" #: ../telemetry-measurements.rst:167 msgid "Number of incoming bytes" msgstr "" #: ../telemetry-measurements.rst:167 ../telemetry-measurements.rst:171 #: ../telemetry-measurements.rst:175 ../telemetry-measurements.rst:179 #: ../telemetry-measurements.rst:183 ../telemetry-measurements.rst:187 #: ../telemetry-measurements.rst:192 ../telemetry-measurements.rst:196 #: ../telemetry-measurements.rst:271 ../telemetry-measurements.rst:276 #: ../telemetry-measurements.rst:281 ../telemetry-measurements.rst:286 #: ../telemetry-measurements.rst:632 ../telemetry-measurements.rst:636 #: ../telemetry-measurements.rst:639 msgid "interface ID" msgstr "" #: ../telemetry-measurements.rst:167 msgid "network.\\ incoming.\\ bytes" msgstr "" #: ../telemetry-measurements.rst:171 ../telemetry-measurements.rst:271 msgid "Average rate of incoming bytes" msgstr "" #: ../telemetry-measurements.rst:171 ../telemetry-measurements.rst:271 msgid "network.\\ incoming.\\ bytes.rate" msgstr "" #: ../telemetry-measurements.rst:175 msgid "Number of outgoing bytes" msgstr "" #: ../telemetry-measurements.rst:175 msgid "network.\\ outgoing\\ .bytes" msgstr "" #: ../telemetry-measurements.rst:179 ../telemetry-measurements.rst:276 msgid "Average rate of outgoing bytes" msgstr "" #: 
../telemetry-measurements.rst:179 ../telemetry-measurements.rst:276 msgid "network.\\ outgoing.\\ bytes.rate" msgstr "" #: ../telemetry-measurements.rst:183 msgid "Number of incoming packets" msgstr "" #: ../telemetry-measurements.rst:183 msgid "network.\\ incoming\\ .packets" msgstr "" #: ../telemetry-measurements.rst:183 ../telemetry-measurements.rst:192 msgid "pac\\ ket" msgstr "" #: ../telemetry-measurements.rst:187 ../telemetry-measurements.rst:281 msgid "Average rate of incoming packets" msgstr "" #: ../telemetry-measurements.rst:187 msgid "network.\\ incoming\\ .packets\\ .rate" msgstr "" #: ../telemetry-measurements.rst:187 ../telemetry-measurements.rst:281 #: ../telemetry-measurements.rst:286 msgid "pack\\ et/s" msgstr "" #: ../telemetry-measurements.rst:192 msgid "Number of outgoing packets" msgstr "" #: ../telemetry-measurements.rst:192 msgid "network.\\ outgoing\\ .packets" msgstr "" #: ../telemetry-measurements.rst:196 ../telemetry-measurements.rst:286 msgid "Average rate of outgoing packets" msgstr "" #: ../telemetry-measurements.rst:196 msgid "network.\\ outgoing\\ .packets\\ .rate" msgstr "" #: ../telemetry-measurements.rst:196 msgid "pac\\ ket/s" msgstr "" #: ../telemetry-measurements.rst:201 msgid "**Meters added or hypervisor support changed in the Juno release**" msgstr "" #: ../telemetry-measurements.rst:203 ../telemetry-measurements.rst:208 #: ../telemetry-measurements.rst:219 ../telemetry-measurements.rst:224 #: ../telemetry-measurements.rst:229 ../telemetry-measurements.rst:271 #: ../telemetry-measurements.rst:276 ../telemetry-measurements.rst:281 #: ../telemetry-measurements.rst:286 ../telemetry-measurements.rst:293 #: ../telemetry-measurements.rst:351 msgid "Libvirt, Hyper-V, vSphere, XenAPI" msgstr "" #: ../telemetry-measurements.rst:203 ../telemetry-measurements.rst:208 #: ../telemetry-measurements.rst:351 msgid "ins\\ tance" msgstr "" #: ../telemetry-measurements.rst:208 ../telemetry-measurements.rst:351 msgid "instance\\ :" msgstr "" 
#: ../telemetry-measurements.rst:213 msgid "vSphere, XenAPI" msgstr "" #: ../telemetry-measurements.rst:229 msgid "disk.\\ write.\\ bytes.rate" msgstr "" #: ../telemetry-measurements.rst:235 ../telemetry-measurements.rst:239 #: ../telemetry-measurements.rst:244 ../telemetry-measurements.rst:248 #: ../telemetry-measurements.rst:253 ../telemetry-measurements.rst:257 #: ../telemetry-measurements.rst:262 ../telemetry-measurements.rst:266 #: ../telemetry-measurements.rst:310 ../telemetry-measurements.rst:314 #: ../telemetry-measurements.rst:332 ../telemetry-measurements.rst:337 #: ../telemetry-measurements.rst:343 ../telemetry-measurements.rst:608 #: ../telemetry-measurements.rst:611 msgid "disk ID" msgstr "" #: ../telemetry-measurements.rst:235 msgid "disk.dev\\ ice.read\\ .requests" msgstr "" #: ../telemetry-measurements.rst:239 msgid "disk.dev\\ ice.read\\ .requests\\ .rate" msgstr "" #: ../telemetry-measurements.rst:244 msgid "disk.dev\\ ice.write\\ .requests" msgstr "" #: ../telemetry-measurements.rst:248 msgid "disk.dev\\ ice.write\\ .requests\\ .rate" msgstr "" #: ../telemetry-measurements.rst:253 msgid "disk.dev\\ ice.read\\ .bytes" msgstr "" #: ../telemetry-measurements.rst:257 msgid "disk.dev\\ ice.read\\ .bytes .rate" msgstr "" #: ../telemetry-measurements.rst:262 msgid "disk.dev\\ ice.write\\ .bytes" msgstr "" #: ../telemetry-measurements.rst:266 msgid "disk.dev\\ ice.write\\ .bytes .rate" msgstr "" #: ../telemetry-measurements.rst:281 msgid "network.\\ incoming.\\ packets.\\ rate" msgstr "" #: ../telemetry-measurements.rst:286 msgid "network.\\ outgoing.\\ packets.\\ rate" msgstr "" #: ../telemetry-measurements.rst:291 msgid "**Meters added or hypervisor support changed in the Kilo release**" msgstr "" #: ../telemetry-measurements.rst:293 msgid "" "Volume of RAM used by the inst\\ ance from the amount of its allocated memory" msgstr "" #: ../telemetry-measurements.rst:299 ../telemetry-measurements.rst:317 #: ../telemetry-measurements.rst:321 
../telemetry-measurements.rst:327 #: ../telemetry-measurements.rst:332 ../telemetry-measurements.rst:337 #: ../telemetry-measurements.rst:343 msgid "Libvirt" msgstr "" #: ../telemetry-measurements.rst:299 msgid "Volume of RAM u\\ sed by the inst\\ ance on the phy\\ sical machine" msgstr "" #: ../telemetry-measurements.rst:299 msgid "memory.r\\ esident" msgstr "" #: ../telemetry-measurements.rst:304 msgid "Average disk la\\ tency" msgstr "" #: ../telemetry-measurements.rst:304 ../telemetry-measurements.rst:307 #: ../telemetry-measurements.rst:310 ../telemetry-measurements.rst:314 msgid "Hyper-V" msgstr "" #: ../telemetry-measurements.rst:304 msgid "disk.lat\\ ency" msgstr "" #: ../telemetry-measurements.rst:304 ../telemetry-measurements.rst:310 msgid "ms" msgstr "" #: ../telemetry-measurements.rst:307 msgid "Average disk io\\ ps" msgstr "" #: ../telemetry-measurements.rst:307 ../telemetry-measurements.rst:314 msgid "coun\\ t/s" msgstr "" #: ../telemetry-measurements.rst:307 msgid "disk.iop\\ s" msgstr "" #: ../telemetry-measurements.rst:310 msgid "Average disk la\\ tency per device" msgstr "" #: ../telemetry-measurements.rst:310 msgid "disk.dev\\ ice.late\\ ncy" msgstr "" #: ../telemetry-measurements.rst:314 msgid "Average disk io\\ ps per device" msgstr "" #: ../telemetry-measurements.rst:314 msgid "disk.dev\\ ice.iops" msgstr "" #: ../telemetry-measurements.rst:317 msgid "The amount of d\\ isk that the in\\ stance can see" msgstr "" #: ../telemetry-measurements.rst:317 msgid "disk.cap\\ acity" msgstr "" #: ../telemetry-measurements.rst:321 msgid "" "The amount of d\\ isk occupied by the instance o\\ n the host mach\\ ine" msgstr "" #: ../telemetry-measurements.rst:321 msgid "disk.all\\ ocation" msgstr "" #: ../telemetry-measurements.rst:327 msgid "The physical si\\ ze in bytes of the image conta\\ iner on the host" msgstr "" #: ../telemetry-measurements.rst:327 msgid "disk.usa\\ ge" msgstr "" #: ../telemetry-measurements.rst:332 msgid "The amount of d\\ isk per 
device that the instan\\ ce can see" msgstr "" #: ../telemetry-measurements.rst:332 msgid "disk.dev\\ ice.capa\\ city" msgstr "" #: ../telemetry-measurements.rst:337 msgid "" "The amount of d\\ isk per device occupied by the instance on th\\ e host " "machine" msgstr "" #: ../telemetry-measurements.rst:337 msgid "disk.dev\\ ice.allo\\ cation" msgstr "" #: ../telemetry-measurements.rst:343 msgid "" "The physical si\\ ze in bytes of the image conta\\ iner on the hos\\ t per " "device" msgstr "" #: ../telemetry-measurements.rst:343 msgid "disk.dev\\ ice.usag\\ e" msgstr "" #: ../telemetry-measurements.rst:349 msgid "**Meters deprecated in the Kilo release**" msgstr "" #: ../telemetry-measurements.rst:356 msgid "**Meters added in the Liberty release**" msgstr "" #: ../telemetry-measurements.rst:358 msgid "CPU time used s\\ ince previous d\\ atapoint" msgstr "" #: ../telemetry-measurements.rst:358 msgid "cpu.delta" msgstr "" #: ../telemetry-measurements.rst:367 msgid "" "The ``instance:`` meter can be replaced by using extra parameters in " "both the samples and statistics queries. Sample queries look like:" msgstr "" #: ../telemetry-measurements.rst:380 msgid "" "The Telemetry service supports creating new meters by using transformers. " "For more details about transformers, see :ref:`telemetry-transformers`. " "Among the meters gathered from libvirt and Hyper-V, there are a few that " "are generated from other meters. The list of meters that are created by " "using the ``rate_of_change`` transformer from the above table is the " "following:" msgstr "" #: ../telemetry-measurements.rst:387 msgid "cpu\\_util" msgstr "" #: ../telemetry-measurements.rst:389 msgid "disk.read.requests.rate" msgstr "" #: ../telemetry-measurements.rst:391 msgid "disk.write.requests.rate" msgstr "" #: ../telemetry-measurements.rst:393 msgid "disk.read.bytes.rate" msgstr "" #: ../telemetry-measurements.rst:395 msgid "disk.write.bytes.rate" msgstr "" #: ../telemetry-measurements.rst:397 msgid "disk.device.read.requests.rate" msgstr "" #: ../telemetry-measurements.rst:399 msgid "disk.device.write.requests.rate" msgstr "" #: ../telemetry-measurements.rst:401 msgid "disk.device.read.bytes.rate" msgstr "" #: ../telemetry-measurements.rst:403 msgid "disk.device.write.bytes.rate" msgstr "" #: ../telemetry-measurements.rst:405 msgid "network.incoming.bytes.rate" msgstr "" #: ../telemetry-measurements.rst:407 msgid "network.outgoing.bytes.rate" msgstr "" #: ../telemetry-measurements.rst:409 msgid "network.incoming.packets.rate" msgstr "" #: ../telemetry-measurements.rst:411 msgid "network.outgoing.packets.rate" msgstr "" #: ../telemetry-measurements.rst:415 msgid "" "To enable the libvirt ``memory.usage`` support, you need to install libvirt " "version 1.1.1+, QEMU version 1.5+, and you also need to prepare a suitable " "balloon driver in the image. This applies particularly to Windows guests; " "most modern Linux distributions already have it built in. Telemetry " "is not able to fetch the ``memory.usage`` samples without the image balloon " "driver." msgstr "" #: ../telemetry-measurements.rst:422 msgid "" "OpenStack Compute is capable of collecting ``CPU``-related meters from the " "compute host machines. To use this, you need to set the " "``compute_monitors`` option to ``ComputeDriverCPUMonitor`` in the ``nova." "conf`` configuration file. 
For further information see the Compute " "configuration section in the `Compute chapter `__ of the OpenStack " "Configuration Reference." msgstr "" #: ../telemetry-measurements.rst:430 msgid "" "The following host machine related meters are collected for OpenStack " "Compute:" msgstr "" #: ../telemetry-measurements.rst:438 msgid "CPU frequency" msgstr "" #: ../telemetry-measurements.rst:438 msgid "MHz" msgstr "" #: ../telemetry-measurements.rst:438 ../telemetry-measurements.rst:441 #: ../telemetry-measurements.rst:444 ../telemetry-measurements.rst:447 #: ../telemetry-measurements.rst:450 ../telemetry-measurements.rst:453 #: ../telemetry-measurements.rst:456 ../telemetry-measurements.rst:459 #: ../telemetry-measurements.rst:462 ../telemetry-measurements.rst:465 #: ../telemetry-measurements.rst:493 ../telemetry-measurements.rst:496 #: ../telemetry-measurements.rst:499 ../telemetry-measurements.rst:503 #: ../telemetry-measurements.rst:506 ../telemetry-measurements.rst:1378 #: ../telemetry-measurements.rst:1381 ../telemetry-measurements.rst:1384 #: ../telemetry-measurements.rst:1387 ../telemetry-measurements.rst:1390 #: ../telemetry-measurements.rst:1405 ../telemetry-measurements.rst:1410 #: ../telemetry-measurements.rst:1414 ../telemetry-measurements.rst:1429 #: ../telemetry-measurements.rst:1432 ../telemetry-measurements.rst:1435 msgid "Notification" msgstr "" #: ../telemetry-measurements.rst:438 msgid "compute.node.cpu.\\ frequency" msgstr "" #: ../telemetry-measurements.rst:438 ../telemetry-measurements.rst:441 #: ../telemetry-measurements.rst:444 ../telemetry-measurements.rst:447 #: ../telemetry-measurements.rst:450 ../telemetry-measurements.rst:453 #: ../telemetry-measurements.rst:456 ../telemetry-measurements.rst:459 #: ../telemetry-measurements.rst:462 ../telemetry-measurements.rst:465 #: ../telemetry-measurements.rst:537 ../telemetry-measurements.rst:540 #: ../telemetry-measurements.rst:546 ../telemetry-measurements.rst:549 #: 
../telemetry-measurements.rst:552 ../telemetry-measurements.rst:557 #: ../telemetry-measurements.rst:562 ../telemetry-measurements.rst:566 #: ../telemetry-measurements.rst:570 ../telemetry-measurements.rst:599 #: ../telemetry-measurements.rst:602 ../telemetry-measurements.rst:605 #: ../telemetry-measurements.rst:614 ../telemetry-measurements.rst:617 #: ../telemetry-measurements.rst:620 ../telemetry-measurements.rst:623 #: ../telemetry-measurements.rst:626 ../telemetry-measurements.rst:629 #: ../telemetry-measurements.rst:643 ../telemetry-measurements.rst:647 #: ../telemetry-measurements.rst:651 ../telemetry-measurements.rst:656 #: ../telemetry-measurements.rst:661 msgid "host ID" msgstr "" #: ../telemetry-measurements.rst:441 msgid "CPU kernel time" msgstr "" #: ../telemetry-measurements.rst:441 msgid "compute.node.cpu.\\ kernel.time" msgstr "" #: ../telemetry-measurements.rst:444 msgid "CPU idle time" msgstr "" #: ../telemetry-measurements.rst:444 msgid "compute.node.cpu.\\ idle.time" msgstr "" #: ../telemetry-measurements.rst:447 msgid "CPU user mode time" msgstr "" #: ../telemetry-measurements.rst:447 msgid "compute.node.cpu.\\ user.time" msgstr "" #: ../telemetry-measurements.rst:450 msgid "CPU I/O wait time" msgstr "" #: ../telemetry-measurements.rst:450 msgid "compute.node.cpu.\\ iowait.time" msgstr "" #: ../telemetry-measurements.rst:453 msgid "CPU kernel percentage" msgstr "" #: ../telemetry-measurements.rst:453 msgid "compute.node.cpu.\\ kernel.percent" msgstr "" #: ../telemetry-measurements.rst:456 msgid "CPU idle percentage" msgstr "" #: ../telemetry-measurements.rst:456 msgid "compute.node.cpu.\\ idle.percent" msgstr "" #: ../telemetry-measurements.rst:459 msgid "CPU user mode percentage" msgstr "" #: ../telemetry-measurements.rst:459 msgid "compute.node.cpu.\\ user.percent" msgstr "" #: ../telemetry-measurements.rst:462 msgid "CPU I/O wait percentage" msgstr "" #: ../telemetry-measurements.rst:462 msgid "compute.node.cpu.\\ iowait.percent" msgstr "" #: 
../telemetry-measurements.rst:465 msgid "CPU utilization" msgstr "" #: ../telemetry-measurements.rst:465 msgid "compute.node.cpu.\\ percent" msgstr "" #: ../telemetry-measurements.rst:474 msgid "" "Telemetry captures notifications that are emitted by the Bare metal service. " "The notifications originate from IPMI sensors that collect data from the " "host machine." msgstr "" #: ../telemetry-measurements.rst:480 msgid "" "The sensor data is not available in the Bare metal service by default. To " "enable the meters and configure this module to emit notifications about the " "measured values, see the `Installation Guide `__ for the Bare metal service." msgstr "" #: ../telemetry-measurements.rst:486 msgid "The following meters are recorded for the Bare metal service:" msgstr "" #: ../telemetry-measurements.rst:491 ../telemetry-measurements.rst:535 #: ../telemetry-measurements.rst:715 ../telemetry-measurements.rst:861 #: ../telemetry-measurements.rst:1084 ../telemetry-measurements.rst:1259 #: ../telemetry-measurements.rst:1325 ../telemetry-measurements.rst:1403 msgid "**Meters added in the Juno release**" msgstr "" #: ../telemetry-measurements.rst:493 ../telemetry-measurements.rst:496 msgid "Fan rotations per minute (RPM)" msgstr "" #: ../telemetry-measurements.rst:493 ../telemetry-measurements.rst:496 msgid "RPM" msgstr "" #: ../telemetry-measurements.rst:493 ../telemetry-measurements.rst:496 msgid "fan sensor" msgstr "" #: ../telemetry-measurements.rst:493 ../telemetry-measurements.rst:496 msgid "hardware.ipmi.fan" msgstr "" #: ../telemetry-measurements.rst:499 ../telemetry-measurements.rst:540 #: ../telemetry-measurements.rst:546 ../telemetry-measurements.rst:549 msgid "C" msgstr "" #: ../telemetry-measurements.rst:499 msgid "Temperature reading from sensor" msgstr "" #: ../telemetry-measurements.rst:499 msgid "hardware.ipmi\\ .temperature" msgstr "" #: ../telemetry-measurements.rst:499 msgid "temper\\ ature sensor" msgstr "" #: ../telemetry-measurements.rst:503 msgid "Current reading from sensor" msgstr "" #: ../telemetry-measurements.rst:503 ../telemetry-measurements.rst:537 #: ../telemetry-measurements.rst:1452 msgid "W" msgstr "" #: ../telemetry-measurements.rst:503 msgid "current sensor" msgstr "" #: ../telemetry-measurements.rst:503 msgid "hardware.ipmi\\ .current" msgstr "" #: ../telemetry-measurements.rst:506 msgid "V" msgstr "" #: ../telemetry-measurements.rst:506 msgid "Voltage reading from sensor" msgstr "" #: ../telemetry-measurements.rst:506 msgid "hardware.ipmi\\ .voltage" msgstr "" #: ../telemetry-measurements.rst:506 msgid "voltage sensor" msgstr "" #: ../telemetry-measurements.rst:511 msgid "IPMI-based meters" msgstr "" #: ../telemetry-measurements.rst:512 msgid "" "Another way of gathering IPMI-based data is to use IPMI sensors " "independently of the Bare metal service's components. The same meters as " "in :ref:`telemetry-bare-metal-service` can be fetched, except that the " "origin is ``Pollster`` instead of ``Notification``." msgstr "" #: ../telemetry-measurements.rst:517 msgid "" "You need to deploy the ceilometer-agent-ipmi on each IPMI-capable node in " "order to poll local sensor data. For further information about the IPMI " "agent, see :ref:`telemetry-ipmi-agent`." msgstr "" #: ../telemetry-measurements.rst:523 msgid "" "To avoid duplication of metering data and unnecessary load on the IPMI " "interface, do not deploy the IPMI agent on nodes that are managed by the " "Bare metal service and keep the ``conductor.send_sensor_data`` option set to " "``False`` in the ``ironic.conf`` configuration file."
msgstr "" #: ../telemetry-measurements.rst:529 msgid "" "Besides generic IPMI sensor data, the following Intel Node Manager meters " "are recorded from capable platforms:" msgstr "" #: ../telemetry-measurements.rst:537 msgid "Current power of the system" msgstr "" #: ../telemetry-measurements.rst:537 msgid "hardware.ipmi.node\\ .power" msgstr "" #: ../telemetry-measurements.rst:540 msgid "Current tempera\\ ture of the system" msgstr "" #: ../telemetry-measurements.rst:540 msgid "hardware.ipmi.node\\ .temperature" msgstr "" #: ../telemetry-measurements.rst:544 ../telemetry-measurements.rst:597 #: ../telemetry-measurements.rst:723 ../telemetry-measurements.rst:823 #: ../telemetry-measurements.rst:914 ../telemetry-measurements.rst:1118 #: ../telemetry-measurements.rst:1270 ../telemetry-measurements.rst:1335 #: ../telemetry-measurements.rst:1427 msgid "**Meters added in the Kilo release**" msgstr "" #: ../telemetry-measurements.rst:546 msgid "Inlet temperatu\\ re of the system" msgstr "" #: ../telemetry-measurements.rst:546 msgid "hardware.ipmi.node\\ .inlet_temperature" msgstr "" #: ../telemetry-measurements.rst:549 msgid "Outlet temperat\\ ure of the system" msgstr "" #: ../telemetry-measurements.rst:549 msgid "hardware.ipmi.node\\ .outlet_temperature" msgstr "" #: ../telemetry-measurements.rst:552 msgid "CFM" msgstr "" #: ../telemetry-measurements.rst:552 msgid "Volumetric airf\\ low of the syst\\ em, expressed as 1/10th of CFM" msgstr "" #: ../telemetry-measurements.rst:552 msgid "hardware.ipmi.node\\ .airflow" msgstr "" #: ../telemetry-measurements.rst:557 msgid "CUPS" msgstr "" #: ../telemetry-measurements.rst:557 msgid "CUPS (Compute Us\\ age Per Second) index data of the system" msgstr "" #: ../telemetry-measurements.rst:557 msgid "hardware.ipmi.node\\ .cups" msgstr "" #: ../telemetry-measurements.rst:562 msgid "CPU CUPS utiliz\\ ation of the system" msgstr "" #: ../telemetry-measurements.rst:562 msgid "hardware.ipmi.node\\ .cpu_util" msgstr "" #: ../telemetry-measurements.rst:566 msgid "Memory CUPS utilization of the system" msgstr "" #: ../telemetry-measurements.rst:566 msgid "hardware.ipmi.node\\ .mem_util" msgstr "" #: ../telemetry-measurements.rst:570 msgid "IO CUPS utilization of the system" msgstr "" #: ../telemetry-measurements.rst:570 msgid "hardware.ipmi.node\\ .io_util" msgstr "" #: ../telemetry-measurements.rst:578 msgid "Meters renamed in the Kilo release" msgstr "" #: ../telemetry-measurements.rst:580 msgid "**New Name**" msgstr "" #: ../telemetry-measurements.rst:580 msgid "**Original Name**" msgstr "" #: ../telemetry-measurements.rst:582 msgid "hardware.ipmi.node.inlet_temperature" msgstr "" #: ../telemetry-measurements.rst:582 msgid "hardware.ipmi.node.temperature" msgstr "" #: ../telemetry-measurements.rst:586 msgid "SNMP-based meters" msgstr "" #: ../telemetry-measurements.rst:588 msgid "" "Telemetry supports gathering SNMP-based generic host meters. To collect " "this data, you need to run snmpd on each target host."
msgstr "" #: ../telemetry-measurements.rst:591 msgid "" "The following meters are available for the host machines by using SNMP:" msgstr "" #: ../telemetry-measurements.rst:599 msgid "CPU load in the past 1 minute" msgstr "" #: ../telemetry-measurements.rst:599 msgid "hardware.cpu.load.\\ 1min" msgstr "" #: ../telemetry-measurements.rst:599 ../telemetry-measurements.rst:602 #: ../telemetry-measurements.rst:605 msgid "proc\\ ess" msgstr "" #: ../telemetry-measurements.rst:602 msgid "CPU load in the past 5 minutes" msgstr "" #: ../telemetry-measurements.rst:602 msgid "hardware.cpu.load.\\ 5min" msgstr "" #: ../telemetry-measurements.rst:605 msgid "CPU load in the past 10 minutes" msgstr "" #: ../telemetry-measurements.rst:605 msgid "hardware.cpu.load.\\ 10min" msgstr "" #: ../telemetry-measurements.rst:608 ../telemetry-measurements.rst:611 #: ../telemetry-measurements.rst:614 ../telemetry-measurements.rst:617 #: ../telemetry-measurements.rst:620 ../telemetry-measurements.rst:623 #: ../telemetry-measurements.rst:626 ../telemetry-measurements.rst:629 msgid "KB" msgstr "" #: ../telemetry-measurements.rst:608 msgid "Total disk size" msgstr "" #: ../telemetry-measurements.rst:608 msgid "hardware.disk.size\\ .total" msgstr "" #: ../telemetry-measurements.rst:611 msgid "Used disk size" msgstr "" #: ../telemetry-measurements.rst:611 msgid "hardware.disk.size\\ .used" msgstr "" #: ../telemetry-measurements.rst:614 msgid "Total physical memory size" msgstr "" #: ../telemetry-measurements.rst:614 msgid "hardware.memory.to\\ tal" msgstr "" #: ../telemetry-measurements.rst:617 msgid "Used physical m\\ emory size" msgstr "" #: ../telemetry-measurements.rst:617 msgid "hardware.memory.us\\ ed" msgstr "" #: ../telemetry-measurements.rst:620 msgid "Physical memory buffer size" msgstr "" #: ../telemetry-measurements.rst:620 msgid "hardware.memory.bu\\ ffer" msgstr "" #: ../telemetry-measurements.rst:623 msgid "Cached physical memory size" msgstr "" #: 
../telemetry-measurements.rst:623 msgid "hardware.memory.ca\\ ched" msgstr "" #: ../telemetry-measurements.rst:626 msgid "Total swap space size" msgstr "" #: ../telemetry-measurements.rst:626 msgid "hardware.memory.sw\\ ap.total" msgstr "" #: ../telemetry-measurements.rst:629 msgid "Available swap space size" msgstr "" #: ../telemetry-measurements.rst:629 msgid "hardware.memory.sw\\ ap.avail" msgstr "" #: ../telemetry-measurements.rst:632 msgid "Bytes received by network inte\\ rface" msgstr "" #: ../telemetry-measurements.rst:632 msgid "hardware.network.i\\ ncoming.bytes" msgstr "" #: ../telemetry-measurements.rst:636 msgid "Bytes sent by n\\ etwork interface" msgstr "" #: ../telemetry-measurements.rst:636 msgid "hardware.network.o\\ utgoing.bytes" msgstr "" #: ../telemetry-measurements.rst:639 msgid "Sending error o\\ f network inter\\ face" msgstr "" #: ../telemetry-measurements.rst:639 msgid "hardware.network.o\\ utgoing.errors" msgstr "" #: ../telemetry-measurements.rst:639 msgid "pack\\ et" msgstr "" #: ../telemetry-measurements.rst:643 msgid "Number of recei\\ ved datagrams" msgstr "" #: ../telemetry-measurements.rst:643 ../telemetry-measurements.rst:647 msgid "data\\ grams" msgstr "" #: ../telemetry-measurements.rst:643 msgid "hardware.network.i\\ p.incoming.datagra\\ ms" msgstr "" #: ../telemetry-measurements.rst:647 msgid "Number of sent datagrams" msgstr "" #: ../telemetry-measurements.rst:647 msgid "hardware.network.i\\ p.outgoing.datagra\\ ms" msgstr "" #: ../telemetry-measurements.rst:651 msgid "Aggregated numb\\ er of blocks re\\ ceived to block device" msgstr "" #: ../telemetry-measurements.rst:651 ../telemetry-measurements.rst:656 msgid "bloc\\ ks" msgstr "" #: ../telemetry-measurements.rst:651 msgid "hardware.system_st\\ ats.io.incoming.bl\\ ocks" msgstr "" #: ../telemetry-measurements.rst:656 msgid "Aggregated numb\\ er of blocks se\\ nt to block dev\\ ice" msgstr "" #: ../telemetry-measurements.rst:656 msgid "hardware.system_st\\ 
ats.io.outgoing.bl\\ ocks" msgstr "" #: ../telemetry-measurements.rst:661 msgid "CPU idle percen\\ tage" msgstr "" #: ../telemetry-measurements.rst:661 msgid "hardware.system_st\\ ats.cpu.idle" msgstr "" #: ../telemetry-measurements.rst:668 msgid "The following meters are collected for OpenStack Image service:" msgstr "" #: ../telemetry-measurements.rst:675 msgid "Existence of the image" msgstr "" #: ../telemetry-measurements.rst:675 ../telemetry-measurements.rst:679 #: ../telemetry-measurements.rst:974 ../telemetry-measurements.rst:1086 #: ../telemetry-measurements.rst:1090 ../telemetry-measurements.rst:1094 #: ../telemetry-measurements.rst:1098 ../telemetry-measurements.rst:1164 #: ../telemetry-measurements.rst:1168 ../telemetry-measurements.rst:1172 #: ../telemetry-measurements.rst:1176 ../telemetry-measurements.rst:1180 #: ../telemetry-measurements.rst:1261 ../telemetry-measurements.rst:1265 #: ../telemetry-measurements.rst:1290 ../telemetry-measurements.rst:1304 #: ../telemetry-measurements.rst:1327 ../telemetry-measurements.rst:1331 msgid "Notifica\\ tion, Po\\ llster" msgstr "" #: ../telemetry-measurements.rst:675 ../telemetry-measurements.rst:679 #: ../telemetry-measurements.rst:683 ../telemetry-measurements.rst:686 #: ../telemetry-measurements.rst:689 msgid "image" msgstr "" #: ../telemetry-measurements.rst:675 ../telemetry-measurements.rst:679 #: ../telemetry-measurements.rst:683 ../telemetry-measurements.rst:686 #: ../telemetry-measurements.rst:689 ../telemetry-measurements.rst:692 #: ../telemetry-measurements.rst:695 msgid "image ID" msgstr "" #: ../telemetry-measurements.rst:679 msgid "Size of the upl\\ oaded image" msgstr "" #: ../telemetry-measurements.rst:679 msgid "image.size" msgstr "" #: ../telemetry-measurements.rst:683 ../telemetry-measurements.rst:686 #: ../telemetry-measurements.rst:689 ../telemetry-measurements.rst:692 #: ../telemetry-measurements.rst:695 ../telemetry-measurements.rst:709 #: ../telemetry-measurements.rst:712 
../telemetry-measurements.rst:717 #: ../telemetry-measurements.rst:720 ../telemetry-measurements.rst:725 #: ../telemetry-measurements.rst:728 ../telemetry-measurements.rst:731 #: ../telemetry-measurements.rst:735 ../telemetry-measurements.rst:738 #: ../telemetry-measurements.rst:742 ../telemetry-measurements.rst:746 #: ../telemetry-measurements.rst:749 ../telemetry-measurements.rst:752 #: ../telemetry-measurements.rst:755 ../telemetry-measurements.rst:758 #: ../telemetry-measurements.rst:863 ../telemetry-measurements.rst:866 #: ../telemetry-measurements.rst:869 ../telemetry-measurements.rst:872 #: ../telemetry-measurements.rst:875 ../telemetry-measurements.rst:878 #: ../telemetry-measurements.rst:881 ../telemetry-measurements.rst:884 #: ../telemetry-measurements.rst:887 ../telemetry-measurements.rst:890 #: ../telemetry-measurements.rst:893 ../telemetry-measurements.rst:896 #: ../telemetry-measurements.rst:899 ../telemetry-measurements.rst:902 #: ../telemetry-measurements.rst:905 ../telemetry-measurements.rst:908 #: ../telemetry-measurements.rst:911 ../telemetry-measurements.rst:916 #: ../telemetry-measurements.rst:920 ../telemetry-measurements.rst:935 #: ../telemetry-measurements.rst:938 ../telemetry-measurements.rst:942 #: ../telemetry-measurements.rst:945 ../telemetry-measurements.rst:948 #: ../telemetry-measurements.rst:952 ../telemetry-measurements.rst:955 #: ../telemetry-measurements.rst:958 ../telemetry-measurements.rst:961 #: ../telemetry-measurements.rst:964 ../telemetry-measurements.rst:967 #: ../telemetry-measurements.rst:971 ../telemetry-measurements.rst:978 #: ../telemetry-measurements.rst:981 ../telemetry-measurements.rst:984 #: ../telemetry-measurements.rst:1120 ../telemetry-measurements.rst:1124 #: ../telemetry-measurements.rst:1128 ../telemetry-measurements.rst:1132 #: ../telemetry-measurements.rst:1136 ../telemetry-measurements.rst:1140 #: ../telemetry-measurements.rst:1144 ../telemetry-measurements.rst:1149 #: ../telemetry-measurements.rst:1200 
../telemetry-measurements.rst:1204 #: ../telemetry-measurements.rst:1208 ../telemetry-measurements.rst:1212 #: ../telemetry-measurements.rst:1216 ../telemetry-measurements.rst:1220 #: ../telemetry-measurements.rst:1224 ../telemetry-measurements.rst:1229 #: ../telemetry-measurements.rst:1234 ../telemetry-measurements.rst:1239 #: ../telemetry-measurements.rst:1272 ../telemetry-measurements.rst:1276 #: ../telemetry-measurements.rst:1280 ../telemetry-measurements.rst:1285 #: ../telemetry-measurements.rst:1294 ../telemetry-measurements.rst:1299 #: ../telemetry-measurements.rst:1308 ../telemetry-measurements.rst:1312 #: ../telemetry-measurements.rst:1337 ../telemetry-measurements.rst:1341 #: ../telemetry-measurements.rst:1345 ../telemetry-measurements.rst:1350 #: ../telemetry-measurements.rst:1355 ../telemetry-measurements.rst:1359 #: ../telemetry-measurements.rst:1364 msgid "Notifica\\ tion" msgstr "" #: ../telemetry-measurements.rst:683 msgid "Number of updat\\ es on the image" msgstr "" #: ../telemetry-measurements.rst:686 msgid "Number of uploa\\ ds on the image" msgstr "" #: ../telemetry-measurements.rst:689 msgid "Number of delet\\ es on the image" msgstr "" #: ../telemetry-measurements.rst:692 msgid "Image is downlo\\ aded" msgstr "" #: ../telemetry-measurements.rst:692 msgid "image.download" msgstr "" #: ../telemetry-measurements.rst:695 msgid "Image is served out" msgstr "" #: ../telemetry-measurements.rst:695 msgid "image.serve" msgstr "" #: ../telemetry-measurements.rst:702 msgid "The following meters are collected for OpenStack Block Storage:" msgstr "" #: ../telemetry-measurements.rst:709 msgid "Existence of the volume" msgstr "" #: ../telemetry-measurements.rst:709 ../telemetry-measurements.rst:725 #: ../telemetry-measurements.rst:728 ../telemetry-measurements.rst:731 #: ../telemetry-measurements.rst:735 ../telemetry-measurements.rst:738 #: ../telemetry-measurements.rst:742 ../telemetry-measurements.rst:752 #: ../telemetry-measurements.rst:755 
../telemetry-measurements.rst:758 msgid "volume" msgstr "" #: ../telemetry-measurements.rst:709 ../telemetry-measurements.rst:712 #: ../telemetry-measurements.rst:725 ../telemetry-measurements.rst:728 #: ../telemetry-measurements.rst:731 ../telemetry-measurements.rst:735 #: ../telemetry-measurements.rst:738 ../telemetry-measurements.rst:742 msgid "volume ID" msgstr "" #: ../telemetry-measurements.rst:712 msgid "Size of the vol\\ ume" msgstr "" #: ../telemetry-measurements.rst:712 msgid "volume.size" msgstr "" #: ../telemetry-measurements.rst:717 msgid "Existence of the snapshot" msgstr "" #: ../telemetry-measurements.rst:717 ../telemetry-measurements.rst:746 #: ../telemetry-measurements.rst:749 msgid "snapsh\\ ot" msgstr "" #: ../telemetry-measurements.rst:717 msgid "snapshot" msgstr "" #: ../telemetry-measurements.rst:717 ../telemetry-measurements.rst:720 #: ../telemetry-measurements.rst:746 ../telemetry-measurements.rst:749 msgid "snapshot ID" msgstr "" #: ../telemetry-measurements.rst:720 msgid "Size of the sna\\ pshot" msgstr "" #: ../telemetry-measurements.rst:720 msgid "snapshot.size" msgstr "" #: ../telemetry-measurements.rst:725 msgid "Creation of the volume" msgstr "" #: ../telemetry-measurements.rst:725 msgid "volume.create.(sta\\ rt|end)" msgstr "" #: ../telemetry-measurements.rst:728 msgid "Deletion of the volume" msgstr "" #: ../telemetry-measurements.rst:728 msgid "volume.delete.(sta\\ rt|end)" msgstr "" #: ../telemetry-measurements.rst:731 msgid "Update the name or description of the volume" msgstr "" #: ../telemetry-measurements.rst:731 msgid "volume.update.(sta\\ rt|end)" msgstr "" #: ../telemetry-measurements.rst:735 msgid "Update the size of the volume" msgstr "" #: ../telemetry-measurements.rst:735 msgid "volume.resize.(sta\\ rt|end)" msgstr "" #: ../telemetry-measurements.rst:738 msgid "Attaching the v\\ olume to an ins\\ tance" msgstr "" #: ../telemetry-measurements.rst:738 msgid "volume.attach.(sta\\ rt|end)" msgstr "" #: 
../telemetry-measurements.rst:742 msgid "Detaching the v\\ olume from an i\\ nstance" msgstr "" #: ../telemetry-measurements.rst:742 msgid "volume.detach.(sta\\ rt|end)" msgstr "" #: ../telemetry-measurements.rst:746 msgid "Creation of the snapshot" msgstr "" #: ../telemetry-measurements.rst:746 msgid "snapshot.create.(s\\ tart|end)" msgstr "" #: ../telemetry-measurements.rst:749 msgid "Deletion of the snapshot" msgstr "" #: ../telemetry-measurements.rst:749 msgid "snapshot.delete.(s\\ tart|end)" msgstr "" #: ../telemetry-measurements.rst:752 msgid "Creation of the volume backup" msgstr "" #: ../telemetry-measurements.rst:752 ../telemetry-measurements.rst:755 #: ../telemetry-measurements.rst:758 msgid "backup ID" msgstr "" #: ../telemetry-measurements.rst:752 msgid "volume.backup.crea\\ te.(start|end)" msgstr "" #: ../telemetry-measurements.rst:755 msgid "Deletion of the volume backup" msgstr "" #: ../telemetry-measurements.rst:755 msgid "volume.backup.dele\\ te.(start|end)" msgstr "" #: ../telemetry-measurements.rst:758 msgid "Restoration of the volume back\\ up" msgstr "" #: ../telemetry-measurements.rst:758 msgid "volume.backup.rest\\ ore.(start|end)" msgstr "" #: ../telemetry-measurements.rst:768 msgid "The following meters are collected for OpenStack Object Storage:" msgstr "" #: ../telemetry-measurements.rst:775 msgid "Number of objec\\ ts" msgstr "" #: ../telemetry-measurements.rst:775 ../telemetry-measurements.rst:795 #: ../telemetry-measurements.rst:825 ../telemetry-measurements.rst:838 msgid "object" msgstr "" #: ../telemetry-measurements.rst:775 ../telemetry-measurements.rst:778 #: ../telemetry-measurements.rst:781 ../telemetry-measurements.rst:784 #: ../telemetry-measurements.rst:787 ../telemetry-measurements.rst:790 #: ../telemetry-measurements.rst:825 ../telemetry-measurements.rst:827 #: ../telemetry-measurements.rst:830 ../telemetry-measurements.rst:833 msgid "storage ID" msgstr "" #: ../telemetry-measurements.rst:775 msgid "storage.objects" msgstr 
"" #: ../telemetry-measurements.rst:778 ../telemetry-measurements.rst:827 msgid "Total size of s\\ tored objects" msgstr "" #: ../telemetry-measurements.rst:778 msgid "storage.objects.si\\ ze" msgstr "" #: ../telemetry-measurements.rst:781 ../telemetry-measurements.rst:830 msgid "Number of conta\\ iners" msgstr "" #: ../telemetry-measurements.rst:781 msgid "conta\\ iner" msgstr "" #: ../telemetry-measurements.rst:781 msgid "storage.objects.co\\ ntainers" msgstr "" #: ../telemetry-measurements.rst:784 msgid "Number of incom\\ ing bytes" msgstr "" #: ../telemetry-measurements.rst:784 msgid "storage.objects.in\\ coming.bytes" msgstr "" #: ../telemetry-measurements.rst:787 msgid "Number of outgo\\ ing bytes" msgstr "" #: ../telemetry-measurements.rst:787 msgid "storage.objects.ou\\ tgoing.bytes" msgstr "" #: ../telemetry-measurements.rst:790 msgid "Number of API r\\ equests against OpenStack Obje\\ ct Storage" msgstr "" #: ../telemetry-measurements.rst:790 msgid "requ\\ est" msgstr "" #: ../telemetry-measurements.rst:790 msgid "storage.api.request" msgstr "" #: ../telemetry-measurements.rst:795 ../telemetry-measurements.rst:838 msgid "Number of objec\\ ts in container" msgstr "" #: ../telemetry-measurements.rst:795 ../telemetry-measurements.rst:798 #: ../telemetry-measurements.rst:838 ../telemetry-measurements.rst:841 msgid "storage ID\\ /container" msgstr "" #: ../telemetry-measurements.rst:795 msgid "storage.containers\\ .objects" msgstr "" #: ../telemetry-measurements.rst:798 msgid "Total size of s\\ tored objects i\\ n container" msgstr "" #: ../telemetry-measurements.rst:798 msgid "storage.containers\\ .objects.size" msgstr "" #: ../telemetry-measurements.rst:804 msgid "Ceph Object Storage" msgstr "" #: ../telemetry-measurements.rst:805 msgid "" "In order to gather meters from Ceph, you have to install and configure the " "Ceph Object Gateway (radosgw) as it is described in the `Installation Manual " "`__. 
You have to enable `usage " "logging `__ in " "order to get the related meters from Ceph. You will also need an ``admin`` " "user with ``users``, ``buckets``, ``metadata`` and ``usage`` ``caps`` " "configured." msgstr "" #: ../telemetry-measurements.rst:813 msgid "" "In order to access Ceph from Telemetry, you need to specify a ``service " "group`` for ``radosgw`` in the ``ceilometer.conf`` configuration file along " "with ``access_key`` and ``secret_key`` of the ``admin`` user mentioned above." msgstr "" #: ../telemetry-measurements.rst:818 msgid "The following meters are collected for Ceph Object Storage:" msgstr "" #: ../telemetry-measurements.rst:825 msgid "Number of objects" msgstr "" #: ../telemetry-measurements.rst:825 msgid "radosgw.objects" msgstr "" #: ../telemetry-measurements.rst:827 msgid "radosgw.objects.\\ size" msgstr "" #: ../telemetry-measurements.rst:830 msgid "contai\\ ner" msgstr "" #: ../telemetry-measurements.rst:830 msgid "radosgw.objects.\\ containers" msgstr "" #: ../telemetry-measurements.rst:833 msgid "Number of API r\\ equests against Ceph Object Ga\\ teway (radosgw)" msgstr "" #: ../telemetry-measurements.rst:833 msgid "radosgw.api.requ\\ est" msgstr "" #: ../telemetry-measurements.rst:833 msgid "request" msgstr "" #: ../telemetry-measurements.rst:838 msgid "radosgw.containe\\ rs.objects" msgstr "" #: ../telemetry-measurements.rst:841 msgid "Total size of s\\ tored objects in container" msgstr "" #: ../telemetry-measurements.rst:841 msgid "radosgw.containe\\ rs.objects.size" msgstr "" #: ../telemetry-measurements.rst:848 msgid "" "The ``usage`` related information may not be updated right after an upload " "or download, because the Ceph Object Gateway needs time to update the usage " "properties. For instance, the default configuration needs approximately 30 " "minutes to generate the usage logs." 
msgstr "" #: ../telemetry-measurements.rst:854 msgid "OpenStack Identity" msgstr "" #: ../telemetry-measurements.rst:856 msgid "The following meters are collected for OpenStack Identity:" msgstr "" #: ../telemetry-measurements.rst:863 msgid "User successful\\ ly authenticated" msgstr "" #: ../telemetry-measurements.rst:863 msgid "identity.authent\\ icate.success" msgstr "" #: ../telemetry-measurements.rst:863 ../telemetry-measurements.rst:866 #: ../telemetry-measurements.rst:869 ../telemetry-measurements.rst:872 #: ../telemetry-measurements.rst:875 ../telemetry-measurements.rst:878 msgid "user" msgstr "" #: ../telemetry-measurements.rst:863 ../telemetry-measurements.rst:866 #: ../telemetry-measurements.rst:869 ../telemetry-measurements.rst:872 #: ../telemetry-measurements.rst:875 ../telemetry-measurements.rst:878 msgid "user ID" msgstr "" #: ../telemetry-measurements.rst:866 msgid "User pending au\\ thentication" msgstr "" #: ../telemetry-measurements.rst:866 msgid "identity.authent\\ icate.pending" msgstr "" #: ../telemetry-measurements.rst:869 msgid "User failed to authenticate" msgstr "" #: ../telemetry-measurements.rst:869 msgid "identity.authent\\ icate.failure" msgstr "" #: ../telemetry-measurements.rst:872 msgid "User is created" msgstr "" #: ../telemetry-measurements.rst:872 msgid "identity.user.cr\\ eated" msgstr "" #: ../telemetry-measurements.rst:875 msgid "User is deleted" msgstr "" #: ../telemetry-measurements.rst:875 msgid "identity.user.de\\ leted" msgstr "" #: ../telemetry-measurements.rst:878 msgid "User is updated" msgstr "" #: ../telemetry-measurements.rst:878 msgid "identity.user.up\\ dated" msgstr "" #: ../telemetry-measurements.rst:881 msgid "Group is created" msgstr "" #: ../telemetry-measurements.rst:881 ../telemetry-measurements.rst:884 #: ../telemetry-measurements.rst:887 msgid "group" msgstr "" #: ../telemetry-measurements.rst:881 ../telemetry-measurements.rst:884 #: ../telemetry-measurements.rst:887 msgid "group ID" msgstr "" #: 
../telemetry-measurements.rst:881 msgid "identity.group.c\\ reated" msgstr "" #: ../telemetry-measurements.rst:884 msgid "Group is deleted" msgstr "" #: ../telemetry-measurements.rst:884 msgid "identity.group.d\\ eleted" msgstr "" #: ../telemetry-measurements.rst:887 msgid "Group is updated" msgstr "" #: ../telemetry-measurements.rst:887 msgid "identity.group.u\\ pdated" msgstr "" #: ../telemetry-measurements.rst:890 msgid "Role is created" msgstr "" #: ../telemetry-measurements.rst:890 msgid "identity.role.cr\\ eated" msgstr "" #: ../telemetry-measurements.rst:890 ../telemetry-measurements.rst:893 #: ../telemetry-measurements.rst:896 msgid "role" msgstr "" #: ../telemetry-measurements.rst:890 ../telemetry-measurements.rst:893 #: ../telemetry-measurements.rst:896 ../telemetry-measurements.rst:916 #: ../telemetry-measurements.rst:920 msgid "role ID" msgstr "" #: ../telemetry-measurements.rst:893 msgid "Role is deleted" msgstr "" #: ../telemetry-measurements.rst:893 msgid "identity.role.de\\ leted" msgstr "" #: ../telemetry-measurements.rst:896 msgid "Role is updated" msgstr "" #: ../telemetry-measurements.rst:896 msgid "identity.role.up\\ dated" msgstr "" #: ../telemetry-measurements.rst:899 msgid "Project is crea\\ ted" msgstr "" #: ../telemetry-measurements.rst:899 msgid "identity.project\\ .created" msgstr "" #: ../telemetry-measurements.rst:899 ../telemetry-measurements.rst:902 #: ../telemetry-measurements.rst:905 msgid "project" msgstr "" #: ../telemetry-measurements.rst:899 ../telemetry-measurements.rst:902 #: ../telemetry-measurements.rst:905 msgid "project ID" msgstr "" #: ../telemetry-measurements.rst:902 msgid "Project is dele\\ ted" msgstr "" #: ../telemetry-measurements.rst:902 msgid "identity.project\\ .deleted" msgstr "" #: ../telemetry-measurements.rst:905 msgid "Project is upda\\ ted" msgstr "" #: ../telemetry-measurements.rst:905 msgid "identity.project\\ .updated" msgstr "" #: ../telemetry-measurements.rst:908 msgid "Trust is created" msgstr "" #: 
../telemetry-measurements.rst:908 msgid "identity.trust.c\\ reated" msgstr "" #: ../telemetry-measurements.rst:908 ../telemetry-measurements.rst:911 msgid "trust" msgstr "" #: ../telemetry-measurements.rst:908 ../telemetry-measurements.rst:911 msgid "trust ID" msgstr "" #: ../telemetry-measurements.rst:911 msgid "Trust is deleted" msgstr "" #: ../telemetry-measurements.rst:911 msgid "identity.trust.d\\ eleted" msgstr "" #: ../telemetry-measurements.rst:916 msgid "Role is added to an actor on a target" msgstr "" #: ../telemetry-measurements.rst:916 msgid "identity.role_as\\ signment.created" msgstr "" #: ../telemetry-measurements.rst:916 ../telemetry-measurements.rst:920 msgid "role_a\\ ssignm\\ ent" msgstr "" #: ../telemetry-measurements.rst:920 msgid "Role is removed from an actor on a target" msgstr "" #: ../telemetry-measurements.rst:920 msgid "identity.role_as\\ signment.deleted" msgstr "" #: ../telemetry-measurements.rst:928 msgid "The following meters are collected for OpenStack Networking:" msgstr "" #: ../telemetry-measurements.rst:935 msgid "Existence of ne\\ twork" msgstr "" #: ../telemetry-measurements.rst:935 ../telemetry-measurements.rst:938 #: ../telemetry-measurements.rst:942 msgid "networ\\ k" msgstr "" #: ../telemetry-measurements.rst:935 msgid "network" msgstr "" #: ../telemetry-measurements.rst:935 ../telemetry-measurements.rst:938 #: ../telemetry-measurements.rst:942 msgid "network ID" msgstr "" #: ../telemetry-measurements.rst:938 msgid "Creation reques\\ ts for this net\\ work" msgstr "" #: ../telemetry-measurements.rst:938 msgid "network.create" msgstr "" #: ../telemetry-measurements.rst:942 msgid "Update requests for this network" msgstr "" #: ../telemetry-measurements.rst:942 msgid "network.update" msgstr "" #: ../telemetry-measurements.rst:945 msgid "Existence of su\\ bnet" msgstr "" #: ../telemetry-measurements.rst:945 ../telemetry-measurements.rst:948 #: ../telemetry-measurements.rst:952 msgid "subnet" msgstr "" #: 
../telemetry-measurements.rst:945 ../telemetry-measurements.rst:948 #: ../telemetry-measurements.rst:952 msgid "subnet ID" msgstr "" #: ../telemetry-measurements.rst:948 msgid "Creation reques\\ ts for this sub\\ net" msgstr "" #: ../telemetry-measurements.rst:948 msgid "subnet.create" msgstr "" #: ../telemetry-measurements.rst:952 msgid "Update requests for this subnet" msgstr "" #: ../telemetry-measurements.rst:952 msgid "subnet.update" msgstr "" #: ../telemetry-measurements.rst:955 ../telemetry-measurements.rst:1002 msgid "Existence of po\\ rt" msgstr "" #: ../telemetry-measurements.rst:955 ../telemetry-measurements.rst:958 #: ../telemetry-measurements.rst:961 msgid "port ID" msgstr "" #: ../telemetry-measurements.rst:958 msgid "Creation reques\\ ts for this port" msgstr "" #: ../telemetry-measurements.rst:958 msgid "port.create" msgstr "" #: ../telemetry-measurements.rst:961 msgid "Update requests for this port" msgstr "" #: ../telemetry-measurements.rst:961 msgid "port.update" msgstr "" #: ../telemetry-measurements.rst:964 msgid "Existence of ro\\ uter" msgstr "" #: ../telemetry-measurements.rst:964 ../telemetry-measurements.rst:967 #: ../telemetry-measurements.rst:971 msgid "router" msgstr "" #: ../telemetry-measurements.rst:964 ../telemetry-measurements.rst:967 #: ../telemetry-measurements.rst:971 msgid "router ID" msgstr "" #: ../telemetry-measurements.rst:967 msgid "Creation reques\\ ts for this rou\\ ter" msgstr "" #: ../telemetry-measurements.rst:967 msgid "router.create" msgstr "" #: ../telemetry-measurements.rst:971 msgid "Update requests for this router" msgstr "" #: ../telemetry-measurements.rst:971 msgid "router.update" msgstr "" #: ../telemetry-measurements.rst:974 msgid "Existence of IP" msgstr "" #: ../telemetry-measurements.rst:974 ../telemetry-measurements.rst:978 #: ../telemetry-measurements.rst:981 msgid "ip" msgstr "" #: ../telemetry-measurements.rst:974 ../telemetry-measurements.rst:978 #: ../telemetry-measurements.rst:981 msgid "ip ID" 
msgstr "" #: ../telemetry-measurements.rst:974 msgid "ip.floating" msgstr "" #: ../telemetry-measurements.rst:978 msgid "Creation reques\\ ts for this IP" msgstr "" #: ../telemetry-measurements.rst:978 msgid "ip.floating.cr\\ eate" msgstr "" #: ../telemetry-measurements.rst:981 msgid "Update requests for this IP" msgstr "" #: ../telemetry-measurements.rst:981 msgid "ip.floating.up\\ date" msgstr "" #: ../telemetry-measurements.rst:984 msgid "Bytes through t\\ his l3 metering label" msgstr "" #: ../telemetry-measurements.rst:984 msgid "bandwidth" msgstr "" #: ../telemetry-measurements.rst:984 msgid "label ID" msgstr "" #: ../telemetry-measurements.rst:990 msgid "SDN controllers" msgstr "" #: ../telemetry-measurements.rst:992 msgid "The following meters are collected for SDN:" msgstr "" #: ../telemetry-measurements.rst:999 msgid "Existence of sw\\ itch" msgstr "" #: ../telemetry-measurements.rst:999 msgid "switch" msgstr "" #: ../telemetry-measurements.rst:999 ../telemetry-measurements.rst:1002 #: ../telemetry-measurements.rst:1005 ../telemetry-measurements.rst:1008 #: ../telemetry-measurements.rst:1011 ../telemetry-measurements.rst:1014 #: ../telemetry-measurements.rst:1017 ../telemetry-measurements.rst:1020 #: ../telemetry-measurements.rst:1023 ../telemetry-measurements.rst:1026 #: ../telemetry-measurements.rst:1029 ../telemetry-measurements.rst:1033 #: ../telemetry-measurements.rst:1037 ../telemetry-measurements.rst:1040 #: ../telemetry-measurements.rst:1043 ../telemetry-measurements.rst:1046 #: ../telemetry-measurements.rst:1049 ../telemetry-measurements.rst:1052 #: ../telemetry-measurements.rst:1055 ../telemetry-measurements.rst:1057 #: ../telemetry-measurements.rst:1060 ../telemetry-measurements.rst:1064 #: ../telemetry-measurements.rst:1067 msgid "switch ID" msgstr "" #: ../telemetry-measurements.rst:1002 msgid "switch.port" msgstr "" #: ../telemetry-measurements.rst:1005 ../telemetry-measurements.rst:1008 #: ../telemetry-measurements.rst:1011 
../telemetry-measurements.rst:1014 #: ../telemetry-measurements.rst:1017 ../telemetry-measurements.rst:1020 #: ../telemetry-measurements.rst:1023 ../telemetry-measurements.rst:1026 #: ../telemetry-measurements.rst:1029 ../telemetry-measurements.rst:1033 #: ../telemetry-measurements.rst:1037 ../telemetry-measurements.rst:1040 #: ../telemetry-measurements.rst:1064 ../telemetry-measurements.rst:1067 #: ../telemetry-measurements.rst:1102 ../telemetry-measurements.rst:1184 msgid "Cumula\\ tive" msgstr "" #: ../telemetry-measurements.rst:1005 msgid "Packets receive\\ d on port" msgstr "" #: ../telemetry-measurements.rst:1005 ../telemetry-measurements.rst:1008 #: ../telemetry-measurements.rst:1017 ../telemetry-measurements.rst:1020 #: ../telemetry-measurements.rst:1023 ../telemetry-measurements.rst:1026 #: ../telemetry-measurements.rst:1029 ../telemetry-measurements.rst:1033 #: ../telemetry-measurements.rst:1037 ../telemetry-measurements.rst:1049 #: ../telemetry-measurements.rst:1052 ../telemetry-measurements.rst:1064 msgid "packet" msgstr "" #: ../telemetry-measurements.rst:1005 msgid "switch.port.re\\ ceive.packets" msgstr "" #: ../telemetry-measurements.rst:1008 msgid "Packets transmi\\ tted on port" msgstr "" #: ../telemetry-measurements.rst:1008 msgid "switch.port.tr\\ ansmit.packets" msgstr "" #: ../telemetry-measurements.rst:1011 msgid "Bytes received on port" msgstr "" #: ../telemetry-measurements.rst:1011 msgid "switch.port.re\\ ceive.bytes" msgstr "" #: ../telemetry-measurements.rst:1014 msgid "Bytes transmitt\\ ed on port" msgstr "" #: ../telemetry-measurements.rst:1014 msgid "switch.port.tr\\ ansmit.bytes" msgstr "" #: ../telemetry-measurements.rst:1017 msgid "Drops received on port" msgstr "" #: ../telemetry-measurements.rst:1017 msgid "switch.port.re\\ ceive.drops" msgstr "" #: ../telemetry-measurements.rst:1020 msgid "Drops transmitt\\ ed on port" msgstr "" #: ../telemetry-measurements.rst:1020 msgid "switch.port.tr\\ ansmit.drops" msgstr "" #: 
../telemetry-measurements.rst:1023 msgid "Errors received on port" msgstr "" #: ../telemetry-measurements.rst:1023 msgid "switch.port.re\\ ceive.errors" msgstr "" #: ../telemetry-measurements.rst:1026 msgid "Errors transmit\\ ted on port" msgstr "" #: ../telemetry-measurements.rst:1026 msgid "switch.port.tr\\ ansmit.errors" msgstr "" #: ../telemetry-measurements.rst:1029 msgid "Frame alignment errors receive\\ d on port" msgstr "" #: ../telemetry-measurements.rst:1029 msgid "switch.port.re\\ ceive.frame\\_er\\ ror" msgstr "" #: ../telemetry-measurements.rst:1033 msgid "Overrun errors received on port" msgstr "" #: ../telemetry-measurements.rst:1033 msgid "switch.port.re\\ ceive.overrun\\_\\ error" msgstr "" #: ../telemetry-measurements.rst:1037 msgid "CRC errors rece\\ ived on port" msgstr "" #: ../telemetry-measurements.rst:1037 msgid "switch.port.re\\ ceive.crc\\_error" msgstr "" #: ../telemetry-measurements.rst:1040 msgid "Collisions on p\\ ort" msgstr "" #: ../telemetry-measurements.rst:1040 msgid "count" msgstr "" #: ../telemetry-measurements.rst:1040 msgid "switch.port.co\\ llision.count" msgstr "" #: ../telemetry-measurements.rst:1043 msgid "Duration of tab\\ le" msgstr "" #: ../telemetry-measurements.rst:1043 msgid "switch.table" msgstr "" #: ../telemetry-measurements.rst:1043 ../telemetry-measurements.rst:1429 #: ../telemetry-measurements.rst:1432 msgid "table" msgstr "" #: ../telemetry-measurements.rst:1046 msgid "Active entries in table" msgstr "" #: ../telemetry-measurements.rst:1046 msgid "entry" msgstr "" #: ../telemetry-measurements.rst:1046 msgid "switch.table.a\\ ctive.entries" msgstr "" #: ../telemetry-measurements.rst:1049 msgid "Lookup packets for table" msgstr "" #: ../telemetry-measurements.rst:1049 msgid "switch.table.l\\ ookup.packets" msgstr "" #: ../telemetry-measurements.rst:1052 msgid "Packets matches for table" msgstr "" #: ../telemetry-measurements.rst:1052 msgid "switch.table.m\\ atched.packets" msgstr "" #: 
../telemetry-measurements.rst:1055 msgid "Duration of flow" msgstr "" #: ../telemetry-measurements.rst:1055 msgid "flow" msgstr "" #: ../telemetry-measurements.rst:1055 msgid "switch.flow" msgstr "" #: ../telemetry-measurements.rst:1057 msgid "Duration of flow in seconds" msgstr "" #: ../telemetry-measurements.rst:1057 msgid "s" msgstr "" #: ../telemetry-measurements.rst:1057 msgid "switch.flow.du\\ ration.seconds" msgstr "" #: ../telemetry-measurements.rst:1060 msgid "Duration of flow in nanoseconds" msgstr "" #: ../telemetry-measurements.rst:1060 msgid "switch.flow.du\\ ration.nanosec\\ onds" msgstr "" #: ../telemetry-measurements.rst:1064 msgid "Packets received" msgstr "" #: ../telemetry-measurements.rst:1064 msgid "switch.flow.pa\\ ckets" msgstr "" #: ../telemetry-measurements.rst:1067 msgid "Bytes received" msgstr "" #: ../telemetry-measurements.rst:1067 msgid "switch.flow.by\\ tes" msgstr "" #: ../telemetry-measurements.rst:1073 msgid "" "These meters are available for OpenFlow based switches. In order to enable " "these meters, each driver needs to be properly configured." 
msgstr "" #: ../telemetry-measurements.rst:1077 msgid "Load-Balancer-as-a-Service (LBaaS v1)" msgstr "" #: ../telemetry-measurements.rst:1079 msgid "The following meters are collected for LBaaS v1:" msgstr "" #: ../telemetry-measurements.rst:1086 ../telemetry-measurements.rst:1164 msgid "Existence of a LB pool" msgstr "" #: ../telemetry-measurements.rst:1086 ../telemetry-measurements.rst:1164 msgid "network.serv\\ ices.lb.pool" msgstr "" #: ../telemetry-measurements.rst:1086 ../telemetry-measurements.rst:1120 #: ../telemetry-measurements.rst:1124 ../telemetry-measurements.rst:1164 #: ../telemetry-measurements.rst:1200 ../telemetry-measurements.rst:1204 msgid "pool" msgstr "" #: ../telemetry-measurements.rst:1086 ../telemetry-measurements.rst:1102 #: ../telemetry-measurements.rst:1106 ../telemetry-measurements.rst:1110 #: ../telemetry-measurements.rst:1114 ../telemetry-measurements.rst:1120 #: ../telemetry-measurements.rst:1124 ../telemetry-measurements.rst:1164 #: ../telemetry-measurements.rst:1184 ../telemetry-measurements.rst:1188 #: ../telemetry-measurements.rst:1192 ../telemetry-measurements.rst:1196 #: ../telemetry-measurements.rst:1200 ../telemetry-measurements.rst:1204 msgid "pool ID" msgstr "" #: ../telemetry-measurements.rst:1090 msgid "Existence of a LB VIP" msgstr "" #: ../telemetry-measurements.rst:1090 msgid "network.serv\\ ices.lb.vip" msgstr "" #: ../telemetry-measurements.rst:1090 ../telemetry-measurements.rst:1128 #: ../telemetry-measurements.rst:1132 msgid "vip" msgstr "" #: ../telemetry-measurements.rst:1090 ../telemetry-measurements.rst:1128 #: ../telemetry-measurements.rst:1132 msgid "vip ID" msgstr "" #: ../telemetry-measurements.rst:1094 ../telemetry-measurements.rst:1172 msgid "Existence of a LB member" msgstr "" #: ../telemetry-measurements.rst:1094 ../telemetry-measurements.rst:1136 #: ../telemetry-measurements.rst:1140 ../telemetry-measurements.rst:1172 #: ../telemetry-measurements.rst:1216 ../telemetry-measurements.rst:1220 msgid 
"member" msgstr "" #: ../telemetry-measurements.rst:1094 ../telemetry-measurements.rst:1136 #: ../telemetry-measurements.rst:1140 ../telemetry-measurements.rst:1172 #: ../telemetry-measurements.rst:1216 ../telemetry-measurements.rst:1220 msgid "member ID" msgstr "" #: ../telemetry-measurements.rst:1094 ../telemetry-measurements.rst:1172 msgid "network.serv\\ ices.lb.memb\\ er" msgstr "" #: ../telemetry-measurements.rst:1098 ../telemetry-measurements.rst:1176 msgid "Existence of a LB health probe" msgstr "" #: ../telemetry-measurements.rst:1098 ../telemetry-measurements.rst:1144 #: ../telemetry-measurements.rst:1149 ../telemetry-measurements.rst:1176 #: ../telemetry-measurements.rst:1224 ../telemetry-measurements.rst:1229 msgid "health\\ _monit\\ or" msgstr "" #: ../telemetry-measurements.rst:1098 ../telemetry-measurements.rst:1144 #: ../telemetry-measurements.rst:1149 ../telemetry-measurements.rst:1176 #: ../telemetry-measurements.rst:1224 ../telemetry-measurements.rst:1229 msgid "monitor ID" msgstr "" #: ../telemetry-measurements.rst:1098 ../telemetry-measurements.rst:1176 msgid "network.serv\\ ices.lb.heal\\ th_monitor" msgstr "" #: ../telemetry-measurements.rst:1102 ../telemetry-measurements.rst:1184 msgid "Total connectio\\ ns on a LB" msgstr "" #: ../telemetry-measurements.rst:1102 ../telemetry-measurements.rst:1106 #: ../telemetry-measurements.rst:1184 ../telemetry-measurements.rst:1188 msgid "connec\\ tion" msgstr "" #: ../telemetry-measurements.rst:1102 ../telemetry-measurements.rst:1184 msgid "network.serv\\ ices.lb.tota\\ l.connections" msgstr "" #: ../telemetry-measurements.rst:1106 ../telemetry-measurements.rst:1188 msgid "Active connecti\\ ons on a LB" msgstr "" #: ../telemetry-measurements.rst:1106 ../telemetry-measurements.rst:1188 msgid "network.serv\\ ices.lb.acti\\ ve.connections" msgstr "" #: ../telemetry-measurements.rst:1110 ../telemetry-measurements.rst:1192 msgid "Number of incom\\ ing Bytes" msgstr "" #: ../telemetry-measurements.rst:1110 
../telemetry-measurements.rst:1192 msgid "network.serv\\ ices.lb.inco\\ ming.bytes" msgstr "" #: ../telemetry-measurements.rst:1114 ../telemetry-measurements.rst:1196 msgid "Number of outgo\\ ing Bytes" msgstr "" #: ../telemetry-measurements.rst:1114 ../telemetry-measurements.rst:1196 msgid "network.serv\\ ices.lb.outg\\ oing.bytes" msgstr "" #: ../telemetry-measurements.rst:1120 ../telemetry-measurements.rst:1200 msgid "LB pool was cre\\ ated" msgstr "" #: ../telemetry-measurements.rst:1120 ../telemetry-measurements.rst:1200 msgid "network.serv\\ ices.lb.pool\\ .create" msgstr "" #: ../telemetry-measurements.rst:1124 ../telemetry-measurements.rst:1204 msgid "LB pool was upd\\ ated" msgstr "" #: ../telemetry-measurements.rst:1124 ../telemetry-measurements.rst:1204 msgid "network.serv\\ ices.lb.pool\\ .update" msgstr "" #: ../telemetry-measurements.rst:1128 msgid "LB VIP was crea\\ ted" msgstr "" #: ../telemetry-measurements.rst:1128 msgid "network.serv\\ ices.lb.vip.\\ create" msgstr "" #: ../telemetry-measurements.rst:1132 msgid "LB VIP was upda\\ ted" msgstr "" #: ../telemetry-measurements.rst:1132 msgid "network.serv\\ ices.lb.vip.\\ update" msgstr "" #: ../telemetry-measurements.rst:1136 ../telemetry-measurements.rst:1216 msgid "LB member was c\\ reated" msgstr "" #: ../telemetry-measurements.rst:1136 ../telemetry-measurements.rst:1216 msgid "network.serv\\ ices.lb.memb\\ er.create" msgstr "" #: ../telemetry-measurements.rst:1140 ../telemetry-measurements.rst:1220 msgid "LB member was u\\ pdated" msgstr "" #: ../telemetry-measurements.rst:1140 ../telemetry-measurements.rst:1220 msgid "network.serv\\ ices.lb.memb\\ er.update" msgstr "" #: ../telemetry-measurements.rst:1144 ../telemetry-measurements.rst:1224 msgid "LB health probe was created" msgstr "" #: ../telemetry-measurements.rst:1144 msgid "network.serv\\ ices.lb.heal\\ th_monitor.c\\ reate" msgstr "" #: ../telemetry-measurements.rst:1149 ../telemetry-measurements.rst:1229 msgid "LB health probe was 
updated" msgstr "" #: ../telemetry-measurements.rst:1149 msgid "network.serv\\ ices.lb.heal\\ th_monitor.u\\ pdate" msgstr "" #: ../telemetry-measurements.rst:1156 msgid "Load-Balancer-as-a-Service (LBaaS v2)" msgstr "" #: ../telemetry-measurements.rst:1158 msgid "" "The following meters are collected for LBaaS v2. They are added in Mitaka " "release:" msgstr "" #: ../telemetry-measurements.rst:1168 msgid "Existence of a LB listener" msgstr "" #: ../telemetry-measurements.rst:1168 ../telemetry-measurements.rst:1208 #: ../telemetry-measurements.rst:1212 msgid "listen\\ er" msgstr "" #: ../telemetry-measurements.rst:1168 ../telemetry-measurements.rst:1208 #: ../telemetry-measurements.rst:1212 msgid "listener ID" msgstr "" #: ../telemetry-measurements.rst:1168 msgid "network.serv\\ ices.lb.list\\ ener" msgstr "" #: ../telemetry-measurements.rst:1180 msgid "Existence of a LB loadbalancer" msgstr "" #: ../telemetry-measurements.rst:1180 msgid "loadba\\ lancer" msgstr "" #: ../telemetry-measurements.rst:1180 ../telemetry-measurements.rst:1234 #: ../telemetry-measurements.rst:1239 msgid "loadbala\\ ncer ID" msgstr "" #: ../telemetry-measurements.rst:1180 msgid "network.serv\\ ices.lb.load\\ balancer" msgstr "" #: ../telemetry-measurements.rst:1208 msgid "LB listener was created" msgstr "" #: ../telemetry-measurements.rst:1208 msgid "network.serv\\ ices.lb.list\\ ener.create" msgstr "" #: ../telemetry-measurements.rst:1212 msgid "LB listener was updated" msgstr "" #: ../telemetry-measurements.rst:1212 msgid "network.serv\\ ices.lb.list\\ ener.update" msgstr "" #: ../telemetry-measurements.rst:1224 msgid "network.serv\\ ices.lb.heal\\ thmonitor.cr\\ eate" msgstr "" #: ../telemetry-measurements.rst:1229 msgid "network.serv\\ ices.lb.heal\\ thmonitor.up\\ date" msgstr "" #: ../telemetry-measurements.rst:1234 msgid "LB loadbalancer was created" msgstr "" #: ../telemetry-measurements.rst:1234 ../telemetry-measurements.rst:1239 msgid "loadba\\ lancer\\" msgstr "" #: 
../telemetry-measurements.rst:1234 msgid "network.serv\\ ices.lb.load\\ balancer.cre\\ ate" msgstr "" #: ../telemetry-measurements.rst:1239 msgid "LB loadbalancer was updated" msgstr "" #: ../telemetry-measurements.rst:1239 msgid "network.serv\\ ices.lb.load\\ balancer.upd\\ ate" msgstr "" #: ../telemetry-measurements.rst:1247 msgid "" "The above meters are experimental and may generate a large load against the " "Neutron APIs. A future enhancement will be implemented when Neutron " "supports the new APIs." msgstr "" #: ../telemetry-measurements.rst:1252 msgid "VPN-as-a-Service (VPNaaS)" msgstr "" #: ../telemetry-measurements.rst:1254 msgid "The following meters are collected for VPNaaS:" msgstr "" #: ../telemetry-measurements.rst:1261 msgid "Existence of a VPN" msgstr "" #: ../telemetry-measurements.rst:1261 msgid "network.serv\\ ices.vpn" msgstr "" #: ../telemetry-measurements.rst:1261 ../telemetry-measurements.rst:1272 #: ../telemetry-measurements.rst:1276 msgid "vpn ID" msgstr "" #: ../telemetry-measurements.rst:1261 ../telemetry-measurements.rst:1272 #: ../telemetry-measurements.rst:1276 msgid "vpnser\\ vice" msgstr "" #: ../telemetry-measurements.rst:1265 msgid "Existence of an IPSec connection" msgstr "" #: ../telemetry-measurements.rst:1265 ../telemetry-measurements.rst:1280 #: ../telemetry-measurements.rst:1285 msgid "connection ID" msgstr "" #: ../telemetry-measurements.rst:1265 ../telemetry-measurements.rst:1280 #: ../telemetry-measurements.rst:1285 msgid "ipsec\\_\\ site\\_c\\ onnect\\ ion" msgstr "" #: ../telemetry-measurements.rst:1265 msgid "network.serv\\ ices.vpn.con\\ nections" msgstr "" #: ../telemetry-measurements.rst:1272 msgid "VPN was created" msgstr "" #: ../telemetry-measurements.rst:1272 msgid "network.serv\\ ices.vpn.cre\\ ate" msgstr "" #: ../telemetry-measurements.rst:1276 msgid "VPN was updated" msgstr "" #: ../telemetry-measurements.rst:1276 msgid "network.serv\\ ices.vpn.upd\\ ate" msgstr "" #: ../telemetry-measurements.rst:1280
msgid "IPSec connection was created" msgstr "" #: ../telemetry-measurements.rst:1280 msgid "network.serv\\ ices.vpn.con\\ nections.cre\\ ate" msgstr "" #: ../telemetry-measurements.rst:1285 msgid "IPSec connection was updated" msgstr "" #: ../telemetry-measurements.rst:1285 msgid "network.serv\\ ices.vpn.con\\ nections.upd\\ ate" msgstr "" #: ../telemetry-measurements.rst:1290 msgid "Existence of an IPSec policy" msgstr "" #: ../telemetry-measurements.rst:1290 ../telemetry-measurements.rst:1294 #: ../telemetry-measurements.rst:1299 msgid "ipsecp\\ olicy" msgstr "" #: ../telemetry-measurements.rst:1290 ../telemetry-measurements.rst:1294 #: ../telemetry-measurements.rst:1299 msgid "ipsecpolicy ID" msgstr "" #: ../telemetry-measurements.rst:1290 msgid "network.serv\\ ices.vpn.ips\\ ecpolicy" msgstr "" #: ../telemetry-measurements.rst:1294 msgid "IPSec policy was created" msgstr "" #: ../telemetry-measurements.rst:1294 msgid "network.serv\\ ices.vpn.ips\\ ecpolicy.cre\\ ate" msgstr "" #: ../telemetry-measurements.rst:1299 msgid "IPSec policy was updated" msgstr "" #: ../telemetry-measurements.rst:1299 msgid "network.serv\\ ices.vpn.ips\\ ecpolicy.upd\\ ate" msgstr "" #: ../telemetry-measurements.rst:1304 msgid "Existence of an Ike policy" msgstr "" #: ../telemetry-measurements.rst:1304 ../telemetry-measurements.rst:1308 #: ../telemetry-measurements.rst:1312 msgid "ikepol\\ icy" msgstr "" #: ../telemetry-measurements.rst:1304 ../telemetry-measurements.rst:1308 #: ../telemetry-measurements.rst:1312 msgid "ikepolicy ID" msgstr "" #: ../telemetry-measurements.rst:1304 msgid "network.serv\\ ices.vpn.ike\\ policy" msgstr "" #: ../telemetry-measurements.rst:1308 msgid "Ike policy was created" msgstr "" #: ../telemetry-measurements.rst:1308 msgid "network.serv\\ ices.vpn.ike\\ policy.create" msgstr "" #: ../telemetry-measurements.rst:1312 msgid "Ike policy was updated" msgstr "" #: ../telemetry-measurements.rst:1312 msgid "network.serv\\ ices.vpn.ike\\ policy.update" msgstr "" 
#: ../telemetry-measurements.rst:1318 msgid "Firewall-as-a-Service (FWaaS)" msgstr "" #: ../telemetry-measurements.rst:1320 msgid "The following meters are collected for FWaaS:" msgstr "" #: ../telemetry-measurements.rst:1327 msgid "Existence of a firewall" msgstr "" #: ../telemetry-measurements.rst:1327 ../telemetry-measurements.rst:1337 #: ../telemetry-measurements.rst:1341 msgid "firewall" msgstr "" #: ../telemetry-measurements.rst:1327 ../telemetry-measurements.rst:1331 #: ../telemetry-measurements.rst:1337 ../telemetry-measurements.rst:1341 msgid "firewall ID" msgstr "" #: ../telemetry-measurements.rst:1327 msgid "network.serv\\ ices.firewall" msgstr "" #: ../telemetry-measurements.rst:1331 msgid "Existence of a firewall policy" msgstr "" #: ../telemetry-measurements.rst:1331 ../telemetry-measurements.rst:1345 #: ../telemetry-measurements.rst:1350 msgid "firewa\\ ll_pol\\ icy" msgstr "" #: ../telemetry-measurements.rst:1331 msgid "network.serv\\ ices.firewal\\ l.policy" msgstr "" #: ../telemetry-measurements.rst:1337 msgid "Firewall was cr\\ eated" msgstr "" #: ../telemetry-measurements.rst:1337 msgid "network.serv\\ ices.firewal\\ l.create" msgstr "" #: ../telemetry-measurements.rst:1341 msgid "Firewall was up\\ dated" msgstr "" #: ../telemetry-measurements.rst:1341 msgid "network.serv\\ ices.firewal\\ l.update" msgstr "" #: ../telemetry-measurements.rst:1345 msgid "Firewall policy was created" msgstr "" #: ../telemetry-measurements.rst:1345 msgid "network.serv\\ ices.firewal\\ l.policy.cre\\ ate" msgstr "" #: ../telemetry-measurements.rst:1345 ../telemetry-measurements.rst:1350 msgid "policy ID" msgstr "" #: ../telemetry-measurements.rst:1350 msgid "Firewall policy was updated" msgstr "" #: ../telemetry-measurements.rst:1350 msgid "network.serv\\ ices.firewal\\ l.policy.upd\\ ate" msgstr "" #: ../telemetry-measurements.rst:1355 msgid "Existence of a firewall rule" msgstr "" #: ../telemetry-measurements.rst:1355 ../telemetry-measurements.rst:1359 #: 
../telemetry-measurements.rst:1364 msgid "firewa\\ ll_rule" msgstr "" #: ../telemetry-measurements.rst:1355 msgid "network.serv\\ ices.firewal\\ l.rule" msgstr "" #: ../telemetry-measurements.rst:1355 ../telemetry-measurements.rst:1359 #: ../telemetry-measurements.rst:1364 msgid "rule ID" msgstr "" #: ../telemetry-measurements.rst:1359 msgid "Firewall rule w\\ as created" msgstr "" #: ../telemetry-measurements.rst:1359 msgid "network.serv\\ ices.firewal\\ l.rule.create" msgstr "" #: ../telemetry-measurements.rst:1364 msgid "Firewall rule w\\ as updated" msgstr "" #: ../telemetry-measurements.rst:1364 msgid "network.serv\\ ices.firewal\\ l.rule.update" msgstr "" #: ../telemetry-measurements.rst:1371 msgid "The following meters are collected for the Orchestration service:" msgstr "" #: ../telemetry-measurements.rst:1378 msgid "Stack was success\\ fully created" msgstr "" #: ../telemetry-measurements.rst:1378 ../telemetry-measurements.rst:1381 #: ../telemetry-measurements.rst:1384 ../telemetry-measurements.rst:1387 #: ../telemetry-measurements.rst:1390 msgid "stack" msgstr "" #: ../telemetry-measurements.rst:1378 ../telemetry-measurements.rst:1381 #: ../telemetry-measurements.rst:1384 ../telemetry-measurements.rst:1387 #: ../telemetry-measurements.rst:1390 msgid "stack ID" msgstr "" #: ../telemetry-measurements.rst:1378 msgid "stack.create" msgstr "" #: ../telemetry-measurements.rst:1381 msgid "Stack was success\\ fully updated" msgstr "" #: ../telemetry-measurements.rst:1381 msgid "stack.update" msgstr "" #: ../telemetry-measurements.rst:1384 msgid "Stack was success\\ fully deleted" msgstr "" #: ../telemetry-measurements.rst:1384 msgid "stack.delete" msgstr "" #: ../telemetry-measurements.rst:1387 msgid "Stack was success\\ fully resumed" msgstr "" #: ../telemetry-measurements.rst:1387 msgid "stack.resume" msgstr "" #: ../telemetry-measurements.rst:1390 msgid "Stack was success\\ fully suspended" msgstr "" #: ../telemetry-measurements.rst:1390 msgid "stack.suspend" 
msgstr "" #: ../telemetry-measurements.rst:1395 msgid "Data processing service for OpenStack" msgstr "" #: ../telemetry-measurements.rst:1397 msgid "" "The following meters are collected for the Data processing service for " "OpenStack:" msgstr "" #: ../telemetry-measurements.rst:1405 msgid "Cluster was successfully created" msgstr "" #: ../telemetry-measurements.rst:1405 ../telemetry-measurements.rst:1410 #: ../telemetry-measurements.rst:1414 msgid "cluster" msgstr "" #: ../telemetry-measurements.rst:1405 ../telemetry-measurements.rst:1410 #: ../telemetry-measurements.rst:1414 msgid "cluster ID" msgstr "" #: ../telemetry-measurements.rst:1405 msgid "cluster.create" msgstr "" #: ../telemetry-measurements.rst:1410 msgid "Cluster was successfully updated" msgstr "" #: ../telemetry-measurements.rst:1410 msgid "cluster.update" msgstr "" #: ../telemetry-measurements.rst:1414 msgid "Cluster was successfully deleted" msgstr "" #: ../telemetry-measurements.rst:1414 msgid "cluster.delete" msgstr "" #: ../telemetry-measurements.rst:1420 msgid "Key Value Store module" msgstr "" #: ../telemetry-measurements.rst:1422 msgid "The following meters are collected for the Key Value Store module:" msgstr "" #: ../telemetry-measurements.rst:1429 msgid "Table was succe\\ ssfully created" msgstr "" #: ../telemetry-measurements.rst:1429 msgid "magnetodb.table.\\ create" msgstr "" #: ../telemetry-measurements.rst:1429 ../telemetry-measurements.rst:1432 #: ../telemetry-measurements.rst:1435 msgid "table ID" msgstr "" #: ../telemetry-measurements.rst:1432 msgid "Table was succe\\ ssfully deleted" msgstr "" #: ../telemetry-measurements.rst:1432 msgid "magnetodb.table\\ .delete" msgstr "" #: ../telemetry-measurements.rst:1435 msgid "Number of indices created in a table" msgstr "" #: ../telemetry-measurements.rst:1435 msgid "index" msgstr "" #: ../telemetry-measurements.rst:1435 msgid "magnetodb.table\\ .index.count" msgstr "" #: ../telemetry-measurements.rst:1441 msgid "Energy" msgstr "" #: 
../telemetry-measurements.rst:1443 msgid "The following energy-related meters are available:" msgstr "" #: ../telemetry-measurements.rst:1450 msgid "Amount of energy" msgstr "" #: ../telemetry-measurements.rst:1450 msgid "energy" msgstr "" #: ../telemetry-measurements.rst:1450 msgid "kWh" msgstr "" #: ../telemetry-measurements.rst:1450 ../telemetry-measurements.rst:1452 msgid "probe ID" msgstr "" #: ../telemetry-measurements.rst:1452 msgid "Power consumption" msgstr "" #: ../telemetry-measurements.rst:1452 msgid "power" msgstr "" #: ../telemetry-system-architecture.rst:7 msgid "" "The Telemetry service uses an agent-based architecture. Several modules " "combine their responsibilities to collect data, store samples in a database, " "or provide an API service for handling incoming requests." msgstr "" #: ../telemetry-system-architecture.rst:11 msgid "The Telemetry service is built from the following agents and services:" msgstr "" #: ../telemetry-system-architecture.rst:14 msgid "" "Presents aggregated metering data to consumers (such as billing engines and " "analytics tools)." msgstr "" #: ../telemetry-system-architecture.rst:15 msgid "ceilometer-api" msgstr "" #: ../telemetry-system-architecture.rst:18 msgid "" "Polls for different kinds of meter data by using the polling plug-ins " "(pollsters) registered in different namespaces. It provides a single polling " "interface across different namespaces." msgstr "" #: ../telemetry-system-architecture.rst:20 msgid "ceilometer-polling" msgstr "" #: ../telemetry-system-architecture.rst:23 msgid "" "Polls the public RESTful APIs of other OpenStack services such as Compute " "service and Image service, in order to keep tabs on resource existence, by " "using the polling plug-ins (pollsters) registered in the central polling " "namespace."
msgstr "" #: ../telemetry-system-architecture.rst:26 msgid "ceilometer-agent-central" msgstr "" #: ../telemetry-system-architecture.rst:29 msgid "" "Polls the local hypervisor or libvirt daemon to acquire performance data for " "the local instances and emits the data as AMQP messages, by using the " "polling plug-ins (pollsters) registered in the compute polling namespace." msgstr "" #: ../telemetry-system-architecture.rst:32 msgid "ceilometer-agent-compute" msgstr "" #: ../telemetry-system-architecture.rst:35 msgid "" "Polls the local node with IPMI support, in order to acquire IPMI sensor data " "and Intel Node Manager data, by using the polling plug-ins (pollsters) " "registered in the IPMI polling namespace." msgstr "" #: ../telemetry-system-architecture.rst:37 msgid "ceilometer-agent-ipmi" msgstr "" #: ../telemetry-system-architecture.rst:40 msgid "Consumes AMQP messages from other OpenStack services." msgstr "" #: ../telemetry-system-architecture.rst:40 msgid "ceilometer-agent-notification" msgstr "" #: ../telemetry-system-architecture.rst:43 msgid "" "Consumes AMQP notifications from the agents, then dispatches these data to " "the appropriate data store." msgstr "" #: ../telemetry-system-architecture.rst:44 msgid "ceilometer-collector" msgstr "" #: ../telemetry-system-architecture.rst:47 msgid "" "Determines when alarms fire due to the associated statistic trend crossing a " "threshold over a sliding time window." msgstr "" #: ../telemetry-system-architecture.rst:48 msgid "ceilometer-alarm-evaluator" msgstr "" #: ../telemetry-system-architecture.rst:51 msgid "" "Initiates alarm actions, for example calling out to a webhook with a " "description of the alarm state transition." msgstr "" #: ../telemetry-system-architecture.rst:56 msgid "" "The ``ceilometer-polling`` service has been available since the Kilo " "release. It is intended to replace ``ceilometer-agent-central``, " "``ceilometer-agent-compute``, and ``ceilometer-agent-ipmi``."
msgstr "" #: ../telemetry-system-architecture.rst:58 msgid "ceilometer-alarm-notifier" msgstr "" #: ../telemetry-system-architecture.rst:60 msgid "" "Besides the ``ceilometer-agent-compute`` and the ``ceilometer-agent-ipmi`` " "services, all the other services are placed on one or more controller nodes." msgstr "" #: ../telemetry-system-architecture.rst:64 msgid "" "The Telemetry architecture highly depends on the AMQP service both for " "consuming notifications coming from OpenStack services and internal " "communication." msgstr "" #: ../telemetry-system-architecture.rst:72 msgid "Supported databases" msgstr "" #: ../telemetry-system-architecture.rst:74 msgid "" "The other key external component of Telemetry is the database, where events, " "samples, alarm definitions, and alarms are stored." msgstr "" #: ../telemetry-system-architecture.rst:79 msgid "" "Multiple database back ends can be configured in order to store events, " "samples, and alarms separately." msgstr "" #: ../telemetry-system-architecture.rst:82 msgid "The list of supported database back ends:" msgstr "" #: ../telemetry-system-architecture.rst:84 msgid "`ElasticSearch (events only) `__" msgstr "" #: ../telemetry-system-architecture.rst:86 msgid "`MongoDB `__" msgstr "" #: ../telemetry-system-architecture.rst:88 msgid "`MySQL `__" msgstr "" #: ../telemetry-system-architecture.rst:90 msgid "`PostgreSQL `__" msgstr "" #: ../telemetry-system-architecture.rst:92 msgid "`HBase `__" msgstr "" #: ../telemetry-system-architecture.rst:98 msgid "Supported hypervisors" msgstr "" #: ../telemetry-system-architecture.rst:100 msgid "" "The Telemetry service collects information about the virtual machines, which " "requires close connection to the hypervisor that runs on the compute hosts." msgstr "" #: ../telemetry-system-architecture.rst:104 msgid "The following is a list of supported hypervisors." 
msgstr "" #: ../telemetry-system-architecture.rst:106 msgid "" "The following hypervisors are supported via `libvirt `__" msgstr "" #: ../telemetry-system-architecture.rst:110 msgid "`Quick Emulator (QEMU) `__" msgstr "" #: ../telemetry-system-architecture.rst:114 msgid "`User-mode Linux (UML) `__" msgstr "" #: ../telemetry-system-architecture.rst:118 msgid "" "For details about hypervisor support in libvirt please check the `Libvirt " "API support matrix `__." msgstr "" #: ../telemetry-system-architecture.rst:123 msgid "`XEN `__" msgstr "" #: ../telemetry-system-architecture.rst:129 msgid "Supported networking services" msgstr "" #: ../telemetry-system-architecture.rst:131 msgid "" "Telemetry is able to retrieve information from OpenStack Networking and " "external networking services:" msgstr "" #: ../telemetry-system-architecture.rst:134 msgid "OpenStack Networking:" msgstr "" #: ../telemetry-system-architecture.rst:136 msgid "Basic network meters" msgstr "" #: ../telemetry-system-architecture.rst:138 msgid "Firewall-as-a-Service (FWaaS) meters" msgstr "" #: ../telemetry-system-architecture.rst:140 msgid "Load-Balancer-as-a-Service (LBaaS) meters" msgstr "" #: ../telemetry-system-architecture.rst:142 msgid "VPN-as-a-Service (VPNaaS) meters" msgstr "" #: ../telemetry-system-architecture.rst:144 msgid "SDN controller meters:" msgstr "" #: ../telemetry-system-architecture.rst:146 msgid "`OpenDaylight `__" msgstr "" #: ../telemetry-system-architecture.rst:148 msgid "`OpenContrail `__" msgstr "" #: ../telemetry-system-architecture.rst:154 msgid "Users, roles, and tenants" msgstr "" #: ../telemetry-system-architecture.rst:156 msgid "" "This service of OpenStack uses OpenStack Identity for authenticating and " "authorizing users. The required configuration options are listed in the " "`Telemetry section `__ in the OpenStack Configuration Reference." msgstr "" #: ../telemetry-system-architecture.rst:162 msgid "" "The system uses two roles:``admin`` and ``non-admin``. 
The authorization " "happens before processing each API request. The amount of returned data " "depends on the requestor's role." msgstr "" #: ../telemetry-system-architecture.rst:166 msgid "" "The creation of alarm definitions also depends heavily on the role of the " "user who initiated the action. Further details about :ref:`telemetry-" "alarms` handling can be found in this guide." msgstr "" #: ../telemetry-troubleshooting-guide.rst:2 msgid "Troubleshoot Telemetry" msgstr "" #: ../telemetry-troubleshooting-guide.rst:5 msgid "Logging in Telemetry" msgstr "" #: ../telemetry-troubleshooting-guide.rst:7 msgid "" "The Telemetry service has log settings similar to those of the other " "OpenStack services. Multiple options are available to change the target of " "logging, the format of the log entries, and the log levels." msgstr "" #: ../telemetry-troubleshooting-guide.rst:11 msgid "" "The log settings can be changed in ``ceilometer.conf``. The configuration " "options are listed in the logging configuration options table in the " "`Telemetry section `__ in the OpenStack Configuration Reference." msgstr "" #: ../telemetry-troubleshooting-guide.rst:17 msgid "" "By default, ``stderr`` is used as standard output for the log messages. It " "can be changed to either a log file or syslog. The ``debug`` and ``verbose`` " "options are also set to false in the default settings; the default log " "levels of the corresponding modules can be found in the table referred to " "above." msgstr "" #: ../telemetry-troubleshooting-guide.rst:25 msgid "Recommended order of starting services" msgstr "" #: ../telemetry-troubleshooting-guide.rst:27 msgid "" "As can be seen in `Bug 1355809 `__, the wrong ordering of service startup can result in data " "loss."
msgstr "" #: ../telemetry-troubleshooting-guide.rst:31 msgid "" "When the services are started for the first time, or after a restart of the " "message queue service, it takes time for the ``ceilometer-collector`` " "service to establish the connection and join or rejoin the configured " "exchanges. Therefore, if the ``ceilometer-agent-compute``, ``ceilometer-" "agent-central``, and the ``ceilometer-agent-notification`` services are " "started before the ``ceilometer-collector`` service, the ``ceilometer-" "collector`` service may lose some messages while connecting to the message " "queue service." msgstr "" #: ../telemetry-troubleshooting-guide.rst:40 msgid "" "This issue is more likely to occur when the polling interval is set to a " "relatively short period. In order to avoid this situation, the recommended " "order of service startup is to start or restart the ``ceilometer-" "collector`` service after the message queue. All the other Telemetry " "services should be started or restarted afterwards, and the ``ceilometer-" "agent-compute`` should be the last in the sequence, as this component emits " "metering messages in order to send the samples to the collector." msgstr "" #: ../telemetry-troubleshooting-guide.rst:51 msgid "Notification agent" msgstr "" #: ../telemetry-troubleshooting-guide.rst:53 msgid "" "In the Icehouse release of OpenStack, a new service was introduced to " "consume the notifications coming from other OpenStack services." msgstr "" #: ../telemetry-troubleshooting-guide.rst:57 msgid "" "If the ``ceilometer-agent-notification`` service is not installed and " "started, samples originating from notifications will not be generated. If " "notification-based samples are missing, check the state of this service and " "the Telemetry log file first."
msgstr "" #: ../telemetry-troubleshooting-guide.rst:62 msgid "" "For the list of meters that originate from notifications, see the " "`Telemetry Measurements Reference `__." msgstr "" #: ../telemetry-troubleshooting-guide.rst:68 msgid "Recommended ``auth_url`` to be used" msgstr "" #: ../telemetry-troubleshooting-guide.rst:70 msgid "" "When using the Telemetry command-line client, the credentials and the " "``os_auth_url`` have to be set in order for the client to authenticate " "against OpenStack Identity. For further details about the credentials that " "have to be provided, see the `Telemetry Python API `__." msgstr "" #: ../telemetry-troubleshooting-guide.rst:76 msgid "" "The service catalog provided by OpenStack Identity contains the URLs that " "are available for authentication. The URLs have different ``port``\\s, based " "on whether the type of the given URL is ``public``, ``internal`` or " "``admin``." msgstr "" #: ../telemetry-troubleshooting-guide.rst:81 msgid "" "OpenStack Identity is about to change API version from v2 to v3. The " "``adminURL`` endpoint (which is available via port ``35357``) supports only " "the v3 version, while the other two support both." msgstr "" #: ../telemetry-troubleshooting-guide.rst:85 msgid "" "The Telemetry command line client is not adapted to the v3 version of the " "OpenStack Identity API. If the ``adminURL`` is used as ``os_auth_url``, the :" "command:`ceilometer` command results in the following error message:" msgstr "" #: ../telemetry-troubleshooting-guide.rst:96 msgid "" "Therefore, when specifying the ``os_auth_url`` parameter on the command line " "or by using an environment variable, use the ``internalURL`` or " "``publicURL``." msgstr "" #: ../telemetry-troubleshooting-guide.rst:100 msgid "" "For more details, check the bug report `Bug 1351841 `__."
msgstr "" #: ../telemetry.rst:5 msgid "Telemetry" msgstr "" #: ../telemetry.rst:7 msgid "" "Even in the cloud industry, providers must use a multi-step process for " "billing. The required steps to bill for usage in a cloud environment are " "metering, rating, and billing. Because the provider's requirements may be " "far too specific for a shared solution, rating and billing solutions cannot " "be designed in a common module that satisfies all. Providing users with " "measurements on cloud services is required to meet the ``measured service`` " "definition of cloud computing." msgstr "" #: ../telemetry.rst:15 msgid "" "The Telemetry service was originally designed to support billing systems for " "OpenStack cloud resources. This project only covers the metering portion of " "the required processing for billing. This service collects information about " "the system and stores it in the form of samples in order to provide data " "about anything that can be billed." msgstr "" #: ../telemetry.rst:21 msgid "" "In addition to system measurements, the Telemetry service also captures " "event notifications triggered when various actions are executed in the " "OpenStack system. This data is captured as Events and stored alongside " "metering data." msgstr "" #: ../telemetry.rst:26 msgid "" "The list of meters is continuously growing, which makes it possible to use " "the data collected by Telemetry for purposes other than billing. For " "example, the autoscaling feature in the Orchestration service can be " "triggered by alarms that this module sets, with the notification delivered " "from within Telemetry." msgstr "" #: ../telemetry.rst:32 msgid "" "The sections in this document contain information about the architecture and " "usage of Telemetry. The first section contains a brief summary of the " "system architecture used in a typical OpenStack deployment. The second " "section describes the data collection mechanisms. 
You can also read about " "alarming to understand how alarm definitions can be posted to Telemetry and " "what actions can happen if an alarm is raised. The last section contains a " "troubleshooting guide, which mentions error situations and possible " "solutions to the problems." msgstr "" #: ../telemetry.rst:42 msgid "" "You can retrieve the collected samples in three different ways: with the " "REST API, with the command-line interface, or with the Metering tab on an " "OpenStack dashboard." msgstr "" #: ../ts-HTTP-bad-req-in-cinder-vol-log.rst:3 msgid "HTTP bad request in cinder volume log" msgstr "" #: ../ts-HTTP-bad-req-in-cinder-vol-log.rst:8 msgid "These errors appear in the ``cinder-volume.log`` file:" msgstr "" #: ../ts-HTTP-bad-req-in-cinder-vol-log.rst:45 msgid "" "You need to update your copy of the ``hp_3par_fc.py`` driver which contains " "the synchronization code." msgstr "" #: ../ts-duplicate-3par-host.rst:3 msgid "Duplicate 3PAR host" msgstr "" #: ../ts-duplicate-3par-host.rst:8 msgid "" "This error may be caused by a volume being exported outside of OpenStack " "using a host name different from the system name that OpenStack expects. " "This error could be displayed with the :term:`IQN` if the host was exported " "using iSCSI:" msgstr "" #: ../ts-duplicate-3par-host.rst:22 msgid "" "Change the 3PAR host name to match the one that OpenStack expects. The 3PAR " "host constructed by the driver uses just the local host name, not the fully " "qualified domain name (FQDN) of the compute host. For example, if the FQDN " "was *myhost.example.com*, just *myhost* would be used as the 3PAR host name. " "IP addresses are not allowed as host names on the 3PAR storage server." 
msgstr "" #: ../ts-eql-volume-size.rst:3 msgid "" "Addressing discrepancies in reported volume sizes for EqualLogic storage" msgstr "" #: ../ts-eql-volume-size.rst:8 msgid "" "The actual volume size in EqualLogic (EQL) storage and the image size in " "the Image service both differ from what is reported to the OpenStack " "database. This could lead to confusion if a user is creating volumes from " "an image that was uploaded from an EQL volume (through the Image service). " "The image size is slightly larger than the target volume size; this is " "because EQL size reporting accounts for additional storage used by EQL for " "internal volume metadata." msgstr "" #: ../ts-eql-volume-size.rst:16 msgid "To reproduce the issue, follow the steps in the following procedure." msgstr "" #: ../ts-eql-volume-size.rst:18 msgid "" "This procedure assumes that the EQL array is provisioned, and that " "appropriate configuration settings have been included in ``/etc/cinder/" "cinder.conf`` to connect to the EQL array." msgstr "" #: ../ts-eql-volume-size.rst:22 msgid "" "Create a new volume. Note the ID and size of the volume. In the following " "example, the ID and size are ``74cf9c04-4543-47ae-a937-a9b7c6c921e7`` and " "``1``, respectively:" msgstr "" #: ../ts-eql-volume-size.rst:48 msgid "" "Verify the volume size on the EQL array by using its command-line interface." msgstr "" #: ../ts-eql-volume-size.rst:51 msgid "" "The actual size (``VolReserve``) is 1.01 GB. The EQL Group Manager should " "also report a volume size of 1.01 GB:" msgstr "" #: ../ts-eql-volume-size.rst:81 msgid "Create a new image from this volume:" msgstr "" #: ../ts-eql-volume-size.rst:103 msgid "" "When you uploaded the volume in the previous step, the Image service " "reported the volume's size as ``1`` (GB). 
However, when using :command:" "`glance image-list` to list the image, the displayed size is 1085276160 " "bytes, or roughly 1.01 GB:" msgstr "" #: ../ts-eql-volume-size.rst:109 msgid "Container Format" msgstr "" #: ../ts-eql-volume-size.rst:109 msgid "Disk Format" msgstr "" #: ../ts-eql-volume-size.rst:109 msgid "Size" msgstr "" #: ../ts-eql-volume-size.rst:112 msgid "*1085276160*" msgstr "" #: ../ts-eql-volume-size.rst:112 msgid "bare" msgstr "" #: ../ts-eql-volume-size.rst:112 msgid "image\\_from\\_volume1" msgstr "" #: ../ts-eql-volume-size.rst:117 msgid "" "Create a new volume using the previous image (``image_id " "3020a21d-ba37-4495-8899-07fc201161b9`` in this example) as the source. Set " "the target volume size to 1 GB; this is the size reported by the ``cinder`` " "tool when you uploaded the volume to the Image service:" msgstr "" #: ../ts-eql-volume-size.rst:130 msgid "" "The attempt to create a new volume based on the size reported by the " "``cinder`` tool will then fail." msgstr "" #: ../ts-eql-volume-size.rst:136 msgid "" "To work around this problem, increase the target size of the new image to " "the next whole number. In the problem example, you created a 1 GB volume to " "be used as a volume-backed image, so a new volume using this volume-backed " "image should use a size of 2 GB:" msgstr "" #: ../ts-eql-volume-size.rst:167 msgid "" "The dashboard suggests a suitable size when you create a new volume based on " "a volume-backed image." msgstr "" #: ../ts-eql-volume-size.rst:170 msgid "You can then check this new volume into the EQL array:" msgstr "" #: ../ts-failed-attach-vol-after-detach.rst:3 msgid "Failed to attach volume after detaching" msgstr "" #: ../ts-failed-attach-vol-after-detach.rst:8 msgid "Failed to attach a volume after detaching the same volume." msgstr "" #: ../ts-failed-attach-vol-after-detach.rst:13 msgid "" "You must change the device name on the :command:`nova-attach` command. 
The " "VM might not clean up after a :command:`nova-detach` command runs. This " "example shows how the :command:`nova-attach` command fails when you use the " "``vdb``, ``vdc``, or ``vdd`` device names:" msgstr "" #: ../ts-failed-attach-vol-after-detach.rst:33 msgid "" "You might also have this problem after attaching and detaching the same " "volume from the same VM with the same mount point multiple times. In this " "case, restart the KVM host." msgstr "" #: ../ts-failed-attach-vol-no-sysfsutils.rst:3 msgid "Failed to attach volume, systool is not installed" msgstr "" #: ../ts-failed-attach-vol-no-sysfsutils.rst:8 msgid "" "This warning and error occur if you do not have the required ``sysfsutils`` " "package installed on the compute node:" msgstr "" #: ../ts-failed-attach-vol-no-sysfsutils.rst:25 msgid "" "Run the following command on the compute node to install the ``sysfsutils`` " "packages:" msgstr "" #: ../ts-failed-connect-vol-FC-SAN.rst:3 msgid "Failed to connect volume in FC SAN" msgstr "" #: ../ts-failed-connect-vol-FC-SAN.rst:8 msgid "" "The compute node failed to connect to a volume in a Fibre Channel (FC) SAN " "configuration. The WWN may not be zoned correctly in your FC SAN that links " "the compute host to the storage array:" msgstr "" #: ../ts-failed-connect-vol-FC-SAN.rst:28 msgid "" "The network administrator must configure the FC SAN fabric by correctly " "zoning the WWN (port names) from your compute node HBAs." msgstr "" #: ../ts_cinder_config.rst:3 msgid "Troubleshoot the Block Storage configuration" msgstr "" #: ../ts_cinder_config.rst:5 msgid "" "Most Block Storage errors are caused by incorrect volume configurations that " "result in volume creation failures. 
To resolve these failures, review these " "logs:" msgstr "" #: ../ts_cinder_config.rst:9 msgid "``cinder-api`` log (``/var/log/cinder/api.log``)" msgstr "" #: ../ts_cinder_config.rst:11 msgid "``cinder-volume`` log (``/var/log/cinder/volume.log``)" msgstr "" #: ../ts_cinder_config.rst:13 msgid "" "The ``cinder-api`` log is useful for determining if you have endpoint or " "connectivity issues. If you send a request to create a volume and it fails, " "review the ``cinder-api`` log to determine whether the request made it to " "the Block Storage service. If the request is logged and you see no errors or " "tracebacks, check the ``cinder-volume`` log for errors or tracebacks." msgstr "" #: ../ts_cinder_config.rst:22 msgid "Create commands are listed in the ``cinder-api`` log." msgstr "" #: ../ts_cinder_config.rst:24 msgid "" "These entries in the ``cinder.openstack.common.log`` file can be used to " "assist in troubleshooting your Block Storage configuration." msgstr "" #: ../ts_cinder_config.rst:99 msgid "" "These common issues might occur during configuration, and the following " "potential solutions describe how to address the issues." msgstr "" #: ../ts_cinder_config.rst:103 msgid "Issues with ``state_path`` and ``volumes_dir`` settings" msgstr "" #: ../ts_cinder_config.rst:108 msgid "" "The OpenStack Block Storage service uses ``tgtd`` as the default iSCSI " "helper and implements persistent targets. This means that in the case of a " "``tgt`` restart, or even a node reboot, your existing volumes on that node " "will be restored automatically with their original :term:`IQN`." msgstr "" #: ../ts_cinder_config.rst:113 msgid "" "By default, Block Storage uses a ``state_path`` variable, which, if you " "install with yum or APT, should be set to ``/var/lib/cinder/``. The next " "part is the ``volumes_dir`` variable; by default, this appends a ``volumes`` " "directory to the ``state_path``. The result is the file tree ``/var/lib/" "cinder/volumes/``.
msgstr "" #: ../ts_cinder_config.rst:122 msgid "" "In order to ensure nodes are restored to their original :term:`IQN`, the " "iSCSI target information needs to be stored in a file on creation that can " "be queried in case of restart of the ``tgt daemon``. While the installer " "should handle all this, it can go wrong." msgstr "" #: ../ts_cinder_config.rst:127 msgid "" "If you have trouble creating volumes and this directory does not exist you " "should see an error message in the ``cinder-volume`` log indicating that the " "``volumes_dir`` does not exist, and it should provide information about " "which path it was looking for." msgstr "" #: ../ts_cinder_config.rst:133 msgid "The persistent tgt include file" msgstr "" #: ../ts_cinder_config.rst:138 msgid "" "The Block Storage service may have issues locating the persistent ``tgt " "include`` file. Along with the ``volumes_dir`` option, the iSCSI target " "driver also needs to be configured to look in the correct place for the " "persistent ``tgt include `` file. This is an entry in the ``/etc/tgt/conf." "d`` file that should have been set during the OpenStack installation." msgstr "" #: ../ts_cinder_config.rst:148 msgid "" "If issues occur, verify that you have a ``/etc/tgt/conf.d/cinder.conf`` " "file. If the file is not present, create it with:" msgstr "" #: ../ts_cinder_config.rst:156 msgid "No sign of attach call in the ``cinder-api`` log" msgstr "" #: ../ts_cinder_config.rst:161 msgid "" "The attach call is unavailble, or not appearing in the ``cinder-api`` log." msgstr "" #: ../ts_cinder_config.rst:166 msgid "" "Adjust the ``nova.conf`` file, and make sure that your ``nova.conf`` has " "this entry:" msgstr "" #: ../ts_cinder_config.rst:174 msgid "Failed to create iscsi target error in the ``cinder-volume.log`` file" msgstr "" #: ../ts_cinder_config.rst:186 msgid "" "You might see this error in ``cinder-volume.log`` after trying to create a " "volume that is 1 GB." 
msgstr "" #: ../ts_cinder_config.rst:192 msgid "" "To fix this issue, change the content of the ``/etc/tgt/targets.conf`` file " "from ``include /etc/tgt/conf.d/*.conf`` to ``include /etc/tgt/conf.d/" "cinder_tgt.conf``, as follows:" msgstr "" #: ../ts_cinder_config.rst:202 msgid "" "Restart ``tgt`` and ``cinder-*`` services, so they pick up the new " "configuration." msgstr "" #: ../ts_multipath_warn.rst:3 msgid "Multipath call failed exit" msgstr "" #: ../ts_multipath_warn.rst:8 msgid "" "Multipath call failed exit. This warning occurs in the Compute log if you do " "not have the optional ``multipath-tools`` package installed on the compute " "node. This is an optional package and the volume attachment does work " "without the multipath tools installed. If the ``multipath-tools`` package is " "installed on the compute node, it is used to perform the volume attachment. " "The IDs in your message are unique to your system." msgstr "" #: ../ts_multipath_warn.rst:25 msgid "" "Run the following command on the compute node to install the ``multipath-" "tools`` packages." msgstr "" #: ../ts_no_emulator_x86_64.rst:3 msgid "Cannot find suitable emulator for x86_64" msgstr "" #: ../ts_no_emulator_x86_64.rst:8 msgid "" "When you attempt to create a VM, the error shows the VM is in the ``BUILD`` " "then ``ERROR`` state." msgstr "" #: ../ts_no_emulator_x86_64.rst:14 msgid "" "On the KVM host, run :command:`cat /proc/cpuinfo`. Make sure the ``vmx`` or " "``svm`` flags are set." msgstr "" #: ../ts_no_emulator_x86_64.rst:17 msgid "" "Follow the instructions in the `Enable KVM `__ section in the " "OpenStack Configuration Reference to enable hardware virtualization support " "in your BIOS." msgstr "" #: ../ts_non_existent_host.rst:3 msgid "Non-existent host" msgstr "" #: ../ts_non_existent_host.rst:8 msgid "" "This error could be caused by a volume being exported outside of OpenStack " "using a host name different from the system name that OpenStack expects. 
" "This error could be displayed with the :term:`IQN` if the host was exported " "using iSCSI." msgstr "" #: ../ts_non_existent_host.rst:21 msgid "" "Host names constructed by the driver use just the local host name, not the " "fully qualified domain name (FQDN) of the Compute host. For example, if the " "FQDN was **myhost.example.com**, just **myhost** would be used as the 3PAR " "host name. IP addresses are not allowed as host names on the 3PAR storage " "server." msgstr "" #: ../ts_non_existent_vlun.rst:3 msgid "Non-existent VLUN" msgstr "" #: ../ts_non_existent_vlun.rst:8 msgid "" "This error occurs if the 3PAR host exists with the correct host name that " "the OpenStack Block Storage drivers expect but the volume was created in a " "different Domain." msgstr "" #: ../ts_non_existent_vlun.rst:20 msgid "" "The ``hpe3par_domain`` configuration items either need to be updated to use " "the domain the 3PAR host currently resides in, or the 3PAR host needs to be " "moved to the domain that the volume was created in." msgstr "" #: ../ts_vol_attach_miss_sg_scan.rst:3 msgid "Failed to Attach Volume, Missing sg_scan" msgstr "" #: ../ts_vol_attach_miss_sg_scan.rst:8 msgid "" "Failed to attach volume to an instance, ``sg_scan`` file not found. This " "warning and error occur when the sg3-utils package is not installed on the " "compute node. The IDs in your message are unique to your system:" msgstr "" #: ../ts_vol_attach_miss_sg_scan.rst:24 msgid "" "Run this command on the compute node to install the ``sg3-utils`` package:" msgstr ""