# SOME DESCRIPTIVE TITLE. # Copyright (C) 2015-2016, OpenStack contributors # This file is distributed under the same license as the Networking Guide package. # FIRST AUTHOR , YEAR. # #, fuzzy msgid "" msgstr "" "Project-Id-Version: Networking Guide 0.9\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2017-03-09 14:01+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" #: ../config-address-scopes.rst:5 msgid "Address scopes" msgstr "" #: ../config-address-scopes.rst:7 msgid "" "Address scopes build from subnet pools. While subnet pools provide a " "mechanism for controlling the allocation of addresses to subnets, address " "scopes show where addresses can be routed between networks, preventing the " "use of overlapping addresses in any two subnets. Because all addresses " "allocated in the address scope do not overlap, neutron routers do not NAT " "between your tenants' network and your external network. As long as the " "addresses within an address scope match, the Networking service performs " "simple routing between networks." msgstr "" #: ../config-address-scopes.rst:17 msgid "Accessing address scopes" msgstr "" #: ../config-address-scopes.rst:19 msgid "" "Anyone with access to the Networking service can create their own address " "scopes. However, network administrators can create shared address scopes, " "allowing other projects to create networks within that address scope." msgstr "" #: ../config-address-scopes.rst:23 msgid "" "Access to addresses in a scope are managed through subnet pools. Subnet " "pools can either be created in an address scope, or updated to belong to an " "address scope." msgstr "" #: ../config-address-scopes.rst:27 msgid "" "With subnet pools, all addresses in use within the address scope are unique " "from the point of view of the address scope owner. Therefore, add more than " "one subnet pool to an address scope if the pools have different owners, " "allowing for delegation of parts of the address scope. Delegation prevents " "address overlap across the whole scope. Otherwise, you receive an error if " "two pools have the same address ranges." msgstr "" #: ../config-address-scopes.rst:35 msgid "" "Each router interface is associated with an address scope by looking at " "subnets connected to the network. When a router connects to an external " "network with matching address scopes, network traffic routes between without " "Network address translation (NAT). The router marks all traffic connections " "originating from each interface with its corresponding address scope. If " "traffic leaves an interface in the wrong scope, the router blocks the " "traffic." msgstr "" #: ../config-address-scopes.rst:44 msgid "Backwards compatibility" msgstr "" #: ../config-address-scopes.rst:46 msgid "" "Networks created before the Mitaka release do not contain explicitly named " "address scopes, unless the network contains subnets from a subnet pool that " "belongs to a created or updated address scope. The Networking service " "preserves backwards compatibility with pre-Mitaka networks through special " "address scope properties so that these networks can perform advanced routing:" msgstr "" #: ../config-address-scopes.rst:53 msgid "Unlimited address overlap is allowed." msgstr "" #: ../config-address-scopes.rst:54 msgid "" "Neutron routers, by default, will NAT traffic from internal networks to " "external networks." msgstr "" #: ../config-address-scopes.rst:56 msgid "" "Pre-Mitaka address scopes are not visible through the API. You cannot list " "address scopes or show details. Scopes exist implicitly as a catch-all for " "addresses that are not explicitly scoped." msgstr "" #: ../config-address-scopes.rst:61 msgid "Create shared address scopes as an administrative user" msgstr "" #: ../config-address-scopes.rst:63 msgid "" "This section shows how to set up shared address scopes to allow simple " "routing for project networks with the same subnet pools." msgstr "" #: ../config-address-scopes.rst:66 msgid "" "Irrelevant fields have been trimmed from the output of these commands for " "brevity." msgstr "" #: ../config-address-scopes.rst:69 msgid "Create IPv6 and IPv4 address scopes:" msgstr "" #: ../config-address-scopes.rst:97 msgid "" "Create subnet pools specifying the name (or UUID) of the address scope that " "the subnet pool belongs to. If you have existing subnet pools, use the " "``subnetpool-update`` command to put them in a new address scope:" msgstr "" #: ../config-address-scopes.rst:138 msgid "" "Make sure that subnets on external networks are created from the subnet " "pools created above:" msgstr "" #: ../config-address-scopes.rst:174 msgid "Routing with address scopes for non-privileged users" msgstr "" #: ../config-address-scopes.rst:176 msgid "" "This section shows how non-privileged users can use address scopes to route " "straight to an external network without NAT." msgstr "" #: ../config-address-scopes.rst:179 msgid "Create a couple of networks to host subnets:" msgstr "" #: ../config-address-scopes.rst:205 msgid "Create a subnet not associated with a subnet pool or an address scope:" msgstr "" #: ../config-address-scopes.rst:238 msgid "" "Create a subnet using a subnet pool associated with a address scope from an " "external network:" msgstr "" #: ../config-address-scopes.rst:272 msgid "" "By creating subnets from scoped subnet pools, the network is associated with " "the address scope." msgstr "" #: ../config-address-scopes.rst:289 msgid "" "Connect a router to each of the tenant subnets that have been created, for " "example, using a router called ``router1``:" msgstr "" #: ../config-address-scopes.rst:304 msgid "Checking connectivity" msgstr "" #: ../config-address-scopes.rst:306 msgid "" "This example shows how to check the connectivity between networks with " "address scopes." msgstr "" #: ../config-address-scopes.rst:309 msgid "" "Launch two instances, ``instance1`` on ``network1`` and ``instance2`` on " "``network2``. Associate a floating IP address to both instances." msgstr "" #: ../config-address-scopes.rst:313 msgid "Adjust security groups to allow pings and SSH (both IPv4 and IPv6):" msgstr "" #: ../config-address-scopes.rst:325 msgid "" "Regardless of address scopes, the floating IPs can be pinged from the " "external network:" msgstr "" #: ../config-address-scopes.rst:335 msgid "" "You can now ping ``instance2`` directly because ``instance2`` shares the " "same address scope as the external network:" msgstr "" #: ../config-address-scopes.rst:338 msgid "" "BGP routing can be used to automatically set up a static route for your " "instances." msgstr "" #: ../config-address-scopes.rst:353 msgid "" "You cannot ping ``instance1`` directly because the address scopes do not " "match:" msgstr "" #: ../config-address-scopes.rst:368 msgid "" "If the address scopes match between networks then pings and other traffic " "route directly through. If the scopes do not match between networks, the " "router either drops the traffic or applies NAT to cross scope boundaries." msgstr "" #: ../config-auto-allocation.rst:5 msgid "Automatic allocation of network topologies" msgstr "" #: ../config-auto-allocation.rst:7 msgid "" "The auto-allocation feature introduced in Mitaka simplifies the procedure of " "setting up an external connectivity for end-users, and is also known as " "**Get Me A Network**." msgstr "" #: ../config-auto-allocation.rst:11 msgid "" "The operator must create a default external network and default subnetpools " "(one for IPv4, or one for IPv6, or one of each). Once these are in place, " "users can get their auto-allocated topologies with a single command." msgstr "" #: ../config-auto-allocation.rst:16 msgid "Enabling the deployment for auto-allocation" msgstr "" #: ../config-auto-allocation.rst:18 msgid "" "To use this feature, the neutron service must have the following extensions " "enabled:" msgstr "" #: ../config-auto-allocation.rst:21 msgid "``auto-allocated-topology``" msgstr "" #: ../config-auto-allocation.rst:22 msgid "``subnet_allocation``" msgstr "" #: ../config-auto-allocation.rst:23 msgid "``external-net``" msgstr "" #: ../config-auto-allocation.rst:24 msgid "``router``" msgstr "" #: ../config-auto-allocation.rst:26 msgid "" "Before the end-user can use the auto-allocation feature, the operator must " "create the resources that will be used for the auto-allocated network " "topology creation. To perform this task, proceed with the following steps:" msgstr "" #: ../config-auto-allocation.rst:30 msgid "Set up a default external network" msgstr "" #: ../config-auto-allocation.rst:32 msgid "" "Setting up an external network is described in `OpenStack Administrator " "Guide `_. Assuming the external network to be used for the auto-allocation " "feature is named ``public``, make it the default external network with the " "following command:" msgstr "" #: ../config-auto-allocation.rst:43 msgid "Create default subnetpools" msgstr "" #: ../config-auto-allocation.rst:45 msgid "" "The auto-allocation feature requires at least one default subnetpool. One " "for IPv4, or one for IPv6, or one of each." msgstr "" #: ../config-auto-allocation.rst:94 msgid "Get Me A Network" msgstr "" #: ../config-auto-allocation.rst:96 msgid "" "In a deployment where the operator has set up the resources as described " "above, users can get their auto-allocated network topology as follows:" msgstr "" #: ../config-auto-allocation.rst:109 msgid "" "Operators (and users with admin role) can get the auto-allocated topology " "for a tenant by specifying the tenant ID:" msgstr "" #: ../config-auto-allocation.rst:122 msgid "" "The ID returned by this command is a network which can be used for booting a " "VM." msgstr "" #: ../config-auto-allocation.rst:130 msgid "The auto-allocated topology for a user never changes." msgstr "" #: ../config-auto-allocation.rst:133 msgid "Validating the requirements for auto-allocation" msgstr "" #: ../config-auto-allocation.rst:135 msgid "" "To validate that the required resources are correctly set up for auto-" "allocation, use the ``--dry-run`` option:" msgstr "" #: ../config-auto-allocation.rst:157 msgid "" "The validation option behaves identically for all users. However, it is " "considered primarily an admin utility since it is the operator who must set " "up the requirements." msgstr "" #: ../config-auto-allocation.rst:162 msgid "Project resources created by auto-allocation" msgstr "" #: ../config-auto-allocation.rst:164 msgid "" "The auto-allocation feature creates one network topology in every project " "where it is used. The auto-allocated network topology for a project contains " "the following resources:" msgstr "" #: ../config-auto-allocation.rst:169 msgid "Name" msgstr "" # #-#-#-#-# config-auto-allocation.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-dns-int.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-auto-allocation.rst:169 ../config-dns-int.rst:24 msgid "Resource" msgstr "" #: ../config-auto-allocation.rst:171 msgid "``auto_allocated_network``" msgstr "" #: ../config-auto-allocation.rst:171 msgid "network" msgstr "" #: ../config-auto-allocation.rst:173 msgid "``auto_allocated_subnet_v4``" msgstr "" #: ../config-auto-allocation.rst:173 msgid "subnet (IPv4)" msgstr "" #: ../config-auto-allocation.rst:175 msgid "``auto_allocated_subnet_v6``" msgstr "" #: ../config-auto-allocation.rst:175 msgid "subnet (IPv6)" msgstr "" #: ../config-auto-allocation.rst:177 msgid "``auto_allocated_router``" msgstr "" #: ../config-auto-allocation.rst:177 msgid "router" msgstr "" #: ../config-az.rst:5 msgid "Availability zones" msgstr "" #: ../config-az.rst:7 msgid "" "An availability zone groups network nodes that run services like DHCP, L3, " "FW, and others. It is defined as an agent's attribute on the network node. " "This allows users to associate an availability zone with their resources so " "that the resources get high availability." msgstr "" #: ../config-az.rst:14 msgid "Use case" msgstr "" #: ../config-az.rst:16 msgid "" "An availability zone is used to make network resources highly available. The " "operators group the nodes that are attached to different power sources under " "separate availability zones and configure scheduling for resources with high " "availability so that they are scheduled on different availability zones." msgstr "" #: ../config-az.rst:23 msgid "Required extensions" msgstr "" #: ../config-az.rst:25 msgid "" "The core plug-in must support the ``availability_zone`` extension. The core " "plug-in also must support the ``network_availability_zone`` extension to " "schedule a network according to availability zones. The ``Ml2Plugin`` " "supports it. The router service plug-in must support the " "``router_availability_zone`` extension to schedule a router according to the " "availability zones. The ``L3RouterPlugin`` supports it." msgstr "" #: ../config-az.rst:49 msgid "Availability zone of agents" msgstr "" #: ../config-az.rst:51 msgid "" "The ``availability_zone`` attribute can be defined in ``dhcp-agent`` and " "``l3-agent``. To define an availability zone for each agent, set the value " "into ``[AGENT]`` section of ``/etc/neutron/dhcp_agent.ini`` or ``/etc/" "neutron/l3_agent.ini``:" msgstr "" #: ../config-az.rst:61 msgid "To confirm the agent's availability zone:" msgstr "" #: ../config-az.rst:124 msgid "Availability zone related attributes" msgstr "" #: ../config-az.rst:126 msgid "The following attributes are added into network and router:" msgstr "" #: ../config-az.rst:132 msgid "Attribute name" msgstr "" #: ../config-az.rst:133 msgid "Access" msgstr "" #: ../config-az.rst:134 msgid "Required" msgstr "" #: ../config-az.rst:135 msgid "Input type" msgstr "" # #-#-#-#-# config-az.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-dhcp-ha.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-ipv6.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-az.rst:136 ../config-dhcp-ha.rst:45 ../config-ipv6.rst:92 msgid "Description" msgstr "" #: ../config-az.rst:138 msgid "availability_zone_hints" msgstr "" #: ../config-az.rst:139 msgid "RW(POST only)" msgstr "" # #-#-#-#-# config-az.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-dns-int.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-az.rst:140 ../config-dns-int.rst:29 ../config-dns-int.rst:31 msgid "No" msgstr "" #: ../config-az.rst:141 ../config-az.rst:147 msgid "list of string" msgstr "" #: ../config-az.rst:142 msgid "availability zone candidates for the resource" msgstr "" #: ../config-az.rst:144 msgid "availability_zones" msgstr "" #: ../config-az.rst:145 msgid "RO" msgstr "" #: ../config-az.rst:146 msgid "N/A" msgstr "" #: ../config-az.rst:148 msgid "availability zones for the resource" msgstr "" #: ../config-az.rst:150 msgid "" "Use ``availability_zone_hints`` to specify the zone in which the resource is " "hosted:" msgstr "" #: ../config-az.rst:202 msgid "" "Availability zone is selected from ``default_availability_zones`` in ``/etc/" "neutron/neutron.conf`` if a resource is created without " "``availability_zone_hints``:" msgstr "" #: ../config-az.rst:210 msgid "To confirm the availability zone defined by the system:" msgstr "" #: ../config-az.rst:224 msgid "" "Look at the ``availability_zones`` attribute of each resource to confirm in " "which zone the resource is hosted:" msgstr "" #: ../config-az.rst:275 msgid "" "The ``availability_zones`` attribute does not have a value until the " "resource is scheduled. Once the Networking service schedules the resource to " "zones according to ``availability_zone_hints``, ``availability_zones`` shows " "in which zone the resource is hosted practically. The ``availability_zones`` " "may not match ``availability_zone_hints``. For example, even if you specify " "a zone with ``availability_zone_hints``, all agents of the zone may be dead " "before the resource is scheduled. In general, they should match, unless " "there are failures or there is no capacity left in the zone requested." msgstr "" #: ../config-az.rst:287 msgid "Availability zone aware scheduler" msgstr "" #: ../config-az.rst:290 msgid "Network scheduler" msgstr "" #: ../config-az.rst:292 msgid "" "Set ``AZAwareWeightScheduler`` to ``network_scheduler_driver`` in ``/etc/" "neutron/neutron.conf`` so that the Networking service schedules a network " "according to the availability zone:" msgstr "" #: ../config-az.rst:301 msgid "" "The Networking service schedules a network to one of the agents within the " "selected zone as with ``WeightScheduler``. In this case, scheduler refers to " "``dhcp_load_type`` as well." msgstr "" #: ../config-az.rst:307 msgid "Router scheduler" msgstr "" #: ../config-az.rst:309 msgid "" "Set ``AZLeastRoutersScheduler`` to ``router_scheduler_driver`` in file ``/" "etc/neutron/neutron.conf`` so that the Networking service schedules a router " "according to the availability zone:" msgstr "" #: ../config-az.rst:317 msgid "" "The Networking service schedules a router to one of the agents within the " "selected zone as with ``LeastRouterScheduler``." msgstr "" #: ../config-az.rst:322 msgid "Achieving high availability with availability zone" msgstr "" #: ../config-az.rst:324 msgid "" "Although, the Networking service provides high availability for routers and " "high availability and fault tolerance for networks' DHCP services, " "availability zones provide an extra layer of protection by segmenting a " "Networking service deployment in isolated failure domains. By deploying HA " "nodes across different availability zones, it is guaranteed that network " "services remain available in face of zone-wide failures that affect the " "deployment." msgstr "" #: ../config-az.rst:331 msgid "" "This section explains how to get high availability with the availability " "zone for L3 and DHCP. You should naturally set above configuration options " "for the availability zone." msgstr "" #: ../config-az.rst:336 msgid "L3 high availability" msgstr "" #: ../config-az.rst:338 msgid "" "Set the following configuration options in file ``/etc/neutron/neutron." "conf`` so that you get L3 high availability." msgstr "" #: ../config-az.rst:347 msgid "" "HA routers are created on availability zones you selected when creating the " "router." msgstr "" #: ../config-az.rst:351 msgid "DHCP high availability" msgstr "" #: ../config-az.rst:353 msgid "" "Set the following configuration options in file ``/etc/neutron/neutron." "conf`` so that you get DHCP high availability." msgstr "" #: ../config-az.rst:360 msgid "" "DHCP services are created on availability zones you selected when creating " "the network." msgstr "" #: ../config-bgp-dynamic-routing.rst:5 msgid "BGP dynamic routing" msgstr "" #: ../config-bgp-dynamic-routing.rst:7 msgid "" "BGP dynamic routing enables advertisement of self-service (private) network " "prefixes to physical network devices that support BGP such as routers, thus " "removing the conventional dependency on static routes. The feature relies " "on :ref:`address scopes ` and requires knowledge of " "their operation for proper deployment." msgstr "" #: ../config-bgp-dynamic-routing.rst:13 msgid "" "BGP dynamic routing consists of a service plug-in and an agent. The service " "plug-in implements the Networking service extension and the agent manages " "BGP peering sessions. A cloud administrator creates and configures a BGP " "speaker using the CLI or API and manually schedules it to one or more hosts " "running the agent. Agents can reside on hosts with or without other " "Networking service agents. Prefix advertisement depends on the binding of " "external networks to a BGP speaker and the address scope of external and " "internal IP address ranges or subnets." msgstr "" #: ../config-bgp-dynamic-routing.rst:27 msgid "" "Although self-service networks generally use private IP address ranges " "(RFC1918) for IPv4 subnets, BGP dynamic routing can advertise any IPv4 " "address ranges." msgstr "" # #-#-#-#-# config-bgp-dynamic-routing.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-bgp-dynamic-routing.rst:32 ../config-macvtap.rst:72 #: ../deploy-lb-ha-vrrp.rst:48 ../deploy-lb-provider.rst:66 #: ../deploy-lb-selfservice.rst:53 ../deploy-ovs-ha-dvr.rst:60 #: ../deploy-ovs-ha-vrrp.rst:40 ../deploy-ovs-provider.rst:77 #: ../deploy-ovs-selfservice.rst:47 msgid "Example configuration" msgstr "" #: ../config-bgp-dynamic-routing.rst:34 msgid "The example configuration involves the following components:" msgstr "" #: ../config-bgp-dynamic-routing.rst:36 msgid "One BGP agent." msgstr "" #: ../config-bgp-dynamic-routing.rst:38 msgid "" "One address scope containing IP address range 203.0.113.0/24 for provider " "networks, and IP address ranges 10.0.1.0/24 and 10.0.2.0/24 for self-service " "networks." msgstr "" #: ../config-bgp-dynamic-routing.rst:42 msgid "One provider network using IP address range 203.0.113.0/24." msgstr "" #: ../config-bgp-dynamic-routing.rst:44 msgid "Three self-service networks." msgstr "" #: ../config-bgp-dynamic-routing.rst:46 msgid "" "Self-service networks 1 and 2 use IP address ranges inside of the address " "scope." msgstr "" #: ../config-bgp-dynamic-routing.rst:49 msgid "" "Self-service network 3 uses a unique IP address range 10.0.3.0/24 to " "demonstrate that the BGP speaker does not advertise prefixes outside of " "address scopes." msgstr "" #: ../config-bgp-dynamic-routing.rst:53 msgid "" "Three routers. Each router connects one self-service network to the provider " "network." msgstr "" #: ../config-bgp-dynamic-routing.rst:56 msgid "Router 1 contains IP addresses 203.0.113.11 and 10.0.1.1." msgstr "" #: ../config-bgp-dynamic-routing.rst:58 msgid "Router 2 contains IP addresses 203.0.113.12 and 10.0.2.1." msgstr "" #: ../config-bgp-dynamic-routing.rst:60 msgid "Router 3 contains IP addresses 203.0.113.13 and 10.0.3.1." msgstr "" #: ../config-bgp-dynamic-routing.rst:64 msgid "" "The example configuration assumes sufficient knowledge about the Networking " "service, routing, and BGP. For basic deployment of the Networking service, " "consult one of the :ref:`deploy`. For more information on BGP, see `RFC 4271 " "`_." msgstr "" # #-#-#-#-# config-bgp-dynamic-routing.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-bgp-dynamic-routing.rst:71 ../config-macvtap.rst:78 #: ../deploy-lb-ha-vrrp.rst:55 ../deploy-lb-provider.rst:72 #: ../deploy-lb-selfservice.rst:60 ../deploy-ovs-ha-dvr.rst:67 #: ../deploy-ovs-ha-vrrp.rst:47 ../deploy-ovs-provider.rst:83 #: ../deploy-ovs-selfservice.rst:54 msgid "Controller node" msgstr "" #: ../config-bgp-dynamic-routing.rst:73 msgid "" "In the ``neutron.conf`` file, enable the conventional layer-3 and BGP " "dynamic routing service plug-ins:" msgstr "" #: ../config-bgp-dynamic-routing.rst:82 msgid "Agent nodes" msgstr "" #: ../config-bgp-dynamic-routing.rst:84 msgid "In the ``bgp_dragent.ini`` file:" msgstr "" #: ../config-bgp-dynamic-routing.rst:86 msgid "Configure the driver." msgstr "" #: ../config-bgp-dynamic-routing.rst:95 msgid "The agent currently only supports the Ryu BGP driver." msgstr "" #: ../config-bgp-dynamic-routing.rst:97 msgid "Configure the router ID." msgstr "" #: ../config-bgp-dynamic-routing.rst:104 msgid "" "Replace ``ROUTER_ID`` with a suitable unique 32-bit number, typically an " "IPv4 address on the host running the agent. For example, 192.0.2.2." msgstr "" # #-#-#-#-# config-bgp-dynamic-routing.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-bgp-dynamic-routing.rst:108 ../config-macvtap.rst:144 #: ../deploy-lb-ha-vrrp.rst:129 ../deploy-lb-provider.rst:205 #: ../deploy-lb-selfservice.rst:172 ../deploy-ovs-ha-dvr.rst:141 #: ../deploy-ovs-ha-vrrp.rst:130 ../deploy-ovs-provider.rst:233 #: ../deploy-ovs-selfservice.rst:177 msgid "Verify service operation" msgstr "" # #-#-#-#-# config-bgp-dynamic-routing.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# migration-classic-to-l3ha.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-bgp-dynamic-routing.rst:110 ../config-macvtap.rst:146 #: ../deploy-lb-ha-vrrp.rst:131 ../deploy-lb-provider.rst:207 #: ../deploy-lb-selfservice.rst:174 ../deploy-ovs-ha-dvr.rst:143 #: ../deploy-ovs-ha-dvr.rst:299 ../deploy-ovs-ha-vrrp.rst:132 #: ../deploy-ovs-provider.rst:235 ../deploy-ovs-selfservice.rst:179 #: ../migration-classic-to-l3ha.rst:57 ../migration-classic-to-l3ha.rst:134 #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:1 #: ../shared/deploy-provider-initialnetworks.txt:4 #: ../shared/deploy-selfservice-initialnetworks.txt:11 msgid "Source the administrative project credentials." msgstr "" #: ../config-bgp-dynamic-routing.rst:111 msgid "Verify presence and operation of each BGP dynamic routing agent." msgstr "" #: ../config-bgp-dynamic-routing.rst:123 msgid "Create the address scope and subnet pools" msgstr "" #: ../config-bgp-dynamic-routing.rst:125 msgid "" "Create an address scope. The provider (external) and self-service networks " "must belong to the same address scope for the agent to advertise those self-" "service network prefixes." msgstr "" #: ../config-bgp-dynamic-routing.rst:143 msgid "" "Create subnet pools. The provider and self-service networks use different " "pools." msgstr "" #: ../config-bgp-dynamic-routing.rst:146 msgid "Create the provider network pool." msgstr "" #: ../config-bgp-dynamic-routing.rst:173 msgid "Create the self-service network pool." msgstr "" #: ../config-bgp-dynamic-routing.rst:203 msgid "Create the provider and self-service networks" msgstr "" #: ../config-bgp-dynamic-routing.rst:205 msgid "Create the provider network." msgstr "" #: ../config-bgp-dynamic-routing.rst:228 msgid "" "Create a subnet on the provider network using an IP address range from the " "provider subnet pool." msgstr "" #: ../config-bgp-dynamic-routing.rst:261 msgid "" "The IP address allocation pool starting at ``.11`` improves clarity of the " "diagrams. You can safely omit it." msgstr "" #: ../config-bgp-dynamic-routing.rst:264 msgid "Create the self-service networks." msgstr "" #: ../config-bgp-dynamic-routing.rst:310 msgid "" "Create a subnet on the first two self-service networks using an IP address " "range from the self-service subnet pool." msgstr "" #: ../config-bgp-dynamic-routing.rst:365 msgid "" "Create a subnet on the last self-service network using an IP address range " "outside of the address scope." msgstr "" #: ../config-bgp-dynamic-routing.rst:395 msgid "Create and configure the routers" msgstr "" #: ../config-bgp-dynamic-routing.rst:397 msgid "Create the routers." msgstr "" #: ../config-bgp-dynamic-routing.rst:437 msgid "" "For each router, add one self-service subnet as an interface on the router." msgstr "" #: ../config-bgp-dynamic-routing.rst:450 msgid "Add the provider network as a gateway on each router." msgstr "" #: ../config-bgp-dynamic-routing.rst:464 msgid "Create and configure the BGP speaker" msgstr "" #: ../config-bgp-dynamic-routing.rst:466 msgid "" "The BGP speaker advertises the next-hop IP address for eligible self-service " "networks and floating IP addresses for instances using those networks." msgstr "" #: ../config-bgp-dynamic-routing.rst:469 msgid "Create the BGP speaker." msgstr "" #: ../config-bgp-dynamic-routing.rst:490 msgid "" "Replace ``LOCAL_AS`` with an appropriate local autonomous system number. The " "example configuration uses AS 1234." msgstr "" #: ../config-bgp-dynamic-routing.rst:493 msgid "" "A BGP speaker requires association with a provider network to determine " "eligible prefixes. The association builds a list of all virtual routers with " "gateways on provider and self-service networks in the same address scope so " "the BGP speaker can advertise self-service network prefixes with the " "corresponding router as the next-hop IP address. Associate the BGP speaker " "with the provider network." msgstr "" #: ../config-bgp-dynamic-routing.rst:505 msgid "Verify association of the provider network with the BGP speaker." msgstr "" #: ../config-bgp-dynamic-routing.rst:524 msgid "" "Verify the prefixes and next-hop IP addresses that the BGP speaker " "advertises." msgstr "" #: ../config-bgp-dynamic-routing.rst:537 msgid "Create a BGP peer." msgstr "" #: ../config-bgp-dynamic-routing.rst:555 msgid "" "Replace ``REMOTE_AS`` with an appropriate remote autonomous system number. " "The example configuration uses AS 4321 which triggers EBGP peering." msgstr "" #: ../config-bgp-dynamic-routing.rst:560 msgid "" "The host containing the BGP agent must have layer-3 connectivity to the " "provider router." msgstr "" #: ../config-bgp-dynamic-routing.rst:563 msgid "Add a BGP peer to the BGP speaker." msgstr "" #: ../config-bgp-dynamic-routing.rst:570 msgid "Verify addition of the BGP peer to the BGP speaker." msgstr "" #: ../config-bgp-dynamic-routing.rst:591 msgid "" "After creating a peering session, you cannot change the local or remote " "autonomous system numbers." msgstr "" #: ../config-bgp-dynamic-routing.rst:595 msgid "Schedule the BGP speaker to an agent" msgstr "" #: ../config-bgp-dynamic-routing.rst:597 msgid "" "Unlike most agents, BGP speakers require manual scheduling to an agent. BGP " "speakers only form peering sessions and begin prefix advertisement after " "scheduling to an agent. Schedule the BGP speaker to agent " "``37729181-2224-48d8-89ef-16eca8e2f77e``." msgstr "" #: ../config-bgp-dynamic-routing.rst:607 msgid "Verify scheduling of the BGP speaker to the agent." msgstr "" #: ../config-bgp-dynamic-routing.rst:626 msgid "Prefix advertisement" msgstr "" #: ../config-bgp-dynamic-routing.rst:628 msgid "" "BGP dynamic routing advertises prefixes for self-service networks and host " "routes for floating IP addresses." msgstr "" #: ../config-bgp-dynamic-routing.rst:631 msgid "" "Advertisement of a self-service network requires satisfying the following " "conditions:" msgstr "" #: ../config-bgp-dynamic-routing.rst:634 msgid "The external and self-service network reside in the same address scope." msgstr "" #: ../config-bgp-dynamic-routing.rst:636 msgid "" "The router contains an interface on the self-service subnet and a gateway on " "the external network." msgstr "" #: ../config-bgp-dynamic-routing.rst:639 msgid "" "The BGP speaker associates with the external network that provides a gateway " "on the router." msgstr "" #: ../config-bgp-dynamic-routing.rst:642 msgid "" "The BGP speaker has the ``advertise_tenant_networks`` attribute set to " "``True``." msgstr "" #: ../config-bgp-dynamic-routing.rst:648 msgid "" "Advertisement of a floating IP address requires satisfying the following " "conditions:" msgstr "" #: ../config-bgp-dynamic-routing.rst:651 msgid "" "The router with the floating IP address binding contains a gateway on an " "external network with the BGP speaker association." msgstr "" #: ../config-bgp-dynamic-routing.rst:654 msgid "" "The BGP speaker has the ``advertise_floating_ip_host_routes`` attribute set " "to ``True``." msgstr "" #: ../config-bgp-dynamic-routing.rst:661 msgid "Operation with Distributed Virtual Routers (DVR)" msgstr "" #: ../config-bgp-dynamic-routing.rst:663 msgid "" "In deployments using DVR, the BGP speaker advertises floating IP addresses " "and self-service networks differently. For floating IP addresses, the BGP " "speaker advertises the floating IP agent gateway on the corresponding " "compute node as the next-hop IP address. For self-service networks using " "SNAT, the BGP speaker advertises the DVR SNAT node as the next-hop IP " "address." msgstr "" #: ../config-bgp-dynamic-routing.rst:670 msgid "For example, consider the following components:" msgstr "" #: ../config-bgp-dynamic-routing.rst:672 msgid "" "A provider network using IP address range 203.0.113.0/24, and supporting " "floating IP addresses 203.0.113.101, 203.0.113.102, and 203.0.113.103." msgstr "" #: ../config-bgp-dynamic-routing.rst:675 msgid "A self-service network using IP address range 10.0.1.0/24." msgstr "" #: ../config-bgp-dynamic-routing.rst:677 msgid "The SNAT gateway resides on 203.0.113.11." msgstr "" #: ../config-bgp-dynamic-routing.rst:679 msgid "" "The floating IP agent gateways (one per compute node) reside on " "203.0.113.12, 203.0.113.13, and 203.0.113.14." msgstr "" #: ../config-bgp-dynamic-routing.rst:682 msgid "Three instances, one per compute node, each with a floating IP address." msgstr "" #: ../config-bgp-dynamic-routing.rst:699 msgid "" "DVR lacks support for routing directly to a fixed IP address via the " "floating IP agent gateway port and thus prevents the BGP speaker from " "advertising fixed IP addresses." msgstr "" #: ../config-bgp-dynamic-routing.rst:703 msgid "" "You can also identify floating IP agent gateways in your environment to " "assist with verifying operation of the BGP speaker." msgstr "" # #-#-#-#-# config-bgp-dynamic-routing.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-ipv6.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-bgp-dynamic-routing.rst:718 ../config-ipv6.rst:5 msgid "IPv6" msgstr "" #: ../config-bgp-dynamic-routing.rst:720 msgid "" "BGP dynamic routing supports peering via IPv6 and advertising IPv6 prefixes." msgstr "" #: ../config-bgp-dynamic-routing.rst:722 msgid "" "To enable peering via IPv6, create a BGP peer and use an IPv6 address for " "``peer_ip``." msgstr "" #: ../config-bgp-dynamic-routing.rst:725 msgid "" "To enable advertising IPv6 prefixes, create an address scope with " "``ip_version=6`` and a BGP speaker with ``ip_version=6``." msgstr "" #: ../config-bgp-dynamic-routing.rst:730 msgid "DVR with IPv6 functions similarly to DVR with IPv4." msgstr "" #: ../config-bgp-dynamic-routing.rst:733 msgid "High availability" msgstr "" #: ../config-bgp-dynamic-routing.rst:735 msgid "" "BGP dynamic routing supports scheduling a BGP speaker to multiple agents " "which effectively multiplies prefix advertisements to the same peer. If an " "agent fails, the peer continues to receive advertisements from one or more " "operational agents." msgstr "" #: ../config-bgp-dynamic-routing.rst:740 msgid "Show available dynamic routing agents." msgstr "" #: ../config-bgp-dynamic-routing.rst:752 msgid "Schedule BGP speaker to multiple agents." msgstr "" #: ../config-dhcp-ha.rst:5 msgid "High-availability for DHCP" msgstr "" #: ../config-dhcp-ha.rst:7 msgid "" "This section describes how to use the agent management (alias agent) and " "scheduler (alias agent_scheduler) extensions for DHCP agents scalability and " "HA." msgstr "" #: ../config-dhcp-ha.rst:13 msgid "" "Use the :command:`neutron ext-list` client command to check if these " "extensions are enabled. Check ``agent`` and ``agent_scheduler`` are included " "in the output." msgstr "" #: ../config-dhcp-ha.rst:34 msgid "Demo setup" msgstr "" #: ../config-dhcp-ha.rst:38 msgid "There will be three hosts in the setup." msgstr "" #: ../config-dhcp-ha.rst:44 msgid "Host" msgstr "" #: ../config-dhcp-ha.rst:46 msgid "OpenStack controller host - controlnode" msgstr "" #: ../config-dhcp-ha.rst:47 msgid "" "Runs the Networking, Identity, and Compute services that are required to " "deploy VMs. The node must have at least one network interface that is " "connected to the Management Network. Note that ``nova-network`` should not " "be running because it is replaced by Neutron." msgstr "" #: ../config-dhcp-ha.rst:51 msgid "HostA" msgstr "" #: ../config-dhcp-ha.rst:52 msgid "Runs ``nova-compute``, the Neutron L2 agent and DHCP agent" msgstr "" #: ../config-dhcp-ha.rst:53 msgid "HostB" msgstr "" #: ../config-dhcp-ha.rst:54 msgid "Same as HostA" msgstr "" # #-#-#-#-# config-dhcp-ha.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-ml2.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-qos.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-dhcp-ha.rst:57 ../config-ml2.rst:90 ../config-qos.rst:58 #: ../config.rst:5 msgid "Configuration" msgstr "" #: ../config-dhcp-ha.rst:59 msgid "**controlnode: neutron server**" msgstr "" #: ../config-dhcp-ha.rst:61 ../config-dhcp-ha.rst:97 msgid "Neutron configuration file ``/etc/neutron/neutron.conf``:" msgstr "" #: ../config-dhcp-ha.rst:75 msgid "" "In the above configuration, we use ``dhcp_agents_per_network = 1`` for this " "demonstration. In usual deployments, we suggest setting " "``dhcp_agents_per_network`` to more than one to match the number of DHCP " "agents in your deployment. See :ref:`conf-dhcp-agents-per-network`." msgstr "" #: ../config-dhcp-ha.rst:81 ../config-dhcp-ha.rst:107 msgid "" "Update the plug-in configuration file ``/etc/neutron/plugins/linuxbridge/" "linuxbridge_conf.ini``:" msgstr "" #: ../config-dhcp-ha.rst:95 msgid "**HostA and HostB: L2 agent**" msgstr "" #: ../config-dhcp-ha.rst:121 msgid "Update the nova configuration file ``/etc/nova/nova.conf``:" msgstr "" #: ../config-dhcp-ha.rst:137 msgid "**HostA and HostB: DHCP agent**" msgstr "" #: ../config-dhcp-ha.rst:139 msgid "Update the DHCP configuration file ``/etc/neutron/dhcp_agent.ini``:" msgstr "" #: ../config-dhcp-ha.rst:147 msgid "Prerequisites for demonstration" msgstr "" #: ../config-dhcp-ha.rst:149 msgid "" "Admin role is required to use the agent management and scheduler extensions. " "Ensure you run the following commands under a project with an admin role." msgstr "" #: ../config-dhcp-ha.rst:152 msgid "To experiment, you need VMs and a neutron network:" msgstr "" #: ../config-dhcp-ha.rst:173 msgid "Managing agents in neutron deployment" msgstr "" #: ../config-dhcp-ha.rst:175 msgid "List all agents:" msgstr "" #: ../config-dhcp-ha.rst:189 msgid "" "Every agent that supports these extensions will register itself with the " "neutron server when it starts up." msgstr "" #: ../config-dhcp-ha.rst:192 msgid "" "The output shows information for four agents. The ``alive`` field shows " "``:-)`` if the agent reported its state within the period defined by the " "``agent_down_time`` option in the ``neutron.conf`` file. Otherwise the " "``alive`` is ``xxx``." msgstr "" #: ../config-dhcp-ha.rst:197 msgid "List DHCP agents that host a specified network:" msgstr "" #: ../config-dhcp-ha.rst:208 msgid "List the networks hosted by a given DHCP agent:" msgstr "" #: ../config-dhcp-ha.rst:210 msgid "This command is to show which networks a given dhcp agent is managing." msgstr "" #: ../config-dhcp-ha.rst:221 msgid "Show agent details." msgstr "" #: ../config-dhcp-ha.rst:223 msgid "The :command:`agent-show` command shows details for a specified agent:" msgstr "" #: ../config-dhcp-ha.rst:251 msgid "" "In this output, ``heartbeat_timestamp`` is the time on the neutron server. " "You do not need to synchronize all agents to this time for this extension to " "run correctly. ``configurations`` describes the static configuration for the " "agent or run time data. This agent is a DHCP agent and it hosts one network, " "one subnet, and three ports." msgstr "" #: ../config-dhcp-ha.rst:257 msgid "" "Different types of agents show different details. The following output shows " "information for a Linux bridge agent:" msgstr "" #: ../config-dhcp-ha.rst:284 msgid "" "The output shows ``bridge-mapping`` and the number of virtual network " "devices on this L2 agent." msgstr "" #: ../config-dhcp-ha.rst:288 msgid "Managing assignment of networks to DHCP agent" msgstr "" #: ../config-dhcp-ha.rst:290 msgid "" "A single network can be assigned to more than one DHCP agents and one DHCP " "agent can host more than one network. You can add a network to a DHCP agent " "and remove one from it." msgstr "" #: ../config-dhcp-ha.rst:294 msgid "Default scheduling." msgstr "" #: ../config-dhcp-ha.rst:296 msgid "" "When you create a network with one port, the network will be scheduled to an " "active DHCP agent. If many active DHCP agents are running, select one " "randomly. You can design more sophisticated scheduling algorithms in the " "same way as nova-schedule later on." msgstr "" #: ../config-dhcp-ha.rst:313 msgid "" "It is allocated to DHCP agent on HostA. If you want to validate the behavior " "through the :command:`dnsmasq` command, you must create a subnet for the " "network because the DHCP agent starts the dnsmasq service only if there is a " "DHCP." msgstr "" #: ../config-dhcp-ha.rst:318 msgid "Assign a network to a given DHCP agent." msgstr "" #: ../config-dhcp-ha.rst:320 msgid "To add another DHCP agent to host the network, run this command:" msgstr "" #: ../config-dhcp-ha.rst:334 msgid "Both DHCP agents host the ``net2`` network." msgstr "" #: ../config-dhcp-ha.rst:336 msgid "Remove a network from a specified DHCP agent." msgstr "" #: ../config-dhcp-ha.rst:338 msgid "" "This command is the sibling command for the previous one. Remove ``net2`` " "from the DHCP agent for HostA:" msgstr "" #: ../config-dhcp-ha.rst:353 msgid "" "You can see that only the DHCP agent for HostB is hosting the ``net2`` " "network." msgstr "" #: ../config-dhcp-ha.rst:357 msgid "HA of DHCP agents" msgstr "" #: ../config-dhcp-ha.rst:359 msgid "" "Boot a VM on ``net2``. Let both DHCP agents host ``net2``. Fail the agents " "in turn to see if the VM can still get the desired IP." msgstr "" #: ../config-dhcp-ha.rst:362 msgid "Boot a VM on ``net2``:" msgstr "" #: ../config-dhcp-ha.rst:386 msgid "Make sure both DHCP agents hosting ``net2``:" msgstr "" #: ../config-dhcp-ha.rst:388 msgid "Use the previous commands to assign the network to agents." msgstr "" #: ../config-dhcp-ha.rst:400 msgid "To test the HA of DHCP agent:" msgstr "" #: ../config-dhcp-ha.rst:402 msgid "" "Log in to the ``myserver4`` VM, and run ``udhcpc``, ``dhclient`` or other " "DHCP client." msgstr "" #: ../config-dhcp-ha.rst:405 msgid "" "Stop the DHCP agent on HostA. Besides stopping the ``neutron-dhcp-agent`` " "binary, you must stop the ``dnsmasq`` processes." msgstr "" #: ../config-dhcp-ha.rst:408 msgid "Run a DHCP client in VM to see if it can get the wanted IP." msgstr "" #: ../config-dhcp-ha.rst:410 msgid "Stop the DHCP agent on HostB too." msgstr "" #: ../config-dhcp-ha.rst:412 msgid "Run ``udhcpc`` in the VM; it cannot get the wanted IP." msgstr "" #: ../config-dhcp-ha.rst:414 msgid "Start DHCP agent on HostB. The VM gets the wanted IP again." msgstr "" #: ../config-dhcp-ha.rst:417 msgid "Disabling and removing an agent" msgstr "" #: ../config-dhcp-ha.rst:419 msgid "" "An administrator might want to disable an agent if a system hardware or " "software upgrade is planned. Some agents that support scheduling also " "support disabling and enabling agents, such as L3 and DHCP agents. After the " "agent is disabled, the scheduler does not schedule new resources to the " "agent." msgstr "" #: ../config-dhcp-ha.rst:425 msgid "" "After the agent is disabled, you can safely remove the agent. Even after " "disabling the agent, resources on the agent are kept assigned. Ensure you " "remove the resources on the agent before you delete the agent." msgstr "" #: ../config-dhcp-ha.rst:429 msgid "Disable the DHCP agent on HostA before you stop it:" msgstr "" #: ../config-dhcp-ha.rst:444 msgid "" "After you stop the DHCP agent on HostA, you can delete it by the following " "command:" msgstr "" #: ../config-dhcp-ha.rst:460 msgid "" "After deletion, if you restart the DHCP agent, it appears on the agent list " "again." msgstr "" #: ../config-dhcp-ha.rst:466 msgid "Enabling DHCP high availability by default" msgstr "" #: ../config-dhcp-ha.rst:468 msgid "" "You can control the default number of DHCP agents assigned to a network by " "setting the following configuration option in the file ``/etc/neutron/" "neutron.conf``." msgstr "" #: ../config-dns-int.rst:5 msgid "DNS integration" msgstr "" #: ../config-dns-int.rst:7 msgid "" "This page serves as a guide for how to use the DNS integration functionality " "of the Networking service. The functionality described covers DNS from two " "points of view:" msgstr "" #: ../config-dns-int.rst:11 msgid "" "The internal DNS functionality offered by the Networking service and its " "interaction with the Compute service." msgstr "" #: ../config-dns-int.rst:13 msgid "" "Integration of the Compute service and the Networking service with an " "external DNSaaS (DNS-as-a-Service)." msgstr "" #: ../config-dns-int.rst:16 msgid "" "Users can control the behavior of the Networking service in regards to DNS " "using two attributes associated with ports, networks, and floating IPs. The " "following table shows the attributes available for each one of these " "resources:" msgstr "" #: ../config-dns-int.rst:25 msgid "dns_name" msgstr "" #: ../config-dns-int.rst:26 msgid "dns_domain" msgstr "" # #-#-#-#-# config-dns-int.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# intro-os-networking.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# ops-resource-purge.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-dns-int.rst:27 ../intro-os-networking.rst:179 #: ../ops-resource-purge.rst:12 msgid "Ports" msgstr "" #: ../config-dns-int.rst:28 ../config-dns-int.rst:32 ../config-dns-int.rst:34 #: ../config-dns-int.rst:35 msgid "Yes" msgstr "" # #-#-#-#-# config-dns-int.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# ops-resource-purge.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-dns-int.rst:30 ../ops-resource-purge.rst:10 msgid "Networks" msgstr "" #: ../config-dns-int.rst:33 msgid "Floating IPs" msgstr "" #: ../config-dns-int.rst:40 msgid "The Networking service internal DNS resolution" msgstr "" #: ../config-dns-int.rst:42 msgid "" "The Networking service enables users to control the name assigned to ports " "by the internal DNS. To enable this functionality, do the following:" msgstr "" #: ../config-dns-int.rst:45 msgid "" "Edit the ``/etc/neutron/neutron.conf`` file and assign a value different to " "``openstacklocal`` (its default value) to the ``dns_domain`` parameter in " "the ``[default]`` section. As an example:" msgstr "" #: ../config-dns-int.rst:53 msgid "" "Add ``dns`` to ``extension_drivers`` in the ``[ml2]`` section of ``/etc/" "neutron/plugins/ml2/ml2_conf.ini``. The following is an example:" msgstr "" #: ../config-dns-int.rst:61 msgid "" "After re-starting the ``neutron-server``, users will be able to assign a " "``dns_name`` attribute to their ports." msgstr "" #: ../config-dns-int.rst:65 msgid "" "The enablement of this functionality is prerequisite for the enablement of " "the Networking service integration with an external DNS service, which is " "described in detail in :ref:`config-dns-int-ext-serv`." msgstr "" #: ../config-dns-int.rst:69 msgid "" "The following illustrates the creation of a port with ``my-port`` in its " "``dns_name`` attribute." msgstr "" #: ../config-dns-int.rst:73 msgid "" "The name assigned to the port by the Networking service internal DNS is now " "visible in the response in the ``dns_assignment`` attribute." msgstr "" #: ../config-dns-int.rst:101 msgid "" "When this functionality is enabled, it is leveraged by the Compute service " "when creating instances. When allocating ports for an instance during boot, " "the Compute service populates the ``dns_name`` attributes of these ports " "with the ``hostname`` attribute of the instance, which is a DNS sanitized " "version of its display name. As a consequence, at the end of the boot " "process, the allocated ports will be known in the dnsmasq associated to " "their networks by their instance ``hostname``." msgstr "" #: ../config-dns-int.rst:109 msgid "" "The following is an example of an instance creation, showing how its " "``hostname`` populates the ``dns_name`` attribute of the allocated port:" msgstr "" #: ../config-dns-int.rst:181 msgid "In the above example notice that:" msgstr "" #: ../config-dns-int.rst:183 msgid "" "The name given to the instance by the user, ``my_vm``, is sanitized by the " "Compute service and becomes ``my-vm`` as the port's ``dns_name``." msgstr "" #: ../config-dns-int.rst:185 msgid "" "The port's ``dns_assignment`` attribute shows that its FQDN is ``my-vm." "example.org.`` in the Networking service internal DNS, which is the result " "of concatenating the port's ``dns_name`` with the value configured in the " "``dns_domain`` parameter in ``neutron.conf``, as explained previously." msgstr "" #: ../config-dns-int.rst:189 msgid "" "The ``dns_assignment`` attribute also shows that the port's ``hostname`` in " "the Networking service internal DNS is ``my-vm``." msgstr "" #: ../config-dns-int.rst:191 msgid "" "Instead of having the Compute service create the port for the instance, the " "user might have created it and assigned a value to its ``dns_name`` " "attribute. In this case, the value assigned to the ``dns_name`` attribute " "must be equal to the value that Compute service will assign to the " "instance's ``hostname``, in this example ``my-vm``. Otherwise, the instance " "boot will fail." msgstr "" #: ../config-dns-int.rst:199 msgid "Integration with an external DNS service" msgstr "" #: ../config-dns-int.rst:201 msgid "" "Users can also integrate the Networking and Compute services with an " "external DNS. To accomplish this, the users have to:" msgstr "" #: ../config-dns-int.rst:204 msgid "" "Enable the functionality described in :ref:`config-dns-int-dns-resolution`." msgstr "" #: ../config-dns-int.rst:206 msgid "" "Configure an external DNS driver. The Networking service provides a driver " "reference implementation based on the OpenStack DNS service. It is expected " "that third party vendors will provide other implementations in the future. " "For detailed configuration instructions, see :ref:`config-dns-int-ext-serv`." msgstr "" #: ../config-dns-int.rst:212 msgid "" "Once the ``neutron-server`` has been configured and restarted, users will " "have functionality that covers three use cases, described in the following " "sections. In each of the use cases described below:" msgstr "" #: ../config-dns-int.rst:216 msgid "The examples assume the OpenStack DNS service as the external DNS." msgstr "" #: ../config-dns-int.rst:217 msgid "A, AAAA and PTR records will be created in the DNS service." msgstr "" #: ../config-dns-int.rst:218 msgid "" "Before executing any of the use cases, the user must create in the DNS " "service under his project a DNS zone where the A and AAAA records will be " "created. For the description of the use cases below, it is assumed the zone " "``example.org.`` was created previously." msgstr "" #: ../config-dns-int.rst:222 msgid "" "The PTR records will be created in zones owned by a project with admin " "privileges. See :ref:`config-dns-int-ext-serv` for more details." msgstr "" #: ../config-dns-int.rst:228 msgid "Use case 1: Ports are published directly in the external DNS service" msgstr "" #: ../config-dns-int.rst:230 msgid "" "In this case, the user is creating ports or booting instances on a network " "that is accessible externally. The steps to publish the port in the external " "DNS service are the following:" msgstr "" #: ../config-dns-int.rst:234 ../config-dns-int.rst:414 msgid "" "Assign a valid domain name to the network's ``dns_domain`` attribute. This " "name must end with a period (``.``)." msgstr "" #: ../config-dns-int.rst:236 msgid "" "Boot an instance specifying the externally accessible network. " "Alternatively, create a port on the externally accessible network specifying " "a valid value to its ``dns_name`` attribute. If the port is going to be used " "for an instance boot, the value assigned to ``dns_name`` must be equal to " "the ``hostname`` that the Compute service will assign to the instance. " "Otherwise, the boot will fail." msgstr "" #: ../config-dns-int.rst:243 msgid "" "Once these steps are executed, the port's DNS data will be published in the " "external DNS service. This is an example:" msgstr "" #: ../config-dns-int.rst:371 msgid "" "In this example the port is created manually by the user and then used to " "boot an instance. Notice that:" msgstr "" #: ../config-dns-int.rst:374 msgid "" "The port's data was visible in the DNS service as soon as it was created." msgstr "" #: ../config-dns-int.rst:375 msgid "" "See :ref:`config-dns-performance-considerations` for an explanation of the " "potential performance impact associated with this use case." msgstr "" #: ../config-dns-int.rst:378 msgid "" "Following are the PTR records created for this example. Note that for IPv4, " "the value of ipv4_ptr_zone_prefix_size is 24. In the case of IPv6, the value " "of ipv6_ptr_zone_prefix_size is 116. For more details, see :ref:`config-dns-" "int-ext-serv`:" msgstr "" #: ../config-dns-int.rst:403 msgid "" "See :ref:`config-dns-int-ext-serv` for detailed instructions on how to " "create the externally accessible network." msgstr "" #: ../config-dns-int.rst:407 msgid "" "Use case 2: Floating IPs are published with associated port DNS attributes" msgstr "" #: ../config-dns-int.rst:409 msgid "" "In this use case, the address of a floating IP is published in the external " "DNS service in conjunction with the ``dns_name`` of its associated port and " "the ``dns_domain`` of the port's network. The steps to execute in this use " "case are the following:" msgstr "" #: ../config-dns-int.rst:416 msgid "" "Boot an instance or alternatively, create a port specifying a valid value to " "its ``dns_name`` attribute. If the port is going to be used for an instance " "boot, the value assigned to ``dns_name`` must be equal to the ``hostname`` " "that the Compute service will assign to the instance. Otherwise, the boot " "will fail." msgstr "" #: ../config-dns-int.rst:421 msgid "Create a floating IP and associate it to the port." msgstr "" #: ../config-dns-int.rst:423 msgid "Following is an example of these steps:" msgstr "" #: ../config-dns-int.rst:559 msgid "" "In this example, notice that the data is published in the DNS service when " "the floating IP is associated to the port." msgstr "" #: ../config-dns-int.rst:562 ../config-dns-int.rst:730 msgid "" "Following are the PTR records created for this example. Note that for IPv4, " "the value of ipv4_ptr_zone_prefix_size is 24. For more details, see :ref:" "`config-dns-int-ext-serv`:" msgstr "" #: ../config-dns-int.rst:579 msgid "Use case 3: Floating IPs are published in the external DNS service" msgstr "" #: ../config-dns-int.rst:581 msgid "" "In this use case, the user assigns ``dns_name`` and ``dns_domain`` " "attributes to a floating IP when it is created. The floating IP data becomes " "visible in the external DNS service as soon as it is created. The floating " "IP can be associated with a port on creation or later on. The following " "example shows a user booting an instance and then creating a floating IP " "associated to the port allocated for the instance:" msgstr "" #: ../config-dns-int.rst:719 msgid "Note that in this use case:" msgstr "" #: ../config-dns-int.rst:721 msgid "" "The ``dns_name`` and ``dns_domain`` attributes of a floating IP must be " "specified together on creation. They cannot be assigned to the floating IP " "separately." msgstr "" #: ../config-dns-int.rst:724 msgid "" "The ``dns_name`` and ``dns_domain`` of a floating IP have precedence, for " "purposes of being published in the external DNS service, over the " "``dns_name`` of its associated port and the ``dns_domain`` of the port's " "network, whether they are specified or not. Only the ``dns_name`` and the " "``dns_domain`` of the floating IP are published in the external DNS service." msgstr "" #: ../config-dns-int.rst:748 msgid "Performance considerations" msgstr "" #: ../config-dns-int.rst:750 msgid "" "Only for :ref:`config-dns-use-case-1`, if the port binding extension is " "enabled in the Networking service, the Compute service will execute one " "additional port update operation when allocating the port for the instance " "during the boot process. This may have a noticeable adverse effect in the " "performance of the boot process that must be evaluated before adoption of " "this use case." msgstr "" #: ../config-dns-int.rst:760 msgid "" "Configuring OpenStack Networking for integration with an external DNS service" msgstr "" #: ../config-dns-int.rst:762 msgid "" "The first step to configure the integration with an external DNS service is " "to enable the functionality described in :ref:`config-dns-int-dns-" "resolution`. Once this is done, the user has to take the following steps and " "restart ``neutron-server``." msgstr "" #: ../config-dns-int.rst:767 msgid "" "Edit the ``[default]`` section of ``/etc/neutron/neutron.conf`` and specify " "the external DNS service driver to be used in parameter " "``external_dns_driver``. The valid options are defined in namespace " "``neutron.services.external_dns_drivers``. The following example shows how " "to set up the driver for the OpenStack DNS service:" msgstr "" #: ../config-dns-int.rst:777 msgid "" "If the OpenStack DNS service is the target external DNS, the ``[designate]`` " "section of ``/etc/neutron/neutron.conf`` must define the following " "parameters:" msgstr "" #: ../config-dns-int.rst:781 msgid "``url``: the OpenStack DNS service public endpoint URL." msgstr "" #: ../config-dns-int.rst:782 msgid "" "``allow_reverse_dns_lookup``: a boolean value specifying whether to enable " "or not the creation of reverse lookup (PTR) records." msgstr "" #: ../config-dns-int.rst:784 msgid "" "``admin_auth_url``: the Identity service admin authorization endpoint url. " "This endpoint will be used by the Networking service to authenticate as an " "admin user to create and update reverse lookup (PTR) zones." msgstr "" #: ../config-dns-int.rst:787 msgid "" "``admin_username``: the admin user to be used by the Networking service to " "create and update reverse lookup (PTR) zones." msgstr "" #: ../config-dns-int.rst:789 msgid "" "``admin_password``: the password of the admin user to be used by Networking " "service to create and update reverse lookup (PTR) zones." msgstr "" #: ../config-dns-int.rst:791 msgid "" "``admin_tenant_name``: the project of the admin user to be used by the " "Networking service to create and update reverse lookup (PTR) zones." msgstr "" #: ../config-dns-int.rst:793 msgid "" "``ipv4_ptr_zone_prefix_size``: the size in bits of the prefix for the IPv4 " "reverse lookup (PTR) zones." msgstr "" #: ../config-dns-int.rst:795 msgid "" "``ipv6_ptr_zone_prefix_size``: the size in bits of the prefix for the IPv6 " "reverse lookup (PTR) zones." msgstr "" #: ../config-dns-int.rst:798 msgid "The following is an example:" msgstr "" #: ../config-dns-int.rst:813 msgid "Configuration of the externally accessible network for use case 1" msgstr "" #: ../config-dns-int.rst:815 msgid "" "In :ref:`config-dns-use-case-1`, the externally accessible network must meet " "the following requirements:" msgstr "" #: ../config-dns-int.rst:818 msgid "The network cannot have attribute ``router:external`` set to ``True``." msgstr "" #: ../config-dns-int.rst:819 msgid "The network type can be FLAT, VLAN, GRE, VXLAN or GENEVE." msgstr "" #: ../config-dns-int.rst:820 msgid "" "For network types VLAN, GRE, VXLAN or GENEVE, the segmentation ID must be " "outside the ranges assigned to tenant networks." msgstr "" #: ../config-dns-res.rst:5 msgid "Name resolution for instances" msgstr "" #: ../config-dns-res.rst:7 msgid "" "The Networking service offers several methods to configure name resolution " "(DNS) for instances. Most deployments should implement case 1 or 2. Case 3 " "requires security considerations to prevent leaking internal DNS information " "to instances." msgstr "" #: ../config-dns-res.rst:13 msgid "Case 1: Each virtual network uses unique DNS resolver(s)" msgstr "" #: ../config-dns-res.rst:15 msgid "" "In this case, the DHCP agent offers one or more unique DNS resolvers to " "instances via DHCP on each virtual network. You can configure a DNS resolver " "when creating or updating a subnet. To configure more than one DNS resolver, " "use a comma between each value." msgstr "" #: ../config-dns-res.rst:20 msgid "Configure a DNS resolver when creating a subnet." msgstr "" #: ../config-dns-res.rst:26 msgid "" "Replace ``DNS_RESOLVER`` with the IP address of a DNS resolver reachable " "from the virtual network. For example:" msgstr "" #: ../config-dns-res.rst:35 msgid "This command requires other options outside the scope of this content." msgstr "" #: ../config-dns-res.rst:38 msgid "Configure a DNS resolver on an existing subnet." msgstr "" #: ../config-dns-res.rst:44 msgid "" "Replace ``DNS_RESOLVER`` with the IP address of a DNS resolver reachable " "from the virtual network and ``SUBNET_ID_OR_NAME`` with the UUID or name of " "the subnet. For example, using the ``selfservice`` subnet:" msgstr "" #: ../config-dns-res.rst:53 msgid "Case 2: All virtual networks use same DNS resolver(s)" msgstr "" #: ../config-dns-res.rst:55 msgid "" "In this case, the DHCP agent offers the same DNS resolver(s) to instances " "via DHCP on all virtual networks." msgstr "" #: ../config-dns-res.rst:58 msgid "" "In the ``dhcp_agent.ini`` file, configure one or more DNS resolvers. To " "configure more than one DNS resolver, use a comma between each value." msgstr "" #: ../config-dns-res.rst:66 msgid "" "Replace ``DNS_RESOLVER`` with the IP address of a DNS resolver reachable " "from all virtual networks. For example:" msgstr "" #: ../config-dns-res.rst:76 ../config-dns-res.rst:96 msgid "" "You must configure this option for all eligible DHCP agents and restart them " "to activate the values." msgstr "" #: ../config-dns-res.rst:80 msgid "Case 3: All virtual networks use DNS resolver(s) on the host" msgstr "" #: ../config-dns-res.rst:82 msgid "" "In this case, the DHCP agent offers the DNS resolver(s) in the ``resolv." "conf`` file on the host running the DHCP agent via DHCP to instances on all " "virtual networks." msgstr "" #: ../config-dns-res.rst:86 msgid "" "In the ``dhcp_agent.ini`` file, enable advertisement of the DNS resolver(s) " "on the host." msgstr "" #: ../config-dvr-ha-snat.rst:5 msgid "Distributed Virtual Routing with VRRP" msgstr "" #: ../config-dvr-ha-snat.rst:7 msgid "" ":ref:`deploy-ovs-ha-dvr` supports augmentation using Virtual Router " "Redundancy Protocol (VRRP). Using this configuration, virtual routers " "support both the ``--distributed`` and ``--ha`` options." msgstr "" #: ../config-dvr-ha-snat.rst:11 msgid "" "Similar to legacy HA routers, DVR/SNAT HA routers provide a quick fail over " "of the SNAT service to a backup DVR/SNAT router on an l3-agent running on a " "different node." msgstr "" #: ../config-dvr-ha-snat.rst:15 msgid "" "SNAT high availability is implemented in a manner similar to the :ref:" "`deploy-lb-ha-vrrp` and :ref:`deploy-ovs-ha-vrrp` examples where " "``keepalived`` uses VRRP to provide quick failover of SNAT services." msgstr "" #: ../config-dvr-ha-snat.rst:19 msgid "" "During normal operation, the master router periodically transmits " "*heartbeat* packets over a hidden project network that connects all HA " "routers for a particular project." msgstr "" #: ../config-dvr-ha-snat.rst:23 msgid "" "If the DVR/SNAT backup router stops receiving these packets, it assumes " "failure of the master DVR/SNAT router and promotes itself to master router " "by configuring IP addresses on the interfaces in the ``snat`` namespace. In " "environments with more than one backup router, the rules of VRRP are " "followed to select a new master router." msgstr "" # #-#-#-#-# config-dvr-ha-snat.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-ipam.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-ovsfwdriver.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-dvr-ha-snat.rst:31 ../config-ipam.rst:9 #: ../config-ovsfwdriver.rst:9 msgid "Experimental feature or incomplete documentation." msgstr "" #: ../config-dvr-ha-snat.rst:35 msgid "Configuration example" msgstr "" #: ../config-dvr-ha-snat.rst:37 msgid "" "The basic deployment model consists of one controller node, two or more " "network nodes, and multiple computes nodes." msgstr "" #: ../config-dvr-ha-snat.rst:41 msgid "Controller node configuration" msgstr "" #: ../config-dvr-ha-snat.rst:43 msgid "Add the following to ``/etc/neutron/neutron.conf``:" msgstr "" #: ../config-dvr-ha-snat.rst:57 msgid "" "When the ``router_distributed = True`` flag is configured, routers created " "by all users are distributed. Without it, only privileged users can create " "distributed routers by using :option:`--distributed True`." msgstr "" #: ../config-dvr-ha-snat.rst:61 msgid "" "Similarly, when the ``l3_ha = True`` flag is configured, routers created by " "all users default to HA." msgstr "" #: ../config-dvr-ha-snat.rst:64 msgid "" "It follows that with these two flags set to ``True`` in the configuration " "file, routers created by all users will default to distributed HA routers " "(DVR HA)." msgstr "" #: ../config-dvr-ha-snat.rst:68 msgid "" "The same can explicitly be accomplished by a user with administrative " "credentials setting the flags in the :command:`router-create` command:" msgstr "" #: ../config-dvr-ha-snat.rst:78 msgid "" "The *max_l3_agents_per_router* and *min_l3_agents_per_router* determine the " "number of backup DVR/SNAT routers which will be instantiated." msgstr "" #: ../config-dvr-ha-snat.rst:81 msgid "Add the following to ``/etc/neutron/plugins/ml2/ml2_conf.ini``:" msgstr "" #: ../config-dvr-ha-snat.rst:97 msgid "" "Replace ``MIN_VXLAN_ID`` and ``MAX_VXLAN_ID`` with VXLAN ID minimum and " "maximum values suitable for your environment." msgstr "" #: ../config-dvr-ha-snat.rst:102 msgid "" "The first value in the ``tenant_network_types`` option becomes the default " "project network type when a regular user creates a network." msgstr "" # #-#-#-#-# config-dvr-ha-snat.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-dvr-ha-snat.rst:106 ../config-macvtap.rst:110 msgid "Network nodes" msgstr "" #: ../config-dvr-ha-snat.rst:108 ../config-dvr-ha-snat.rst:146 msgid "" "Configure the Open vSwitch agent. Add the following to ``/etc/neutron/" "plugins/ml2/ml2_conf.ini``:" msgstr "" #: ../config-dvr-ha-snat.rst:125 ../config-dvr-ha-snat.rst:172 msgid "" "Replace ``TUNNEL_INTERFACE_IP_ADDRESS`` with the IP address of the interface " "that handles VXLAN project networks." msgstr "" #: ../config-dvr-ha-snat.rst:128 ../config-dvr-ha-snat.rst:163 msgid "" "Configure the L3 agent. Add the following to ``/etc/neutron/l3_agent.ini``:" msgstr "" # #-#-#-#-# config-dvr-ha-snat.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-dvr-ha-snat.rst:140 ../deploy-lb-ha-vrrp.rst:115 #: ../deploy-lb-selfservice.rst:143 ../deploy-ovs-ha-dvr.rst:102 #: ../deploy-ovs-ha-dvr.rst:137 ../deploy-ovs-ha-vrrp.rst:116 #: ../deploy-ovs-selfservice.rst:146 msgid "The ``external_network_bridge`` option intentionally contains no value." msgstr "" # #-#-#-#-# config-dvr-ha-snat.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-dvr-ha-snat.rst:144 ../config-macvtap.rst:115 #: ../deploy-lb-ha-vrrp.rst:124 ../deploy-lb-provider.rst:155 #: ../deploy-lb-selfservice.rst:152 ../deploy-ovs-ha-dvr.rst:111 #: ../deploy-ovs-ha-vrrp.rst:125 ../deploy-ovs-provider.rst:166 #: ../deploy-ovs-selfservice.rst:155 msgid "Compute nodes" msgstr "" # #-#-#-#-# config-dvr-ha-snat.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-ipam.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-sriov.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-dvr-ha-snat.rst:176 ../config-ipam.rst:44 ../config-sriov.rst:63 msgid "Known limitations" msgstr "" #: ../config-dvr-ha-snat.rst:178 msgid "" "Migrating a router from distributed only, HA only, or legacy to distributed " "HA is not supported at this time. The router must be created as distributed " "HA. The reverse direction is also not supported. You cannot reconfigure a " "distributed HA router to be only distributed, only HA, or legacy." msgstr "" #: ../config-dvr-ha-snat.rst:184 msgid "" "There are certain scenarios where l2pop and distributed HA routers do not " "interact in an expected manner. These situations are the same that affect HA " "only routers and l2pop." msgstr "" #: ../config-ipam.rst:5 msgid "IPAM configuration" msgstr "" #: ../config-ipam.rst:11 msgid "" "Starting with the Liberty release, OpenStack Networking includes a pluggable " "interface for the IP Address Management (IPAM) function. This interface " "creates a driver framework for the allocation and de-allocation of subnets " "and IP addresses, enabling the integration of alternate IPAM implementations " "or third-party IP Address Management systems." msgstr "" # #-#-#-#-# config-ipam.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-sriov.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-ipam.rst:18 ../config-sriov.rst:13 msgid "The basics" msgstr "" #: ../config-ipam.rst:20 msgid "" "The IPAM implementation within OpenStack Networking provides two basic " "flavors (pluggable IPAM, non-pluggable IPAM). By default, the non-pluggable " "IPAM is enabled. This provides backward compatibility with older releases. " "In contrast, the pluggable implementation will require a database migration " "to support upgraded systems. This migration is planned for the Mitaka " "release." msgstr "" #: ../config-ipam.rst:26 msgid "" "The reference driver for the pluggable implementation is considered " "experimental at this time. It does not provide additional functionality " "beyond the non-pluggable implementation, but does provide a basis for custom " "or third-party developed drivers. This can enable, for example, development " "of drivers that use different algorithms to allocate an IP address." msgstr "" #: ../config-ipam.rst:32 msgid "" "To enable the pluggable implementation, you must specify the driver to use " "in the ``neutron.conf`` file. The ``internal`` driver refers to the " "reference implementation." msgstr "" #: ../config-ipam.rst:40 msgid "" "The documentation for any alternate drivers will include the value to use " "when specifying that driver." msgstr "" #: ../config-ipam.rst:46 msgid "" "The driver interface is designed to allow separate drivers for each subnet " "pool. However, the current implementation allows only a single IPAM driver " "system-wide." msgstr "" #: ../config-ipam.rst:49 msgid "" "Database migrations are not available to convert existing OpenStack " "installations to the new reference implementation of the pluggable IPAM. " "This migration is planned for the Mitaka release." msgstr "" #: ../config-ipam.rst:52 msgid "" "Third-party drivers must provide their own migration mechanisms to convert " "existing OpenStack installations to their IPAM." msgstr "" #: ../config-ipv6.rst:7 msgid "Scope:" msgstr "" #: ../config-ipv6.rst:9 msgid "How to enable dual-stack (IPv4 and IPv6 enabled) instances." msgstr "" #: ../config-ipv6.rst:10 msgid "How those instances receive an IPv6 address." msgstr "" #: ../config-ipv6.rst:11 msgid "" "How those instances communicate across a router to other subnets or the " "internet." msgstr "" #: ../config-ipv6.rst:13 msgid "How those instances interact with other OpenStack services." msgstr "" #: ../config-ipv6.rst:15 msgid "" "To enable a dual-stack network in OpenStack Networking simply requires " "creating a subnet with the ``ip_version`` field set to ``6``, then the IPv6 " "attributes (``ipv6_ra_mode`` and ``ipv6_address_mode``) set. The " "``ipv6_ra_mode`` and ``ipv6_address_mode`` will be described in detail in " "the next section. Finally, the subnets ``cidr`` needs to be provided." msgstr "" #: ../config-ipv6.rst:22 msgid "Not in scope" msgstr "" #: ../config-ipv6.rst:24 msgid "Things not in the scope of this document include:" msgstr "" #: ../config-ipv6.rst:26 msgid "Single stack IPv6 tenant networking" msgstr "" #: ../config-ipv6.rst:27 msgid "" "OpenStack control communication between servers and services over an IPv6 " "network." msgstr "" #: ../config-ipv6.rst:29 msgid "Connection to the OpenStack APIs via an IPv6 transport network" msgstr "" #: ../config-ipv6.rst:30 msgid "IPv6 multicast" msgstr "" #: ../config-ipv6.rst:31 msgid "" "IPv6 support in conjunction with any out of tree routers, switches, services " "or agents whether in physical or virtual form factors." msgstr "" #: ../config-ipv6.rst:36 msgid "Neutron subnets and the IPv6 API attributes" msgstr "" #: ../config-ipv6.rst:38 msgid "" "As of Juno, the OpenStack Networking service (neutron) provides two new " "attributes to the subnet object, which allows users of the API to configure " "IPv6 subnets." msgstr "" #: ../config-ipv6.rst:42 msgid "There are two IPv6 attributes:" msgstr "" #: ../config-ipv6.rst:44 msgid "``ipv6_ra_mode``" msgstr "" #: ../config-ipv6.rst:45 msgid "``ipv6_address_mode``" msgstr "" #: ../config-ipv6.rst:47 msgid "These attributes can be set to the following values:" msgstr "" #: ../config-ipv6.rst:49 msgid "``slaac``" msgstr "" #: ../config-ipv6.rst:50 msgid "``dhcpv6-stateful``" msgstr "" #: ../config-ipv6.rst:51 msgid "``dhcpv6-stateless``" msgstr "" #: ../config-ipv6.rst:53 msgid "The attributes can also be left unset." msgstr "" #: ../config-ipv6.rst:57 msgid "IPv6 addressing" msgstr "" #: ../config-ipv6.rst:59 msgid "" "The ``ipv6_address_mode`` attribute is used to control how addressing is " "handled by OpenStack. There are a number of different ways that guest " "instances can obtain an IPv6 address, and this attribute exposes these " "choices to users of the Networking API." msgstr "" #: ../config-ipv6.rst:66 msgid "Router advertisements" msgstr "" #: ../config-ipv6.rst:68 msgid "" "The ``ipv6_ra_mode`` attribute is used to control router advertisements for " "a subnet." msgstr "" #: ../config-ipv6.rst:71 msgid "" "The IPv6 Protocol uses Internet Control Message Protocol packets (ICMPv6) as " "a way to distribute information about networking. ICMPv6 packets with the " "type flag set to 134 are called \"Router Advertisement\" packets, which " "broadcasts information about the router and the route that can be used by " "guest instances to send network traffic." msgstr "" #: ../config-ipv6.rst:78 msgid "" "The ``ipv6_ra_mode`` is used to specify if the Networking service should " "transmit ICMPv6 packets, for a subnet." msgstr "" #: ../config-ipv6.rst:82 msgid "ipv6_ra_mode and ipv6_address_mode combinations" msgstr "" #: ../config-ipv6.rst:88 msgid "ipv6 ra mode" msgstr "" #: ../config-ipv6.rst:89 msgid "ipv6 address mode" msgstr "" #: ../config-ipv6.rst:90 msgid "radvd A,M,O" msgstr "" #: ../config-ipv6.rst:91 msgid "External Router A,M,O" msgstr "" #: ../config-ipv6.rst:93 ../config-ipv6.rst:94 ../config-ipv6.rst:98 #: ../config-ipv6.rst:103 ../config-ipv6.rst:108 ../config-ipv6.rst:114 #: ../config-ipv6.rst:119 ../config-ipv6.rst:124 msgid "*N/S*" msgstr "" #: ../config-ipv6.rst:95 ../config-ipv6.rst:100 ../config-ipv6.rst:105 #: ../config-ipv6.rst:110 ../config-ipv6.rst:116 ../config-ipv6.rst:121 #: ../config-ipv6.rst:126 ../config-ipv6.rst:131 ../config-ipv6.rst:136 #: ../config-ipv6.rst:142 msgid "Off" msgstr "" #: ../config-ipv6.rst:96 msgid "Not Defined" msgstr "" #: ../config-ipv6.rst:97 msgid "Backwards compatibility with pre-Juno IPv6 behavior." msgstr "" #: ../config-ipv6.rst:99 ../config-ipv6.rst:113 ../config-ipv6.rst:128 #: ../config-ipv6.rst:129 ../config-ipv6.rst:146 ../config-ipv6.rst:151 #: ../config-ipv6.rst:157 ../config-ipv6.rst:167 msgid "slaac" msgstr "" #: ../config-ipv6.rst:101 ../config-ipv6.rst:115 ../config-ipv6.rst:130 msgid "1,0,0" msgstr "" #: ../config-ipv6.rst:102 msgid "" "Guest instance obtains IPv6 address from non-OpenStack router using SLAAC." msgstr "" #: ../config-ipv6.rst:104 ../config-ipv6.rst:118 ../config-ipv6.rst:133 #: ../config-ipv6.rst:134 ../config-ipv6.rst:147 ../config-ipv6.rst:156 #: ../config-ipv6.rst:161 ../config-ipv6.rst:172 msgid "dhcpv6-stateful" msgstr "" #: ../config-ipv6.rst:106 ../config-ipv6.rst:120 ../config-ipv6.rst:135 msgid "0,1,1" msgstr "" #: ../config-ipv6.rst:107 ../config-ipv6.rst:112 ../config-ipv6.rst:117 #: ../config-ipv6.rst:122 ../config-ipv6.rst:127 msgid "Not currently implemented in the reference implementation." msgstr "" #: ../config-ipv6.rst:109 ../config-ipv6.rst:123 ../config-ipv6.rst:139 #: ../config-ipv6.rst:140 ../config-ipv6.rst:152 ../config-ipv6.rst:162 #: ../config-ipv6.rst:166 ../config-ipv6.rst:171 msgid "dhcpv6-stateless" msgstr "" #: ../config-ipv6.rst:111 ../config-ipv6.rst:125 ../config-ipv6.rst:141 msgid "1,0,1" msgstr "" #: ../config-ipv6.rst:132 msgid "" "Guest instance obtains IPv6 address from OpenStack managed radvd using SLAAC." msgstr "" #: ../config-ipv6.rst:137 msgid "" "Guest instance obtains IPv6 address from dnsmasq using DHCPv6 stateful and " "optional info from dnsmasq using DHCPv6." msgstr "" #: ../config-ipv6.rst:143 msgid "" "Guest instance obtains IPv6 address from OpenStack managed radvd using SLAAC " "and optional info from dnsmasq using DHCPv6." msgstr "" #: ../config-ipv6.rst:150 ../config-ipv6.rst:155 ../config-ipv6.rst:160 #: ../config-ipv6.rst:165 ../config-ipv6.rst:170 ../config-ipv6.rst:175 msgid "*Invalid combination.*" msgstr "" #: ../config-ipv6.rst:178 msgid "Tenant network considerations" msgstr "" #: ../config-ipv6.rst:181 msgid "Dataplane" msgstr "" #: ../config-ipv6.rst:183 msgid "" "Both the Linux bridge and the Open vSwitch dataplane modules support " "forwarding IPv6 packets amongst the guests and router ports. Similar to " "IPv4, there is no special configuration or setup required to enable the " "dataplane to properly forward packets from the source to the destination " "using IPv6. Note that these dataplanes will forward Link-local Address (LLA) " "packets between hosts on the same network just fine without any " "participation or setup by OpenStack components after the ports are all " "connected and MAC addresses learned." msgstr "" #: ../config-ipv6.rst:193 msgid "Addresses for subnets" msgstr "" #: ../config-ipv6.rst:195 msgid "There are four methods for a subnet to get its ``cidr`` in OpenStack:" msgstr "" #: ../config-ipv6.rst:197 msgid "Direct assignment during subnet creation via command line or Horizon" msgstr "" #: ../config-ipv6.rst:198 msgid "Referencing a subnet pool during subnet creation" msgstr "" #: ../config-ipv6.rst:200 msgid "" "In the future, different techniques could be used to allocate subnets to " "tenants:" msgstr "" #: ../config-ipv6.rst:203 msgid "Using a PD client to request a prefix for a subnet from a PD server" msgstr "" #: ../config-ipv6.rst:204 msgid "Use of an external IPAM module to allocate the subnet" msgstr "" #: ../config-ipv6.rst:207 msgid "Address modes for ports" msgstr "" #: ../config-ipv6.rst:211 msgid "" "That an external DHCPv6 server in theory could override the full address " "OpenStack assigns based on the EUI-64 address, but that would not be wise as " "it would not be consistent through the system." msgstr "" #: ../config-ipv6.rst:215 msgid "" "IPv6 supports three different addressing schemes for address configuration " "and for providing optional network information." msgstr "" #: ../config-ipv6.rst:219 msgid "Address configuration using Router Advertisement (RA)." msgstr "" #: ../config-ipv6.rst:219 msgid "Stateless Address Auto Configuration (SLAAC)" msgstr "" #: ../config-ipv6.rst:222 msgid "Address configuration using RA and optional information using DHCPv6." msgstr "" #: ../config-ipv6.rst:223 ../config-ipv6.rst:296 ../config-ipv6.rst:297 msgid "DHCPv6-stateless" msgstr "" #: ../config-ipv6.rst:226 msgid "Address configuration and optional information using DHCPv6." msgstr "" #: ../config-ipv6.rst:226 ../config-ipv6.rst:300 ../config-ipv6.rst:301 msgid "DHCPv6-stateful" msgstr "" #: ../config-ipv6.rst:228 msgid "" "OpenStack can be setup such that OpenStack Networking directly provides RA, " "DHCP relay and DHCPv6 address and optional information for their networks or " "this can be delegated to external routers and services based on the drivers " "that are in use. There are two neutron subnet attributes - ``ipv6_ra_mode`` " "and ``ipv6_address_mode`` – that determine how IPv6 addressing and network " "information is provided to tenant instances:" msgstr "" #: ../config-ipv6.rst:236 msgid "``ipv6_ra_mode``: Determines who sends RA." msgstr "" #: ../config-ipv6.rst:237 msgid "" "``ipv6_address_mode``: Determines how instances obtain IPv6 address, default " "gateway, or optional information." msgstr "" #: ../config-ipv6.rst:240 msgid "" "For the above two attributes to be effective, ``enable_dhcp`` of the subnet " "object must be set to True." msgstr "" #: ../config-ipv6.rst:244 msgid "Using SLAAC for addressing" msgstr "" #: ../config-ipv6.rst:246 msgid "" "When using SLAAC, the currently supported combinations for ``ipv6_ra_mode`` " "and ``ipv6_address_mode`` are as follows." msgstr "" #: ../config-ipv6.rst:253 ../config-ipv6.rst:293 msgid "ipv6_ra_mode" msgstr "" #: ../config-ipv6.rst:254 ../config-ipv6.rst:294 msgid "ipv6_address_mode" msgstr "" #: ../config-ipv6.rst:255 ../config-ipv6.rst:295 msgid "Result" msgstr "" #: ../config-ipv6.rst:256 msgid "Not specified." msgstr "" #: ../config-ipv6.rst:257 ../config-ipv6.rst:260 ../config-ipv6.rst:261 msgid "SLAAC" msgstr "" #: ../config-ipv6.rst:258 msgid "" "Addresses are assigned using EUI-64, and an external router will be used for " "routing." msgstr "" #: ../config-ipv6.rst:262 msgid "" "Address are assigned using EUI-64, and OpenStack Networking provides routing." msgstr "" #: ../config-ipv6.rst:265 msgid "" "Setting ``ipv6_ra_mode`` to ``slaac`` will result in OpenStack Networking " "routers being configured to send RA packets, when they are created. This " "results in the following values set for the address configuration flags in " "the RA messages:" msgstr "" #: ../config-ipv6.rst:270 ../config-ipv6.rst:311 msgid "Auto Configuration Flag = 1" msgstr "" #: ../config-ipv6.rst:271 ../config-ipv6.rst:312 msgid "Managed Configuration Flag = 0" msgstr "" #: ../config-ipv6.rst:272 msgid "Other Configuration Flag = 0" msgstr "" #: ../config-ipv6.rst:274 msgid "" "New or existing Neutron networks that contain a SLAAC enabled IPv6 subnet " "will result in all neutron ports attached to the network receiving IPv6 " "addresses. This is because when RA broadcast messages are sent out on a " "neutron network, they are received by all IPv6 capable ports on the network, " "and each port will then configure an IPv6 address based on the information " "contained in the RA packet. In some cases, an IPv6 SLAAC address will be " "added to a port, in addition to other IPv4 and IPv6 addresses that the port " "already has been assigned." msgstr "" #: ../config-ipv6.rst:284 msgid "DHCPv6" msgstr "" #: ../config-ipv6.rst:286 msgid "" "For DHCPv6-stateless, the currently supported combinations are as follows:" msgstr "" #: ../config-ipv6.rst:298 msgid "" "Address and optional information using neutron router and DHCP " "implementation respectively." msgstr "" #: ../config-ipv6.rst:302 msgid "Addresses and optional information are assigned using DHCPv6." msgstr "" #: ../config-ipv6.rst:304 msgid "" "Setting DHCPv6-stateless for ``ipv6_ra_mode`` configures the neutron router " "with radvd agent to send RAs. The table below captures the values set for " "the address configuration flags in the RA packet in this scenario. " "Similarly, setting DHCPv6-stateless for ``ipv6_address_mode`` configures " "neutron DHCP implementation to provide the additional network information." msgstr "" #: ../config-ipv6.rst:313 msgid "Other Configuration Flag = 1" msgstr "" #: ../config-ipv6.rst:316 msgid "Router support" msgstr "" #: ../config-ipv6.rst:318 msgid "" "The behavior of the neutron router for IPv6 is different than IPv4 in a few " "ways." msgstr "" #: ../config-ipv6.rst:321 msgid "" "Internal router ports, that act as default gateway ports for a network, will " "share a common port for all IPv6 subnets associated with the network. This " "implies that there will be an IPv6 internal router interface with multiple " "IPv6 addresses from each of the IPv6 subnets associated with the network and " "a separate IPv4 internal router interface for the IPv4 subnet. On the other " "hand, external router ports are allowed to have a dual-stack configuration " "with both an IPv4 and an IPv6 address assigned to them." msgstr "" #: ../config-ipv6.rst:329 msgid "" "Neutron tenant networks that are assigned Global Unicast Address (GUA) " "prefixes and addresses don’t require NAT on the neutron router external " "gateway port to access the outside world. As a consequence of the lack of " "NAT the external router port doesn’t require a GUA to send and receive to " "the external networks. This implies a GUA IPv6 subnet prefix is not " "necessarily needed for the neutron external network. By default, a IPv6 LLA " "associated with the external gateway port can be used for routing purposes. " "To handle this scenario, the implementation of router-gateway-set API in " "neutron has been modified so that an IPv6 subnet is not required for the " "external network that is associated with the neutron router. The LLA address " "of the upstream router can be learned in two ways." msgstr "" #: ../config-ipv6.rst:341 msgid "" "In the absence of an upstream RA support, ``ipv6_gateway`` flag can be set " "with the external router gateway LLA in the neutron L3 agent configuration " "file. This also requires that no subnet is associated with that port." msgstr "" #: ../config-ipv6.rst:344 msgid "" "The upstream router can send an RA and the neutron router will automatically " "learn the next-hop LLA, provided again that no subnet is assigned and the " "``ipv6_gateway`` flag is not set." msgstr "" #: ../config-ipv6.rst:348 msgid "" "Effectively the ``ipv6_gateway`` flag takes precedence over an RA that is " "received from the upstream router. If it is desired to use a GUA next hop " "that is accomplished by allocating a subnet to the external router port and " "assigning the upstream routers GUA address as the gateway for the subnet." msgstr "" #: ../config-ipv6.rst:356 msgid "" "That it should be possible for tenants to communicate with each other on an " "isolated network (a network without a router port) using LLA with little to " "no participation on the part of OpenStack. The authors of this section have " "not proven that to be true for all scenarios." msgstr "" #: ../config-ipv6.rst:362 msgid "Neutron's Distributed Router feature and IPv6" msgstr "" #: ../config-ipv6.rst:364 msgid "" "IPv6 does work when the Distributed Virtual Router functionality is enabled, " "but all ingress/egress traffic is via the centralized router (hence, not " "distributed). More work is required to fully enable this functionality." msgstr "" #: ../config-ipv6.rst:370 msgid "Advanced services" msgstr "" # #-#-#-#-# config-ipv6.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# intro-os-networking.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-ipv6.rst:373 ../intro-os-networking.rst:331 msgid "VPNaaS" msgstr "" #: ../config-ipv6.rst:375 msgid "" "VPNaaS supports IPv6, but support in Kilo and prior releases will have some " "bugs that may limit how it can be used. More thorough and complete testing " "and bug fixing is being done as part of the Liberty release. IPv6-based VPN-" "as-a-Service is configured similar to the IPv4 configuration. Either or both " "the ``peer_address`` and the ``peer_cidr`` can specified as an IPv6 address. " "The choice of addressing modes and router modes described above should not " "impact support." msgstr "" # #-#-#-#-# config-ipv6.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# intro-os-networking.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-ipv6.rst:386 ../intro-os-networking.rst:337 msgid "LBaaS" msgstr "" #: ../config-ipv6.rst:388 msgid "TODO" msgstr "" # #-#-#-#-# config-ipv6.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# intro-os-networking.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-ipv6.rst:391 ../intro-os-networking.rst:344 msgid "FWaaS" msgstr "" #: ../config-ipv6.rst:393 msgid "FWaaS allows creation of IPv6 based rules." msgstr "" #: ../config-ipv6.rst:396 msgid "NAT & Floating IPs" msgstr "" #: ../config-ipv6.rst:398 msgid "" "At the current time OpenStack Networking does not provide any facility to " "support any flavor of NAT with IPv6. Unlike IPv4 there is no current " "embedded support for floating IPs with IPv6. It is assumed that the IPv6 " "addressing amongst the tenants are using GUAs with no overlap across the " "tenants." msgstr "" #: ../config-ipv6.rst:405 msgid "Security considerations" msgstr "" #: ../config-ipv6.rst:411 msgid "Configuring interfaces of the guest" msgstr "" #: ../config-ipv6.rst:413 msgid "" "OpenStack currently doesn't support the privacy extensions defined by RFC " "4941. The interface identifier and DUID used must be directly derived from " "the MAC as described in RFC 2373. The compute hosts must not be setup to " "utilize the privacy extensions when generating their interface identifier." msgstr "" #: ../config-ipv6.rst:418 msgid "" "There is no provisions for an IPv6-based metadata service similar to what is " "provided for IPv4. In the case of dual stack Guests though it is always " "possible to use the IPv4 metadata service instead." msgstr "" #: ../config-ipv6.rst:422 msgid "" "Unlike IPv4 the MTU of a given network can be conveyed in the RA messages " "sent by the router and not in the DHCP messages." msgstr "" #: ../config-ipv6.rst:426 msgid "OpenStack control & management network considerations" msgstr "" #: ../config-ipv6.rst:428 msgid "" "As of the Kilo release, considerable effort has gone in to ensuring the " "tenant network can handle dual stack IPv6 and IPv4 transport across the " "variety of configurations describe above. This same level of scrutiny has " "not been apply to running the OpenStack control network in a dual stack " "configuration. Similarly, little scrutiny has gone into ensuring that the " "OpenStack API endpoints can be accessed via an IPv6 network. At this time, " "Open vSwitch (OVS) tunnel types - STT, VXLAN, GRE, only support IPv4 " "endpoints, not IPv6, so a full IPv6-only deployment is not possible with " "that technology." msgstr "" #: ../config-ipv6.rst:440 msgid "Prefix delegation" msgstr "" #: ../config-ipv6.rst:442 msgid "" "From the Liberty release onwards, OpenStack Networking supports IPv6 prefix " "delegation. This section describes the configuration and workflow steps " "necessary to use IPv6 prefix delegation to provide automatic allocation of " "subnet CIDRs. This allows you as the OpenStack administrator to rely on an " "external (to the OpenStack Networking service) DHCPv6 server to manage your " "tenant network prefixes." msgstr "" #: ../config-ipv6.rst:451 msgid "" "Prefix delegation became available in the Liberty release, it is not " "available in the Kilo release. HA and DVR routers are not currently " "supported by this feature." msgstr "" #: ../config-ipv6.rst:456 msgid "Configuring OpenStack Networking for prefix delegation" msgstr "" #: ../config-ipv6.rst:458 msgid "" "To enable prefix delegation, edit the ``/etc/neutron/neutron.conf`` file. If " "you are running OpenStack Liberty, make the following change:" msgstr "" #: ../config-ipv6.rst:465 msgid "Otherwise if you are running OpenStack Mitaka, make this change:" msgstr "" #: ../config-ipv6.rst:473 msgid "" "If you are not using the default dibbler-based driver for prefix delegation, " "then you also need to set the driver in ``/etc/neutron/neutron.conf``:" msgstr "" #: ../config-ipv6.rst:481 msgid "" "Drivers other than the default one may require extra configuration, please " "refer to :ref:`extra-driver-conf`" msgstr "" #: ../config-ipv6.rst:484 msgid "" "This tells OpenStack Networking to use the prefix delegation mechanism for " "subnet allocation when the user does not provide a CIDR or subnet pool id " "when creating a subnet." msgstr "" #: ../config-ipv6.rst:489 msgid "Requirements" msgstr "" #: ../config-ipv6.rst:491 msgid "" "To use this feature, you need a prefix delegation capable DHCPv6 server that " "is reachable from your OpenStack Networking node(s). This could be software " "running on the OpenStack Networking node(s) or elsewhere, or a physical " "router. For the purposes of this guide we are using the open-source DHCPv6 " "server, Dibbler. Dibbler is available in many Linux package managers, or " "from source at https://github.com/tomaszmrugalski/dibbler." msgstr "" #: ../config-ipv6.rst:498 msgid "" "When using the reference implementation of the OpenStack Networking prefix " "delegation driver, Dibbler must also be installed on your OpenStack " "Networking node(s) to serve as a DHCPv6 client. Version 1.0.1 or higher is " "required." msgstr "" #: ../config-ipv6.rst:502 msgid "" "This guide assumes that you are running a Dibbler server on the network node " "where the external network bridge exists. If you already have a prefix " "delegation capable DHCPv6 server in place, then you can skip the following " "section." msgstr "" #: ../config-ipv6.rst:508 msgid "Configuring the Dibbler server" msgstr "" #: ../config-ipv6.rst:510 msgid "After installing Dibbler, edit the ``/etc/dibbler/server.conf`` file:" msgstr "" #: ../config-ipv6.rst:523 msgid "The options used in the configuration file above are:" msgstr "" #: ../config-ipv6.rst:525 msgid "" "``script`` Points to a script to be run when a prefix is delegated or " "released. This is only needed if you want instances on your subnets to have " "external network access. More on this below." msgstr "" #: ../config-ipv6.rst:529 msgid "" "``iface`` The name of the network interface on which to listen for prefix " "delegation messages." msgstr "" #: ../config-ipv6.rst:532 msgid "" "``pd-pool`` The larger prefix from which you want your delegated prefixes to " "come. The example given is sufficient if you do not need external network " "access, otherwise a unique globally routable prefix is necessary." msgstr "" #: ../config-ipv6.rst:537 msgid "" "``pd-length`` The length that delegated prefixes will be. This must be 64 to " "work with the current OpenStack Networking reference implementation." msgstr "" #: ../config-ipv6.rst:541 msgid "" "To provide external network access to your instances, your Dibbler server " "also needs to create new routes for each delegated prefix. This is done " "using the script file named in the config file above. Edit the ``/var/lib/" "dibbler/pd-server.sh`` file:" msgstr "" #: ../config-ipv6.rst:557 msgid "The variables used in the script file above are:" msgstr "" #: ../config-ipv6.rst:559 msgid "``$PREFIX1`` The prefix being added/deleted by the Dibbler server." msgstr "" #: ../config-ipv6.rst:561 msgid "``$1`` The operation being performed." msgstr "" #: ../config-ipv6.rst:563 msgid "``$REMOTE_ADDR`` The IP address of the requesting Dibbler client." msgstr "" #: ../config-ipv6.rst:565 msgid "``$IFACE`` The network interface upon which the request was received." msgstr "" #: ../config-ipv6.rst:568 msgid "" "The above is all you need in this scenario, but more information on " "installing, configuring, and running Dibbler is available in the Dibbler " "user guide, at http://klub.com.pl/dhcpv6/doc/dibbler-user.pdf." msgstr "" #: ../config-ipv6.rst:572 msgid "To start your Dibbler server, run:" msgstr "" #: ../config-ipv6.rst:578 msgid "Or to run in headless mode:" msgstr "" #: ../config-ipv6.rst:584 msgid "" "When using DevStack, it is important to start your server after the ``stack." "sh`` script has finished to ensure that the required network interfaces have " "been created." msgstr "" # #-#-#-#-# config-ipv6.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-qos.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# ops-resource-tags.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-ipv6.rst:589 ../config-qos.rst:146 ../ops-resource-tags.rst:93 msgid "User workflow" msgstr "" #: ../config-ipv6.rst:591 msgid "First, create a network and IPv6 subnet:" msgstr "" #: ../config-ipv6.rst:635 msgid "" "The subnet is initially created with a temporary CIDR before one can be " "assigned by prefix delegation. Any number of subnets with this temporary " "CIDR can exist without raising an overlap error. The subnetpool_id is " "automatically set to ``prefix_delegation``." msgstr "" #: ../config-ipv6.rst:640 msgid "" "To trigger the prefix delegation process, create a router interface between " "this subnet and a router with an active interface on the external network:" msgstr "" #: ../config-ipv6.rst:650 msgid "" "The prefix delegation mechanism then sends a request via the external " "network to your prefix delegation server, which replies with the delegated " "prefix. The subnet is then updated with the new prefix, including issuing " "new IP addresses to all ports:" msgstr "" #: ../config-ipv6.rst:679 msgid "" "If the prefix delegation server is configured to delegate globally routable " "prefixes and setup routes, then any instance with a port on this subnet " "should now have external network access." msgstr "" #: ../config-ipv6.rst:683 msgid "" "Deleting the router interface causes the subnet to be reverted to the " "temporary CIDR, and all ports have their IPs updated. Prefix leases are " "released and renewed automatically as necessary." msgstr "" #: ../config-ipv6.rst:688 msgid "References" msgstr "" #: ../config-ipv6.rst:690 msgid "" "The following link provides a great step by step tutorial on setting up IPv6 " "with OpenStack: http://www.debug-all.com/?p=52" msgstr "" #: ../config-ipv6.rst:696 msgid "Extra configuration" msgstr "" #: ../config-ipv6.rst:699 msgid "Neutron dhcpv6_pd_agent" msgstr "" #: ../config-ipv6.rst:701 msgid "" "To enable the driver for the dhcpv6_pd_agent, set pd_dhcp_driver to this in " "``/etc/neutron/neutron.conf``:" msgstr "" #: ../config-ipv6.rst:708 msgid "" "To allow the neutron-pd-agent to communicate with prefix delegation servers, " "you must set which network interface to use for external communication. In " "DevStack the default for this is ``br-ex``:" msgstr "" #: ../config-ipv6.rst:716 msgid "" "Once you have stacked run the command below to start the neutron-pd-agent:" msgstr "" #: ../config-lbaas.rst:5 msgid "Load Balancer as a Service (LBaaS)" msgstr "" #: ../config-lbaas.rst:7 msgid "" "The Networking service offers two load balancer implementations through the " "``neutron-lbaas`` service plug-in:" msgstr "" #: ../config-lbaas.rst:10 msgid "LBaaS v1: introduced in Juno (deprecated in Liberty)" msgstr "" #: ../config-lbaas.rst:11 msgid "LBaaS v2: introduced in Kilo" msgstr "" #: ../config-lbaas.rst:13 msgid "" "Both implementations use agents. The agents handle the HAProxy configuration " "and manage the HAProxy daemon. LBaaS v2 adds the concept of listeners to the " "LBaaS v1 load balancers. LBaaS v2 allows you to configure multiple listener " "ports on a single load balancer IP address." msgstr "" #: ../config-lbaas.rst:18 msgid "" "Another LBaaS v2 implementation, `Octavia `_, has a separate API and separate worker processes that " "build load balancers within virtual machines on hypervisors that are managed " "by the Compute service. You do not need an agent for Octavia." msgstr "" #: ../config-lbaas.rst:24 msgid "" "Currently, no migration path exists between v1 and v2 load balancers. If you " "choose to switch from v1 to v2, you must recreate all load balancers, pools, " "and health monitors." msgstr "" #: ../config-lbaas.rst:29 msgid "LBaaS v1" msgstr "" #: ../config-lbaas.rst:31 msgid "" "LBaaS v1 is deprecated in the Liberty release. These links provide more " "details about how LBaaS v1 works and how to configure it:" msgstr "" #: ../config-lbaas.rst:34 msgid "" "`Load-Balancer-as-a-Service (LBaaS) overview `__" msgstr "" #: ../config-lbaas.rst:35 msgid "" "`Basic Load-Balancer-as-a-Service operations `__" msgstr "" #: ../config-lbaas.rst:38 msgid "LBaaS v2" msgstr "" #: ../config-lbaas.rst:40 msgid "LBaaS v2 has several new concepts to understand:" msgstr "" #: ../config-lbaas.rst:46 msgid "" "The load balancer occupies a neutron network port and has an IP address " "assigned from a subnet." msgstr "" #: ../config-lbaas.rst:47 msgid "Load balancer" msgstr "" #: ../config-lbaas.rst:50 msgid "" "Load balancers can listen for requests on multiple ports. Each one of those " "ports is specified by a listener." msgstr "" #: ../config-lbaas.rst:51 msgid "Listener" msgstr "" #: ../config-lbaas.rst:54 msgid "" "A pool holds a list of members that serve content through the load balancer." msgstr "" #: ../config-lbaas.rst:54 msgid "Pool" msgstr "" #: ../config-lbaas.rst:57 msgid "" "Members are servers that serve traffic behind a load balancer. Each member " "is specified by the IP address and port that it uses to serve traffic." msgstr "" #: ../config-lbaas.rst:58 msgid "Member" msgstr "" #: ../config-lbaas.rst:61 msgid "" "Members may go offline from time to time and health monitors divert traffic " "away from members that are not responding properly. Health monitors are " "associated with pools." msgstr "" #: ../config-lbaas.rst:63 msgid "Health monitor" msgstr "" #: ../config-lbaas.rst:65 msgid "" "LBaaS v2 has multiple implementations via different service plug-ins. The " "two most common implementations use either an agent or the Octavia services. " "Both implementations use the `LBaaS v2 API `_." msgstr "" #: ../config-lbaas.rst:70 msgid "Configuring LBaaS v2 with an agent" msgstr "" #: ../config-lbaas.rst:72 ../config-lbaas.rst:140 msgid "" "Add the LBaaS v2 service plug-in to the ``service_plugins`` configuration " "directive in ``/etc/neutron/neutron.conf``. The plug-in list is comma-" "separated:" msgstr "" #: ../config-lbaas.rst:80 msgid "" "Add the LBaaS v2 service provider to the ``service_provider`` configuration " "directive within the ``[service_providers]`` section in ``/etc/neutron/" "neutron.conf``:" msgstr "" #: ../config-lbaas.rst:88 msgid "" "If you have existing service providers for other networking service plug-" "ins, such as VPNaaS or FWaaS, add the ``service_provider`` line shown above " "in the ``[service_providers]`` section as a separate line. These " "configuration directives are repeatable and are not comma-separated." msgstr "" #: ../config-lbaas.rst:93 msgid "" "Select the driver that manages virtual interfaces in ``/etc/neutron/" "lbaas_agent.ini``:" msgstr "" #: ../config-lbaas.rst:101 msgid "" "Replace ``INTERFACE_DRIVER`` with the interface driver that the layer-2 " "agent in your environment uses. For example, ``openvswitch`` for Open " "vSwitch or ``linuxbridge`` for Linux bridge." msgstr "" #: ../config-lbaas.rst:105 msgid "Run the ``neutron-lbaas`` database migration:" msgstr "" #: ../config-lbaas.rst:111 msgid "" "If you have deployed LBaaS v1, **stop the LBaaS v1 agent now**. The v1 and " "v2 agents **cannot** run simultaneously." msgstr "" #: ../config-lbaas.rst:114 msgid "Start the LBaaS v2 agent:" msgstr "" #: ../config-lbaas.rst:122 msgid "" "Restart the Network service to activate the new configuration. You are now " "ready to create load balancers with the LBaaS v2 agent." msgstr "" #: ../config-lbaas.rst:126 msgid "Configuring LBaaS v2 with Octavia" msgstr "" #: ../config-lbaas.rst:128 msgid "" "Octavia provides additional capabilities for load balancers, including using " "a compute driver to build instances that operate as load balancers. The " "`Hands on Lab - Install and Configure OpenStack Octavia `_ session at the OpenStack " "Summit in Tokyo provides an overview of Octavia." msgstr "" #: ../config-lbaas.rst:134 msgid "" "The DevStack documentation offers a `simple method to deploy Octavia `_ " "and test the service with redundant load balancer instances. If you already " "have Octavia installed and configured within your environment, you can " "configure the Network service to use Octavia:" msgstr "" #: ../config-lbaas.rst:148 msgid "" "Add the Octavia service provider to the ``service_provider`` configuration " "directive within the ``[service_providers]`` section in ``/etc/neutron/" "neutron.conf``:" msgstr "" #: ../config-lbaas.rst:156 msgid "" "Ensure that the LBaaS v1 and v2 service providers are removed from the " "``[service_providers]`` section. They are not used with Octavia. **Verify " "that all LBaaS agents are stopped.**" msgstr "" #: ../config-lbaas.rst:160 msgid "" "Restart the Network service to activate the new configuration. You are now " "ready to create and manage load balancers with Octavia." msgstr "" #: ../config-lbaas.rst:164 msgid "Add LBaaS panels to Dashboard" msgstr "" #: ../config-lbaas.rst:166 msgid "" "The Dashboard panels for managing LBaaS v2 are available starting with the " "Mitaka release." msgstr "" #: ../config-lbaas.rst:169 msgid "" "Clone the `neutron-lbaas-dashboard repository `__ and check out the release branch that " "matches the installed version of Dashboard:" msgstr "" #: ../config-lbaas.rst:180 msgid "Install the Dashboard panel plug-in:" msgstr "" #: ../config-lbaas.rst:186 msgid "" "Copy the ``_1481_project_ng_loadbalancersv2_panel.py`` file from the " "``neutron-lbaas-dashboard/enabled`` directory into the Dashboard " "``openstack_dashboard/local/enabled`` directory." msgstr "" #: ../config-lbaas.rst:190 msgid "" "This step ensures that Dashboard can find the plug-in when it enumerates all " "of its available panels." msgstr "" #: ../config-lbaas.rst:193 msgid "" "Enable the plug-in in Dashboard by editing the ``local_settings.py`` file " "and setting ``enable_lb`` to ``True`` in the ``OPENSTACK_NEUTRON_NETWORK`` " "dictionary." msgstr "" #: ../config-lbaas.rst:197 msgid "" "If Dashboard is configured to compress static files for better performance " "(usually set through ``COMPRESS_OFFLINE`` in ``local_settings.py``), " "optimize the static files again:" msgstr "" #: ../config-lbaas.rst:206 msgid "Restart Apache to activate the new panel:" msgstr "" #: ../config-lbaas.rst:212 msgid "" "To find the panel, click on :guilabel:`Project` in Dashboard, then click " "the :guilabel:`Network` drop-down menu and select :guilabel:`Load Balancers`." msgstr "" #: ../config-lbaas.rst:216 msgid "LBaaS v2 operations" msgstr "" #: ../config-lbaas.rst:218 msgid "" "The same neutron commands are used for LBaaS v2 with an agent or with " "Octavia." msgstr "" #: ../config-lbaas.rst:221 msgid "Building an LBaaS v2 load balancer" msgstr "" #: ../config-lbaas.rst:223 msgid "" "Start by creating a load balancer on a network. In this example, the " "``private`` network is an isolated network with two web server instances:" msgstr "" #: ../config-lbaas.rst:230 msgid "" "You can view the load balancer status and IP address with the ``lbaas-" "loadbalancer-show`` command:" msgstr "" #: ../config-lbaas.rst:254 msgid "" "Update the security group to allow traffic to reach the new load balancer. " "Create a new security group along with ingress rules to allow traffic into " "the new load balancer. The neutron port for the load balancer is shown as " "``vip_port_id`` above." msgstr "" #: ../config-lbaas.rst:259 msgid "" "Create a security group and rules to allow TCP port 80, TCP port 443, and " "all ICMP traffic:" msgstr "" #: ../config-lbaas.rst:284 msgid "" "Apply the security group to the load balancer's network port using " "``vip_port_id`` from the :command:`lbaas-loadbalancer-show` command:" msgstr "" #: ../config-lbaas.rst:293 msgid "" "This load balancer is active and ready to serve traffic on ``192.168.1.22``." msgstr "" #: ../config-lbaas.rst:295 msgid "" "Verify that the load balancer is responding to pings before moving further:" msgstr "" #: ../config-lbaas.rst:311 msgid "Adding an HTTP listener" msgstr "" #: ../config-lbaas.rst:313 msgid "" "With the load balancer online, you can add a listener for plaintext HTTP " "traffic on port 80:" msgstr "" #: ../config-lbaas.rst:324 msgid "" "You can begin building a pool and adding members to the pool to serve HTTP " "content on port 80. For this example, the web servers are ``192.168.1.16`` " "and ``192.168.1.17``:" msgstr "" #: ../config-lbaas.rst:346 msgid "" "You can use ``curl`` to verify connectivity through the load balancers to " "your web servers:" msgstr "" #: ../config-lbaas.rst:360 msgid "" "In this example, the load balancer uses the round robin algorithm and the " "traffic alternates between the web servers on the backend." msgstr "" #: ../config-lbaas.rst:363 msgid "" "You can add a health monitor so that unresponsive servers are removed from " "the pool:" msgstr "" #: ../config-lbaas.rst:375 msgid "" "In this example, the health monitor removes the server from the pool if it " "fails a health check at two five-second intervals. When the server recovers " "and begins responding to health checks again, it is added to the pool once " "again." msgstr "" #: ../config-lbaas.rst:381 msgid "Adding an HTTPS listener" msgstr "" #: ../config-lbaas.rst:383 msgid "" "You can add another listener on port 443 for HTTPS traffic. LBaaS v2 offers " "SSL/TLS termination at the load balancer, but this example takes a simpler " "approach and allows encrypted connections to terminate at each member server." msgstr "" #: ../config-lbaas.rst:387 msgid "" "Start by creating a listener, attaching a pool, and then adding members:" msgstr "" #: ../config-lbaas.rst:412 msgid "You can also add a health monitor for the HTTPS pool:" msgstr "" #: ../config-lbaas.rst:423 msgid "The load balancer now handles traffic on ports 80 and 443." msgstr "" #: ../config-lbaas.rst:426 msgid "Associating a floating IP address" msgstr "" #: ../config-lbaas.rst:428 msgid "" "Load balancers that are deployed on a public or provider network that are " "accessible to external clients do not need a floating IP address assigned. " "External clients can directly access the virtual IP address (VIP) of those " "load balancers." msgstr "" #: ../config-lbaas.rst:433 msgid "" "However, load balancers deployed onto private or isolated networks need a " "floating IP address assigned if they must be accessible to external clients. " "To complete this step, you must have a router between the private and public " "networks and an available floating IP address." msgstr "" #: ../config-lbaas.rst:438 msgid "" "You can use the ``lbaas-loadbalancer-show`` command from the beginning of " "this section to locate the ``vip_port_id``. The ``vip_port_id`` is the ID of " "the network port that is assigned to the load balancer. You can associate a " "free floating IP address to the load balancer using ``floatingip-associate``:" msgstr "" #: ../config-lbaas.rst:448 msgid "Setting quotas for LBaaS v2" msgstr "" #: ../config-lbaas.rst:450 msgid "" "Quotas are available for limiting the number of load balancers and load " "balancer pools. By default, both quotas are set to 10." msgstr "" #: ../config-lbaas.rst:453 msgid "You can adjust quotas using the :command:`quota-update` command:" msgstr "" #: ../config-lbaas.rst:460 msgid "A setting of ``-1`` disables the quota for a tenant." msgstr "" #: ../config-lbaas.rst:463 msgid "Retrieving load balancer statistics" msgstr "" #: ../config-lbaas.rst:465 msgid "" "The LBaaS v2 agent collects four types of statistics for each load balancer " "every six seconds. Users can query these statistics with the :command:`lbaas-" "loadbalancer-stats` command:" msgstr "" #: ../config-lbaas.rst:481 msgid "" "The ``active_connections`` count is the total number of connections that " "were active at the time the agent polled the load balancer. The other three " "statistics are cumulative since the load balancer was last started. For " "example, if the load balancer restarts due to a system error or a " "configuration change, these statistics will be reset." msgstr "" #: ../config-macvtap.rst:5 msgid "Macvtap mechanism driver" msgstr "" #: ../config-macvtap.rst:7 msgid "" "The Macvtap mechanism driver for the ML2 plug-in generally increases network " "performance of instances." msgstr "" #: ../config-macvtap.rst:10 msgid "" "Consider the following attributes of this mechanism driver to determine " "practicality in your environment:" msgstr "" #: ../config-macvtap.rst:13 msgid "" "Supports only instance ports. Ports for DHCP and layer-3 (routing) services " "must use another mechanism driver such as Linux bridge or Open vSwitch (OVS)." msgstr "" #: ../config-macvtap.rst:17 msgid "Supports only untagged (flat) and tagged (VLAN) networks." msgstr "" #: ../config-macvtap.rst:19 msgid "" "Lacks support for security groups including basic (sanity) and anti-spoofing " "rules." msgstr "" #: ../config-macvtap.rst:22 msgid "" "Lacks support for layer-3 high-availability mechanisms such as Virtual " "Router Redundancy Protocol (VRRP) and Distributed Virtual Routing (DVR)." msgstr "" #: ../config-macvtap.rst:26 msgid "" "Only compute resources can be attached via macvtap. Attaching other " "resources like DHCP, Routers and others is not supported. Therefore run " "either OVS or linux bridge in VLAN or flat mode on the controller node." msgstr "" #: ../config-macvtap.rst:30 msgid "" "Instance migration requires the same values for the " "``physical_interface_mapping`` configuration option on each compute node. " "For more information, see ``_." msgstr "" # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-ovsfwdriver.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-macvtap.rst:36 ../config-ovsfwdriver.rst:23 #: ../deploy-lb-ha-vrrp.rst:18 ../deploy-lb-provider.rst:15 #: ../deploy-lb-selfservice.rst:19 ../deploy-ovs-ha-dvr.rst:32 #: ../deploy-ovs-ha-vrrp.rst:10 ../deploy-ovs-provider.rst:25 #: ../deploy-ovs-selfservice.rst:14 ../deploy.rst:29 msgid "Prerequisites" msgstr "" #: ../config-macvtap.rst:38 msgid "" "You can add this mechanism driver to an existing environment using either " "the Linux bridge or OVS mechanism drivers with only provider networks or " "provider and self-service networks. You can change the configuration of " "existing compute nodes or add compute nodes with the Macvtap mechanism " "driver. The example configuration assumes addition of compute nodes with the " "Macvtap mechanism driver to the :ref:`deploy-lb-selfservice` or :ref:`deploy-" "ovs-selfservice` deployment examples." msgstr "" #: ../config-macvtap.rst:46 msgid "Add one or more compute nodes with the following components:" msgstr "" # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-macvtap.rst:48 ../deploy-lb-ha-vrrp.rst:22 #: ../deploy-lb-selfservice.rst:23 ../deploy-ovs-ha-vrrp.rst:14 #: ../deploy-ovs-selfservice.rst:18 msgid "Three network interfaces: management, provider, and overlay." msgstr "" #: ../config-macvtap.rst:49 msgid "OpenStack Networking Macvtap layer-2 agent and any dependencies." msgstr "" #: ../config-macvtap.rst:53 msgid "" "To support integration with the deployment examples, this content configures " "the Macvtap mechanism driver to use the overlay network for untagged (flat) " "or tagged (VLAN) networks in addition to overlay networks such as VXLAN. " "Your physical network infrastructure must support VLAN (802.1q) tagging on " "the overlay network." msgstr "" # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-ml2.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-sfc.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-macvtap.rst:60 ../config-ml2.rst:8 ../config-sfc.rst:24 #: ../deploy-lb-ha-vrrp.rst:32 ../deploy-lb-provider.rst:37 #: ../deploy-lb-selfservice.rst:37 ../deploy-ovs-ha-dvr.rst:45 #: ../deploy-ovs-ha-vrrp.rst:24 ../deploy-ovs-provider.rst:47 #: ../deploy-ovs-selfservice.rst:32 msgid "Architecture" msgstr "" #: ../config-macvtap.rst:62 msgid "" "The Macvtap mechanism driver only applies to compute nodes. Otherwise, the " "environment resembles the prerequisite deployment example." msgstr "" #: ../config-macvtap.rst:74 msgid "" "Use the following example configuration as a template to add support for the " "Macvtap mechanism driver to an existing operational environment." msgstr "" # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# config-mtu.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-macvtap.rst:80 ../config-mtu.rst:51 ../config-mtu.rst:84 #: ../config-mtu.rst:108 ../deploy-lb-provider.rst:104 #: ../deploy-lb-selfservice.rst:72 ../deploy-ovs-provider.rst:115 #: ../deploy-ovs-selfservice.rst:66 msgid "In the ``ml2_conf.ini`` file:" msgstr "" #: ../config-macvtap.rst:82 msgid "Add ``macvtap`` to mechanism drivers." msgstr "" #: ../config-macvtap.rst:89 msgid "Configure network mappings." msgstr "" #: ../config-macvtap.rst:101 msgid "" "Use of ``macvtap`` is arbitrary. Only the self-service deployment examples " "require VLAN ID ranges. Replace ``VLAN_ID_START`` and ``VLAN_ID_END`` with " "appropriate numerical values." msgstr "" # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-macvtap.rst:105 ../deploy-lb-ha-vrrp.rst:66 #: ../deploy-lb-selfservice.rst:99 ../deploy-lb-selfservice.rst:167 #: ../deploy-ovs-ha-dvr.rst:78 ../deploy-ovs-ha-dvr.rst:105 #: ../deploy-ovs-ha-dvr.rst:122 ../deploy-ovs-ha-vrrp.rst:58 #: ../deploy-ovs-selfservice.rst:93 ../deploy-ovs-selfservice.rst:172 msgid "Restart the following services:" msgstr "" # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# intro-os-networking.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-macvtap.rst:107 ../deploy-lb-ha-vrrp.rst:68 #: ../deploy-lb-provider.rst:152 ../deploy-lb-selfservice.rst:101 #: ../deploy-ovs-ha-dvr.rst:80 ../deploy-ovs-ha-vrrp.rst:60 #: ../deploy-ovs-provider.rst:163 ../deploy-ovs-selfservice.rst:95 #: ../intro-os-networking.rst:287 msgid "Server" msgstr "" # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-macvtap.rst:112 ../deploy-lb-ha-vrrp.rst:73 #: ../deploy-lb-ha-vrrp.rst:126 ../deploy-ovs-ha-vrrp.rst:65 #: ../deploy-ovs-ha-vrrp.rst:127 msgid "No changes." msgstr "" #: ../config-macvtap.rst:117 msgid "Install the Networking service Macvtap layer-2 agent." msgstr "" # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-macvtap.rst:119 ../deploy-lb-ha-vrrp.rst:81 #: ../deploy-lb-provider.rst:159 ../deploy-lb-selfservice.rst:108 #: ../deploy-ovs-ha-vrrp.rst:74 ../deploy-ovs-provider.rst:173 #: ../deploy-ovs-selfservice.rst:104 msgid "In the ``neutron.conf`` file, configure common options:" msgstr "" #: ../config-macvtap.rst:123 msgid "In the ``macvtap_agent.ini`` file, configure the layer-2 agent." msgstr "" #: ../config-macvtap.rst:133 msgid "" "Replace ``MACVTAP_INTERFACE`` with the name of the underlying interface that " "handles Macvtap mechanism driver interfaces. If using a prerequisite " "deployment example, replace ``MACVTAP_INTERFACE`` with the name of the " "underlying interface that handles overlay networks. For example, ``eth1``." msgstr "" # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-macvtap.rst:139 ../deploy-lb-ha-vrrp.rst:118 #: ../deploy-lb-provider.rst:150 ../deploy-lb-provider.rst:198 #: ../deploy-lb-selfservice.rst:146 ../deploy-ovs-ha-vrrp.rst:78 #: ../deploy-ovs-ha-vrrp.rst:119 ../deploy-ovs-provider.rst:161 #: ../deploy-ovs-provider.rst:206 ../deploy-ovs-provider.rst:226 #: ../deploy-ovs-selfservice.rst:108 ../deploy-ovs-selfservice.rst:149 msgid "Start the following services:" msgstr "" #: ../config-macvtap.rst:141 msgid "Macvtap agent" msgstr "" # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-macvtap.rst:147 ../deploy-lb-provider.rst:208 #: ../deploy-ovs-provider.rst:236 msgid "Verify presence and operation of the agents:" msgstr "" # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-macvtap.rst:160 ../deploy-lb-ha-vrrp.rst:153 #: ../deploy-lb-provider.rst:225 ../deploy-lb-selfservice.rst:194 #: ../deploy-ovs-ha-dvr.rst:166 ../deploy-ovs-ha-vrrp.rst:154 #: ../deploy-ovs-provider.rst:253 ../deploy-ovs-selfservice.rst:199 msgid "Create initial networks" msgstr "" #: ../config-macvtap.rst:162 msgid "" "This mechanism driver simply changes the virtual network interface driver " "for instances. Thus, you can reference the ``Create initial networks`` " "content for the prerequisite deployment example." msgstr "" # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-macvtap.rst:167 ../deploy-lb-ha-vrrp.rst:158 #: ../deploy-lb-provider.rst:230 ../deploy-lb-selfservice.rst:199 #: ../deploy-ovs-ha-dvr.rst:297 ../deploy-ovs-ha-vrrp.rst:159 #: ../deploy-ovs-provider.rst:258 ../deploy-ovs-selfservice.rst:204 msgid "Verify network operation" msgstr "" #: ../config-macvtap.rst:169 msgid "" "This mechanism driver simply changes the virtual network interface driver " "for instances. Thus, you can reference the ``Verify network operation`` " "content for the prerequisite deployment example." msgstr "" # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-macvtap.rst:174 ../deploy-lb-ha-vrrp.rst:168 #: ../deploy-lb-provider.rst:235 ../deploy-lb-selfservice.rst:206 #: ../deploy-ovs-ha-dvr.rst:398 ../deploy-ovs-ha-vrrp.rst:169 #: ../deploy-ovs-provider.rst:263 ../deploy-ovs-selfservice.rst:211 msgid "Network traffic flow" msgstr "" #: ../config-macvtap.rst:176 msgid "" "This mechanism driver simply removes the Linux bridge handling security " "groups on the compute nodes. Thus, you can reference the network traffic " "flow scenarios for the prerequisite deployment example." msgstr "" #: ../config-ml2.rst:0 msgid "Mechanism drivers and L2 agents" msgstr "" #: ../config-ml2.rst:0 msgid "Reference implementations and other agents" msgstr "" #: ../config-ml2.rst:5 msgid "ML2 plug-in" msgstr "" #: ../config-ml2.rst:10 msgid "" "The Modular Layer 2 (ML2) neutron plug-in is a framework allowing OpenStack " "Networking to simultaneously use the variety of layer 2 networking " "technologies found in complex real-world data centers. The ML2 framework " "distinguishes between the two kinds of drivers that can be configured:" msgstr "" #: ../config-ml2.rst:15 msgid "Type drivers" msgstr "" #: ../config-ml2.rst:17 msgid "Define how an OpenStack network is technically realized. Example: VXLAN" msgstr "" #: ../config-ml2.rst:19 msgid "" "Each available network type is managed by an ML2 type driver. Type drivers " "maintain any needed type-specific network state. They validate the type " "specific information for provider networks and are responsible for the " "allocation of a free segment in tenant networks." msgstr "" # #-#-#-#-# config-ml2.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-ml2.rst:24 ../config-ml2.rst:202 ../deploy.rst:131 msgid "Mechanism drivers" msgstr "" #: ../config-ml2.rst:26 msgid "" "Define the mechanism to access an OpenStack network of a certain type. " "Example: Open vSwitch mechanism driver." msgstr "" #: ../config-ml2.rst:29 msgid "" "The mechanism driver is responsible for taking the information established " "by the type driver and ensuring that it is properly applied given the " "specific networking mechanisms that have been enabled." msgstr "" #: ../config-ml2.rst:33 msgid "" "Mechanism drivers can utilize L2 agents (via RPC) and/or interact directly " "with external devices or controllers." msgstr "" #: ../config-ml2.rst:36 msgid "" "Multiple mechanism and type drivers can be used simultaneously to access " "different ports of the same virtual network." msgstr "" #: ../config-ml2.rst:43 msgid "ML2 driver support matrix" msgstr "" #: ../config-ml2.rst:49 msgid "type driver / mech driver" msgstr "" # #-#-#-#-# config-ml2.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# intro-os-networking.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-ml2.rst:50 ../config-ml2.rst:111 ../config-ml2.rst:129 #: ../intro-os-networking.rst:133 msgid "Flat" msgstr "" # #-#-#-#-# config-ml2.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# intro-os-networking.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-ml2.rst:51 ../config-ml2.rst:113 ../config-ml2.rst:136 #: ../config-ml2.rst:176 ../intro-os-networking.rst:141 msgid "VLAN" msgstr "" #: ../config-ml2.rst:52 ../config-ml2.rst:117 ../config-ml2.rst:147 #: ../config-ml2.rst:190 msgid "VXLAN" msgstr "" #: ../config-ml2.rst:53 ../config-ml2.rst:115 ../config-ml2.rst:143 #: ../config-ml2.rst:183 msgid "GRE" msgstr "" #: ../config-ml2.rst:54 ../config-ml2.rst:224 ../config-ml2.rst:408 msgid "Open vSwitch" msgstr "" #: ../config-ml2.rst:55 ../config-ml2.rst:56 ../config-ml2.rst:57 #: ../config-ml2.rst:58 ../config-ml2.rst:60 ../config-ml2.rst:61 #: ../config-ml2.rst:62 ../config-ml2.rst:63 ../config-ml2.rst:65 #: ../config-ml2.rst:66 ../config-ml2.rst:70 ../config-ml2.rst:71 #: ../config-ml2.rst:77 ../config-ml2.rst:78 ../config-ml2.rst:431 #: ../config-ml2.rst:432 ../config-ml2.rst:433 ../config-ml2.rst:434 #: ../config-ml2.rst:436 ../config-ml2.rst:437 ../config-ml2.rst:438 #: ../config-ml2.rst:439 msgid "yes" msgstr "" #: ../config-ml2.rst:59 ../config-ml2.rst:218 ../config-ml2.rst:410 msgid "Linux bridge" msgstr "" #: ../config-ml2.rst:64 ../config-ml2.rst:230 ../config-ml2.rst:412 msgid "SRIOV" msgstr "" #: ../config-ml2.rst:67 ../config-ml2.rst:68 ../config-ml2.rst:72 #: ../config-ml2.rst:73 ../config-ml2.rst:75 ../config-ml2.rst:76 #: ../config-ml2.rst:441 ../config-ml2.rst:442 ../config-ml2.rst:443 #: ../config-ml2.rst:444 ../config-ml2.rst:446 ../config-ml2.rst:447 #: ../config-ml2.rst:448 ../config-ml2.rst:449 msgid "no" msgstr "" #: ../config-ml2.rst:69 ../config-ml2.rst:236 ../config-ml2.rst:414 msgid "MacVTap" msgstr "" #: ../config-ml2.rst:74 ../config-ml2.rst:241 ../config-ml2.rst:416 msgid "L2 population" msgstr "" #: ../config-ml2.rst:82 msgid "" "L2 population is a special mechanism driver that optimizes BUM (Broadcast, " "unknown destination address, multicast) traffic in the overlay networks " "VXLAN and GRE. It needs to be used in conjunction with either the Linux " "bridge or the Open vSwitch mechanism driver and cannot be used as standalone " "mechanism driver. For more information, see the *Mechanism drivers* section " "below." msgstr "" #: ../config-ml2.rst:93 msgid "Network type drivers" msgstr "" #: ../config-ml2.rst:95 msgid "" "To enable type drivers in the ML2 plug-in. Edit the ``/etc/neutron/plugins/" "ml2/ml2_conf.ini`` file:" msgstr "" #: ../config-ml2.rst:106 ../config-ml2.rst:215 msgid "" "For more details, see the `Configuration Reference `__." msgstr "" #: ../config-ml2.rst:109 msgid "The following type drivers are available" msgstr "" #: ../config-ml2.rst:120 msgid "Provider network types" msgstr "" #: ../config-ml2.rst:122 msgid "" "Provider networks provide connectivity like project networks. But only " "administrative (privileged) users can manage those networks because they " "interface with the physical network infrastructure. More information about " "provider networks see :doc:`intro-os-networking` or the `OpenStack " "Administrator Guide `__." msgstr "" #: ../config-ml2.rst:131 ../config-ml2.rst:138 msgid "" "The administrator needs to configure a list of physical network names that " "can be used for provider networks. For more details, see the related section " "in the `Configuration Reference `__." msgstr "" #: ../config-ml2.rst:145 msgid "No additional configuration required." msgstr "" #: ../config-ml2.rst:149 msgid "" "The administrator can configure the VXLAN multicast group that should be " "used." msgstr "" #: ../config-ml2.rst:153 msgid "" "VXLAN multicast group configuration is not applicable for the Open vSwitch " "agent." msgstr "" #: ../config-ml2.rst:156 msgid "" "As of today it is not used in the Linux bridge agent. The Linux bridge agent " "has its own agent specific configuration option. Please see the following " "bug for more details: https://bugs.launchpad.net/neutron/+bug/1523614" msgstr "" #: ../config-ml2.rst:162 msgid "Project network types" msgstr "" #: ../config-ml2.rst:164 msgid "" "Project (tenant) networks provide connectivity to instances for a particular " "project. Regular (non-privileged) users can manage project networks within " "the allocation that an administrator or operator defines for them. More " "information about project and provider networks see :doc:`intro-os-" "networking` or the `OpenStack Administrator Guide `__." msgstr "" #: ../config-ml2.rst:172 msgid "" "Project network configurations are made in the ``/etc/neutron/plugins/ml2/" "ml2_conf.ini`` configuration file on the neutron server:" msgstr "" #: ../config-ml2.rst:178 msgid "" "The administrator needs to configure the range of VLAN IDs that can be used " "for project (tenant) network allocation. For more details, see the related " "section in the `Configuration Reference `__." msgstr "" #: ../config-ml2.rst:185 msgid "" "The administrator needs to configure the range of tunnel IDs that can be " "used for project (tenant) network allocation. For more details, see the " "related section in the `Configuration Reference `__." msgstr "" #: ../config-ml2.rst:192 msgid "" "The administrator needs to configure the range of VXLAN IDs that can be used " "for project (tenant) network allocation. For more details, see the related " "section in the `Configuration Reference `__." msgstr "" #: ../config-ml2.rst:198 msgid "" "Flat networks for project (tenant) allocation are not supported. They only " "can exist as a provider network." msgstr "" #: ../config-ml2.rst:204 msgid "" "To enable mechanism drivers in the ML2 plug-in, edit the ``/etc/neutron/" "plugins/ml2/ml2_conf.ini`` file on the neutron server:" msgstr "" #: ../config-ml2.rst:220 ../config-ml2.rst:226 msgid "" "No additional configurations required for the mechanism driver. Additional " "agent configuration is required. For details, see the related *L2 agent* " "section below." msgstr "" #: ../config-ml2.rst:232 msgid "" "The administrator needs to define a list PCI hardware that shall be used by " "OpenStack. For more details, see the related section in the `Configuration " "Reference `__." msgstr "" #: ../config-ml2.rst:238 msgid "" "No additional configurations required for the mechanism driver. Additional " "agent configuration is required. Please see the related section." msgstr "" #: ../config-ml2.rst:243 msgid "" "The administrator can configure some optional configuration options. For " "more details, see the related section in the `Configuration Reference " "`__." msgstr "" #: ../config-ml2.rst:247 msgid "Specialized" msgstr "" #: ../config-ml2.rst:249 msgid "Open source" msgstr "" #: ../config-ml2.rst:251 msgid "" "External open source mechanism drivers exist as well as the neutron " "integrated reference implementations. Configuration of those drivers is not " "part of this document. For example:" msgstr "" #: ../config-ml2.rst:255 msgid "OpenDaylight" msgstr "" #: ../config-ml2.rst:256 msgid "OpenContrail" msgstr "" #: ../config-ml2.rst:258 msgid "Proprietary (vendor)" msgstr "" #: ../config-ml2.rst:260 msgid "" "External mechanism drivers from various vendors exist as well as the neutron " "integrated reference implementations." msgstr "" #: ../config-ml2.rst:263 msgid "Configuration of those drivers is not part of this document." msgstr "" # #-#-#-#-# config-ml2.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# intro-os-networking.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-ml2.rst:267 ../intro-os-networking.rst:297 msgid "Agents" msgstr "" #: ../config-ml2.rst:270 ../config-ml2.rst:407 msgid "L2 agent" msgstr "" #: ../config-ml2.rst:272 msgid "" "An L2 agent serves layer 2 (Ethernet) network connectivity to OpenStack " "resources. It typically runs on each Network Node and on each Compute Node." msgstr "" # #-#-#-#-# config-ml2.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-ml2.rst:275 ../config-ml2.rst:409 ../deploy-ovs-ha-dvr.rst:107 #: ../deploy-ovs-ha-dvr.rst:124 ../deploy-ovs-ha-vrrp.rst:121 #: ../deploy-ovs-selfservice.rst:151 ../deploy-ovs-selfservice.rst:174 msgid "Open vSwitch agent" msgstr "" #: ../config-ml2.rst:277 msgid "" "The Open vSwitch agent configures the Open vSwitch to realize L2 networks " "for OpenStack resources." msgstr "" #: ../config-ml2.rst:280 msgid "" "Configuration for the Open vSwitch agent is typically done in the " "``openvswitch_agent.ini`` configuration file. Make sure that on agent start " "you pass this configuration file as argument." msgstr "" #: ../config-ml2.rst:284 msgid "" "For a detailed list of configuration options, see the related section in the " "`Configuration Reference `__." msgstr "" # #-#-#-#-# config-ml2.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-ml2.rst:287 ../config-ml2.rst:411 ../deploy-lb-ha-vrrp.rst:120 #: ../deploy-lb-provider.rst:200 ../deploy-lb-selfservice.rst:148 #: ../deploy-lb-selfservice.rst:169 msgid "Linux bridge agent" msgstr "" #: ../config-ml2.rst:289 msgid "" "The Linux bridge agent configures Linux bridges to realize L2 networks for " "OpenStack resources." msgstr "" #: ../config-ml2.rst:292 msgid "" "Configuration for the Linux bridge agent is typically done in the " "``linuxbridge_agent.ini`` configuration file. Make sure that on agent start " "you pass this configuration file as argument." msgstr "" #: ../config-ml2.rst:296 msgid "" "For a detailed list of configuration options, see the related section in the " "`Configuration Reference `__." msgstr "" #: ../config-ml2.rst:299 msgid "SRIOV Nic Switch agent" msgstr "" #: ../config-ml2.rst:301 msgid "" "The sriov nic switch agent configures PCI virtual functions to realize L2 " "networks for OpenStack instances. Network attachments for other resources " "like routers, DHCP, and so on are not supported." msgstr "" #: ../config-ml2.rst:305 msgid "" "Configuration for the SRIOV nic switch agent is typically done in the " "``sriov_agent.ini`` configuration file. Make sure that on agent start you " "pass this configuration file as argument." msgstr "" #: ../config-ml2.rst:309 msgid "" "For a detailed list of configuration options, see the related section in the " "`Configuration Reference `__." msgstr "" #: ../config-ml2.rst:312 ../config-ml2.rst:415 msgid "MacVTap agent" msgstr "" #: ../config-ml2.rst:314 msgid "" "The MacVTap agent uses kernel MacVTap devices for realizing L2 networks for " "OpenStack instances. Network attachments for other resources like routers, " "DHCP, and so on are not supported." msgstr "" #: ../config-ml2.rst:318 msgid "" "Configuration for the MacVTap agent is typically done in the ``macvtap_agent." "ini`` configuration file. Make sure that on agent start you pass this " "configuration file as argument." msgstr "" #: ../config-ml2.rst:322 msgid "" "For a detailed list of configuration options, see the related section in the " "`Configuration Reference `__." msgstr "" #: ../config-ml2.rst:326 ../config-ml2.rst:426 msgid "L3 agent" msgstr "" #: ../config-ml2.rst:328 msgid "" "The L3 agent offers advanced layer 3 services, like virtual Routers and " "Floating IPs. It requires an L2 agent running in parallel." msgstr "" #: ../config-ml2.rst:331 msgid "" "Configuration for the L3 agent is typically done in the ``l3_agent.ini`` " "configuration file. Make sure that on agent start you pass this " "configuration file as argument." msgstr "" #: ../config-ml2.rst:335 msgid "" "For a detailed list of configuration options, see the related section in the " "`Configuration Reference `__." msgstr "" # #-#-#-#-# config-ml2.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-ml2.rst:339 ../config-ml2.rst:427 ../deploy-lb-provider.rst:201 #: ../deploy-ovs-provider.rst:229 msgid "DHCP agent" msgstr "" #: ../config-ml2.rst:341 msgid "" "The DHCP agent is responsible for :term:`DHCP` (Dynamic Host Configuration " "Protocol) and RADVD (Router Advertisement Daemon) services. It requires a " "running L2 agent on the same node." msgstr "" #: ../config-ml2.rst:345 msgid "" "Configuration for the DHCP agent is typically done in the ``dhcp_agent.ini`` " "configuration file. Make sure that on agent start you pass this " "configuration file as argument." msgstr "" #: ../config-ml2.rst:349 msgid "" "For a detailed list of configuration options, see the related section in the " "`Configuration Reference `__." msgstr "" # #-#-#-#-# config-ml2.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-ml2.rst:353 ../config-ml2.rst:428 ../deploy-lb-provider.rst:202 #: ../deploy-ovs-provider.rst:230 msgid "Metadata agent" msgstr "" #: ../config-ml2.rst:355 msgid "" "The Metadata agent allows instances to access cloud-init meta data and user " "data via the network. It requires a running L2 agent on the same node." msgstr "" #: ../config-ml2.rst:358 msgid "" "Configuration for the Metadata agent is typically done in the " "``metadata_agent.ini`` configuration file. Make sure that on agent start you " "pass this configuration file as argument." msgstr "" #: ../config-ml2.rst:362 msgid "" "For a detailed list of configuration options, see the related section in the " "`Configuration Reference `__." msgstr "" #: ../config-ml2.rst:366 msgid "L3 metering agent" msgstr "" #: ../config-ml2.rst:368 msgid "" "The L3 metering agent enables layer3 traffic metering. It requires a running " "L3 agent on the same node." msgstr "" #: ../config-ml2.rst:371 msgid "" "Configuration for the L3 metering agent is typically done in the " "``metering_agent.ini`` configuration file. Make sure that on agent start you " "pass this configuration file as argument." msgstr "" #: ../config-ml2.rst:375 msgid "" "For a detailed list of configuration options, see the related section in the " "`Configuration Reference `__." msgstr "" #: ../config-ml2.rst:379 msgid "Security" msgstr "" #: ../config-ml2.rst:381 msgid "L2 agents support some important security configurations." msgstr "" #: ../config-ml2.rst:383 msgid "Security Groups" msgstr "" #: ../config-ml2.rst:385 msgid "" "For more details, see the related section in the `Configuration Reference " "`__." msgstr "" #: ../config-ml2.rst:388 msgid "Arp Spoofing Prevention" msgstr "" #: ../config-ml2.rst:390 msgid "Configured in the *L2 agent* configuration." msgstr "" #: ../config-ml2.rst:394 msgid "Reference implementations" msgstr "" #: ../config-ml2.rst:397 msgid "Overview" msgstr "" #: ../config-ml2.rst:399 msgid "" "In this section, the combination of a mechanism driver and an L2 agent is " "called 'reference implementation'. The following table lists these " "implementations:" msgstr "" #: ../config-ml2.rst:406 msgid "Mechanism Driver" msgstr "" #: ../config-ml2.rst:413 msgid "SRIOV nic switch agent" msgstr "" #: ../config-ml2.rst:417 msgid "Open vSwitch agent, Linux bridge agent" msgstr "" #: ../config-ml2.rst:419 msgid "" "The following tables shows which reference implementations support which non-" "L2 neutron agents:" msgstr "" #: ../config-ml2.rst:425 msgid "Reference Implementation" msgstr "" #: ../config-ml2.rst:429 msgid "L3 Metering agent" msgstr "" #: ../config-ml2.rst:430 msgid "Open vSwitch & Open vSwitch agent" msgstr "" #: ../config-ml2.rst:435 msgid "Linux bridge & Linux bridge agent" msgstr "" #: ../config-ml2.rst:440 msgid "SRIOV & SRIOV nic switch agent" msgstr "" #: ../config-ml2.rst:445 msgid "MacVTap & MacVTap agent" msgstr "" #: ../config-ml2.rst:452 msgid "" "L2 population is not listed here, as it is not a standalone mechanism. If " "other agents are supported depends on the conjunctive mechanism driver that " "is used for binding a port." msgstr "" #: ../config-ml2.rst:459 msgid "" "More information about L2 population see the `OpenStack Manuals `__." msgstr "" #: ../config-ml2.rst:464 msgid "Buying guide" msgstr "" #: ../config-ml2.rst:466 msgid "" "This guide characterizes the L2 reference implementations that currently " "exist." msgstr "" #: ../config-ml2.rst:468 msgid "Open vSwitch mechanism and Open vSwitch agent" msgstr "" #: ../config-ml2.rst:470 ../config-ml2.rst:475 msgid "" "Can be used for instance network attachments as well as for attachments of " "other network resources like routers, DHCP, and so on." msgstr "" #: ../config-ml2.rst:473 msgid "Linux bridge mechanism and Linux bridge agent" msgstr "" #: ../config-ml2.rst:478 msgid "SRIOV mechanism driver and SRIOV NIC switch agent" msgstr "" #: ../config-ml2.rst:480 msgid "" "Can only be used for instance network attachments (device_owner = compute)." msgstr "" #: ../config-ml2.rst:482 msgid "" "Is deployed besides an other mechanism driver and L2 agent such as OVS or " "Linux bridge. It offers instances direct access to the network adapter " "through a PCI Virtual Function (VF). This gives an instance direct access to " "hardware capabilities and high performance networking." msgstr "" #: ../config-ml2.rst:487 msgid "" "The cloud consumer can decide via the neutron APIs VNIC_TYPE attribute, if " "an instance gets a normal OVS port or an SRIOV port." msgstr "" #: ../config-ml2.rst:490 msgid "" "Due to direct connection, some features are not available when using SRIOV. " "For example, DVR, security groups, migration." msgstr "" #: ../config-ml2.rst:493 msgid "For more information see the :ref:`config-sriov`." msgstr "" #: ../config-ml2.rst:495 msgid "MacVTap mechanism driver and MacVTap agent" msgstr "" #: ../config-ml2.rst:497 msgid "" "Can only be used for instance network attachments (device_owner = compute) " "and not for attachment of other resources like routers, DHCP, and so on." msgstr "" #: ../config-ml2.rst:500 msgid "" "It is positioned as alternative to Open vSwitch or Linux bridge support on " "the compute node for internal deployments." msgstr "" #: ../config-ml2.rst:503 msgid "" "MacVTap offers a direct connection with very little overhead between " "instances and down to the adapter. You can use MacVTap agent on the compute " "node when you require a network connection that is performance critical. It " "does not require specific hardware (like with SRIOV)." msgstr "" #: ../config-ml2.rst:508 msgid "" "Due to the direct connection, some features are not available when using it " "on the compute node. For example, DVR, security groups and arp-spoofing " "protection." msgstr "" #: ../config-mtu.rst:5 msgid "MTU considerations" msgstr "" #: ../config-mtu.rst:7 msgid "" "The Networking service uses the MTU of the underlying physical network to " "calculate the MTU for virtual network components including instance network " "interfaces. By default, it assumes a standard 1500-byte MTU for the " "underlying physical network." msgstr "" #: ../config-mtu.rst:12 msgid "" "The Networking service only references the underlying physical network MTU. " "Changing the underlying physical network device MTU requires configuration " "of physical network devices such as switches and routers." msgstr "" #: ../config-mtu.rst:18 msgid "" "For existing deployments, MTU values only apply to new network resources." msgstr "" #: ../config-mtu.rst:21 msgid "Jumbo frames" msgstr "" #: ../config-mtu.rst:23 msgid "" "The Networking service supports underlying physical networks using jumbo " "frames and also enables instances to use jumbo frames minus any overlay " "protocol overhead. For example, an underlying physical network with a 9000-" "byte MTU yields a 8950-byte MTU for instances using a VXLAN network with " "IPv4 endpoints. Using IPv6 endpoints for overlay networks adds 20 bytes of " "overhead for any protocol." msgstr "" #: ../config-mtu.rst:30 msgid "" "The Networking service supports the following underlying physical network " "architectures. Case 1 refers to the most common architecture. In general, " "architectures should avoid cases 2 and 3." msgstr "" #: ../config-mtu.rst:35 msgid "Case 1" msgstr "" #: ../config-mtu.rst:37 msgid "" "For typical underlying physical network architectures that implement a " "single MTU value, you can leverage jumbo frames using two options, one in " "the ``neutron.conf`` file and the other in the ``ml2_conf.ini`` file. Most " "environments should use this configuration." msgstr "" #: ../config-mtu.rst:42 msgid "" "For example, referencing an underlying physical network with a 9000-byte MTU:" msgstr "" # #-#-#-#-# config-mtu.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-mtu.rst:44 ../config-mtu.rst:70 ../config-mtu.rst:101 #: ../deploy-lb-ha-vrrp.rst:57 ../deploy-lb-provider.rst:77 #: ../deploy-lb-selfservice.rst:62 ../deploy-ovs-ha-dvr.rst:69 #: ../deploy-ovs-ha-vrrp.rst:49 ../deploy-ovs-provider.rst:88 #: ../deploy-ovs-selfservice.rst:56 msgid "In the ``neutron.conf`` file:" msgstr "" #: ../config-mtu.rst:59 msgid "Case 2" msgstr "" #: ../config-mtu.rst:61 msgid "" "Some underlying physical network architectures contain multiple layer-2 " "networks with different MTU values. You can configure each flat or VLAN " "provider network in the bridge or interface mapping options of the layer-2 " "agent to reference a unique MTU value." msgstr "" #: ../config-mtu.rst:66 msgid "" "For example, referencing a 4000-byte MTU for ``provider2``, a 1500-byte MTU " "for ``provider3``, and a 9000-byte MTU for other networks using the Open " "vSwitch agent:" msgstr "" #: ../config-mtu.rst:77 msgid "In the ``openvswitch_agent.ini`` file:" msgstr "" #: ../config-mtu.rst:93 msgid "Case 3" msgstr "" #: ../config-mtu.rst:95 msgid "" "Some underlying physical network architectures contain a unique layer-2 " "network for overlay networks using protocols such as VXLAN and GRE." msgstr "" #: ../config-mtu.rst:98 msgid "" "For example, referencing a 4000-byte MTU for overlay networks and a 9000-" "byte MTU for other networks:" msgstr "" #: ../config-mtu.rst:117 msgid "" "Other networks including provider networks and flat or VLAN self-service " "networks assume the value of the ``global_physnet_mtu`` option." msgstr "" #: ../config-mtu.rst:122 msgid "Instance network interfaces (VIFs)" msgstr "" #: ../config-mtu.rst:124 msgid "" "By default, the ``advertise_mtu`` option in the ``neutron.conf`` file " "enables the DHCP agent to provide an appropriate MTU value to instances " "using IPv4 and enables the L3 agent to provide an appropriate MTU value to " "instances using IPv6. IPv6 uses RA via the L3 agent because the DHCP agent " "only supports IPv4. Instances using IPv4 and IPv6 should obtain the same MTU " "value regardless of method." msgstr "" #: ../config-ovsfwdriver.rst:5 msgid "Native Open vSwitch firewall driver" msgstr "" #: ../config-ovsfwdriver.rst:11 msgid "" "Historically, Open vSwitch (OVS) could not interact directly with *iptables* " "to implement security groups. Thus, the OVS agent and Compute service use a " "Linux bridge between each instance (VM) and the OVS integration bridge ``br-" "int`` to implement security groups. The Linux bridge device contains the " "*iptables* rules pertaining to the instance. In general, additional " "components between instances and physical network infrastructure cause " "scalability and performance problems. To alleviate such problems, the OVS " "agent includes an optional firewall driver that natively implements security " "groups as flows in OVS rather than Linux bridge and *iptables*, thus " "increasing scalability and performance." msgstr "" #: ../config-ovsfwdriver.rst:25 msgid "" "The native OVS firewall implementation requires kernel and user space " "support for *conntrack*, thus requiring minimum versions of the Linux kernel " "and Open vSwitch. All cases require Open vSwitch version 2.5 or newer." msgstr "" #: ../config-ovsfwdriver.rst:29 msgid "Kernel version 4.3 or newer includes *conntrack* support." msgstr "" #: ../config-ovsfwdriver.rst:30 msgid "" "Kernel version 3.3, but less than 4.3, does not include *conntrack* support " "and requires building the OVS modules." msgstr "" #: ../config-ovsfwdriver.rst:34 msgid "Enable the native OVS firewall driver" msgstr "" #: ../config-ovsfwdriver.rst:36 msgid "" "On nodes running the Open vSwitch agent, edit the ``openvswitch_agent.ini`` " "file and enable the firewall driver." msgstr "" #: ../config-ovsfwdriver.rst:44 msgid "" "For more information, see the `developer documentation `_ and the " "`video `_." msgstr "" #: ../config-qos.rst:5 msgid "Quality of Service (QoS)" msgstr "" #: ../config-qos.rst:7 msgid "" "QoS is defined as the ability to guarantee certain network requirements like " "bandwidth, latency, jitter, and reliability in order to satisfy a Service " "Level Agreement (SLA) between an application provider and end users." msgstr "" #: ../config-qos.rst:12 msgid "" "Network devices such as switches and routers can mark traffic so that it is " "handled with a higher priority to fulfill the QoS conditions agreed under " "the SLA. In other cases, certain network traffic such as Voice over IP " "(VoIP) and video streaming needs to be transmitted with minimal bandwidth " "constraints. On a system without network QoS management, all traffic will be " "transmitted in a \"best-effort\" manner making it impossible to guarantee " "service delivery to customers." msgstr "" #: ../config-qos.rst:20 msgid "" "QoS is an advanced service plug-in. QoS is decoupled from the rest of the " "OpenStack Networking code on multiple levels and it is available through the " "ml2 extension driver." msgstr "" #: ../config-qos.rst:24 msgid "" "Details about the DB models, API extension, and use cases are out of the " "scope of this guide but can be found in the `Neutron QoS specification " "`_." msgstr "" #: ../config-qos.rst:30 msgid "Supported QoS rule types" msgstr "" #: ../config-qos.rst:32 msgid "" "Any plug-in or ml2 mechanism driver can claim support for some QoS rule " "types by providing a plug-in/driver class property called " "``supported_qos_rule_types`` that returns a list of strings that correspond " "to `QoS rule types `_." msgstr "" #: ../config-qos.rst:40 msgid "For the Newton release onward DSCP marking will be supported." msgstr "" #: ../config-qos.rst:42 msgid "" "In the most simple case, the property can be represented by a simple Python " "list defined on the class." msgstr "" #: ../config-qos.rst:45 msgid "" "For an ml2 plug-in, the list of supported QoS rule types is defined as a " "common subset of rules supported by all active mechanism drivers." msgstr "" #: ../config-qos.rst:50 msgid "" "The list of supported rule types reported by core plug-in is not enforced " "when accessing QoS rule resources. This is mostly because then we would not " "be able to create any rules while at least one ml2 driver lacks support for " "QoS (at the moment of writing, only macvtap is such a driver)." msgstr "" #: ../config-qos.rst:60 msgid "To enable the service, follow the steps below:" msgstr "" #: ../config-qos.rst:62 msgid "On network nodes:" msgstr "" #: ../config-qos.rst:64 msgid "" "Add the QoS service to the ``service_plugins`` setting in ``/etc/neutron/" "neutron.conf``. For example:" msgstr "" #: ../config-qos.rst:74 msgid "" "Optionally, set the needed ``notification_drivers`` in the ``[qos]`` section " "in ``/etc/neutron/neutron.conf`` (``message_queue`` is the default)." msgstr "" #: ../config-qos.rst:78 msgid "" "In ``/etc/neutron/plugins/ml2/ml2_conf.ini``, add ``qos`` to " "``extension_drivers`` in the ``[ml2]`` section. For example:" msgstr "" #: ../config-qos.rst:86 msgid "" "If the Open vSwitch agent is being used, set ``extensions`` to ``qos`` in " "the ``[agent]`` section of ``/etc/neutron/plugins/ml2/openvswitch_agent." "ini``. For example:" msgstr "" #: ../config-qos.rst:95 msgid "On compute nodes:" msgstr "" #: ../config-qos.rst:97 msgid "" "In ``/etc/neutron/plugins/ml2/ml2_conf.ini``, add ``qos`` to the " "``extensions`` setting in the ``[agent]`` section. For example:" msgstr "" #: ../config-qos.rst:107 msgid "" "QoS currently works with ml2 only (SR-IOV, Open vSwitch, and linuxbridge are " "drivers that are enabled for QoS in Mitaka release)." msgstr "" #: ../config-qos.rst:111 msgid "Trusted tenants policy.json configuration" msgstr "" #: ../config-qos.rst:113 msgid "" "If tenants are trusted to administrate their own QoS policies in your cloud, " "neutron's file ``policy.json`` can be modified to allow this." msgstr "" #: ../config-qos.rst:116 msgid "Modify ``/etc/neutron/policy.json`` policy entries as follows:" msgstr "" #: ../config-qos.rst:125 msgid "To enable bandwidth limit rule:" msgstr "" #: ../config-qos.rst:135 msgid "To enable DSCP marking rule:" msgstr "" #: ../config-qos.rst:148 msgid "" "QoS policies are only created by admins with the default ``policy.json``. " "Therefore, you should have the cloud operator set them up on behalf of the " "cloud tenants." msgstr "" #: ../config-qos.rst:152 msgid "" "If tenants are trusted to create their own policies, check the trusted " "tenants ``policy.json`` configuration section." msgstr "" #: ../config-qos.rst:155 msgid "First, create a QoS policy and its bandwidth limit rule:" msgstr "" #: ../config-qos.rst:187 msgid "" "The burst value is given in kilobits, not in kilobits per second as the name " "of the parameter might suggest. This is an amount of data which can be sent " "before the bandwidth limit applies." msgstr "" #: ../config-qos.rst:193 msgid "" "The QoS implementation requires a burst value to ensure proper behavior of " "bandwidth limit rules in the Open vSwitch and Linux bridge agents. If you do " "not provide a value, it defaults to 80% of the bandwidth limit which works " "for typical TCP traffic." msgstr "" #: ../config-qos.rst:198 msgid "" "Second, associate the created policy with an existing neutron port. In order " "to do this, user extracts the port id to be associated to the already " "created policy. In the next example, we will assign the ``bw-limiter`` " "policy to the VM with IP address 10.0.0.3" msgstr "" #: ../config-qos.rst:218 msgid "" "In order to detach a port from the QoS policy, simply update again the port " "configuration." msgstr "" #: ../config-qos.rst:227 msgid "Ports can be created with a policy attached to them too." msgstr "" #: ../config-qos.rst:258 msgid "" "You can attach networks to a QoS policy. The meaning of this is that any " "compute port connected to the network will use the network policy by default " "unless the port has a specific policy attached to it. Network owned ports " "like DHCP and router ports are excluded from network policy application." msgstr "" #: ../config-qos.rst:263 msgid "" "In order to attach a QoS policy to a network, update an existing network, or " "initially create the network attached to the policy." msgstr "" #: ../config-qos.rst:273 msgid "" "Configuring the proper burst value is very important. If the burst value is " "set too low, bandwidth usage will be throttled even with a proper bandwidth " "limit setting. This issue is discussed in various documentation sources, for " "example in `Juniper's documentation `_. Burst value for TCP traffic can be set as 80% of desired bandwidth " "limit value. For example, if the bandwidth limit is set to 1000kbps then " "enough burst value will be 800kbit. If the configured burst value is too " "low, achieved bandwidth limit will be lower than expected. If the configured " "burst value is too high, too few packets could be limited and achieved " "bandwidth limit would be higher than expected." msgstr "" #: ../config-qos.rst:286 msgid "Administrator enforcement" msgstr "" #: ../config-qos.rst:288 msgid "" "Administrators are able to enforce policies on tenant ports or networks. As " "long as the policy is not shared, the tenant is not be able to detach any " "policy attached to a network or port." msgstr "" #: ../config-qos.rst:292 msgid "" "If the policy is shared, the tenant is able to attach or detach such policy " "from its own ports and networks." msgstr "" #: ../config-qos.rst:297 msgid "Rule modification" msgstr "" #: ../config-qos.rst:298 msgid "" "You can modify rules at runtime. Rule modifications will be propagated to " "any attached port." msgstr "" #: ../config-qos.rst:319 msgid "" "Just like with bandwidth limiting, create a policy for DSCP marking rule:" msgstr "" #: ../config-qos.rst:337 msgid "" "You can create, update, list, delete, and show DSCP markings with the " "neutron client:" msgstr "" #: ../config-rbac.rst:5 msgid "Role-Based Access Control (RBAC)" msgstr "" #: ../config-rbac.rst:7 msgid "" "The Role-Based Access Control (RBAC) policy framework enables both operators " "and users to grant access to resources for specific projects." msgstr "" #: ../config-rbac.rst:12 msgid "Supported objects for sharing with specific projects" msgstr "" #: ../config-rbac.rst:14 msgid "" "Currently, the access that can be granted using this feature is supported by:" msgstr "" #: ../config-rbac.rst:17 msgid "Regular port creation permissions on networks (since Liberty)." msgstr "" #: ../config-rbac.rst:18 msgid "Binding QoS policies permissions to networks or ports (since Mitaka)." msgstr "" #: ../config-rbac.rst:19 msgid "Attaching router gateways to networks (since Mitaka)." msgstr "" #: ../config-rbac.rst:23 msgid "Sharing an object with specific projects" msgstr "" #: ../config-rbac.rst:25 msgid "" "Sharing an object with a specific project is accomplished by creating a " "policy entry that permits the target project the ``access_as_shared`` action " "on that object." msgstr "" #: ../config-rbac.rst:31 msgid "Sharing a network with specific projects" msgstr "" #: ../config-rbac.rst:33 msgid "Create a network to share:" msgstr "" #: ../config-rbac.rst:58 msgid "" "Create the policy entry using the :command:`rbac-create` command (in this " "example, the ID of the project we want to share with is " "``e28769db97d9449da658bc6931fcb683``):" msgstr "" #: ../config-rbac.rst:79 msgid "" "The ``target-tenant`` parameter specifies the project that requires access " "to the network. The ``action`` parameter specifies what the project is " "allowed to do. The ``type`` parameter says that the target object is a " "network. The final parameter is the ID of the network we are granting access " "to." msgstr "" #: ../config-rbac.rst:85 msgid "" "Project ``e28769db97d9449da658bc6931fcb683`` will now be able to see the " "network when running :command:`net-list` and :command:`net-show` and will " "also be able to create ports on that network. No other users (other than " "admins and the owner) will be able to see the network." msgstr "" #: ../config-rbac.rst:90 ../config-rbac.rst:340 msgid "" "To remove access for that project, delete the policy that allows it using " "the :command:`rbac-delete` command:" msgstr "" #: ../config-rbac.rst:98 msgid "" "If that project has ports on the network, the server will prevent the policy " "from being deleted until the ports have been deleted:" msgstr "" #: ../config-rbac.rst:107 msgid "" "This process can be repeated any number of times to share a network with an " "arbitrary number of projects." msgstr "" #: ../config-rbac.rst:112 msgid "Sharing a QoS policy with specific projects" msgstr "" #: ../config-rbac.rst:114 msgid "Create a QoS policy to share:" msgstr "" #: ../config-rbac.rst:132 msgid "" "Create the RBAC policy entry using the :command:`rbac-create` command (in " "this example, the ID of the project we want to share with is " "``a6bf6cfbcd1f4e32a57d2138b6bd41d1``):" msgstr "" #: ../config-rbac.rst:153 msgid "" "The ``target-tenant`` parameter specifies the project that requires access " "to the QoS policy. The ``action`` parameter specifies what the project is " "allowed to do. The ``type`` parameter says that the target object is a QoS " "policy. The final parameter is the ID of the QoS policy we are granting " "access to." msgstr "" #: ../config-rbac.rst:159 msgid "" "Project ``a6bf6cfbcd1f4e32a57d2138b6bd41d1`` will now be able to see the QoS " "policy when running :command:`qos-policy-list` and :command:`qos-policy-" "show` and will also be able to bind it to its ports or networks. No other " "users (other than admins and the owner) will be able to see the QoS policy." msgstr "" #: ../config-rbac.rst:164 msgid "" "To remove access for that project, delete the RBAC policy that allows it " "using the :command:`rbac-delete` command:" msgstr "" #: ../config-rbac.rst:172 msgid "" "If that project has ports or networks with the QoS policy applied to them, " "the server will not delete the RBAC policy until the QoS policy is no longer " "in use:" msgstr "" #: ../config-rbac.rst:182 msgid "" "This process can be repeated any number of times to share a qos-policy with " "an arbitrary number of projects." msgstr "" #: ../config-rbac.rst:187 msgid "How the 'shared' flag relates to these entries" msgstr "" #: ../config-rbac.rst:189 msgid "" "As introduced in other guide entries, neutron provides a means of making an " "object (``network``, ``qos-policy``) available to every project. This is " "accomplished using the ``shared`` flag on the supported object:" msgstr "" #: ../config-rbac.rst:216 msgid "" "This is the equivalent of creating a policy on the network that permits " "every project to perform the action ``access_as_shared`` on that network. " "Neutron treats them as the same thing, so the policy entry for that network " "should be visible using the :command:`rbac-list` command:" msgstr "" #: ../config-rbac.rst:233 msgid "Use the :command:`rbac-show` command to see the details:" msgstr "" #: ../config-rbac.rst:250 msgid "" "The output shows that the entry allows the action ``access_as_shared`` on " "object ``9a4af544-7158-456d-b180-95f2e11eaa8c`` of type ``network`` to " "target_tenant ``*``, which is a wildcard that represents all projects." msgstr "" #: ../config-rbac.rst:254 msgid "" "Currently, the ``shared`` flag is just a mapping to the underlying RBAC " "policies for a network. Setting the flag to ``True`` on a network creates a " "wildcard RBAC entry. Setting it to ``False`` removes the wildcard entry." msgstr "" #: ../config-rbac.rst:259 msgid "" "When you run :command:`net-list` or :command:`net-show`, the ``shared`` flag " "is calculated by the server based on the calling project and the RBAC " "entries for each network. For QoS objects use :command:`qos-policy-list` or :" "command:`qos-policy-show` respectively. If there is a wildcard entry, the " "``shared`` flag is always set to ``True``. If there are only entries that " "share with specific projects, only the projects the object is shared to will " "see the flag as ``True`` and the rest will see the flag as ``False``." msgstr "" #: ../config-rbac.rst:270 msgid "Allowing a network to be used as an external network" msgstr "" #: ../config-rbac.rst:272 msgid "" "To make a network available as an external network for specific projects " "rather than all projects, use the ``access_as_external`` action." msgstr "" #: ../config-rbac.rst:275 msgid "Create a network that you want to be available as an external network:" msgstr "" #: ../config-rbac.rst:308 msgid "" "Create a policy entry using the :command:`rbac-create` command (in this " "example, the ID of the project we want to share with is " "``e28769db97d9449da658bc6931fcb683``):" msgstr "" #: ../config-rbac.rst:329 msgid "" "The ``target-tenant`` parameter specifies the project that requires access " "to the network. The ``action`` parameter specifies what the project is " "allowed to do. The ``type`` parameter indicates that the target object is a " "network. The final parameter is the ID of the network we are granting " "external access to." msgstr "" #: ../config-rbac.rst:335 msgid "" "Now project ``e28769db97d9449da658bc6931fcb683`` is able to see the network " "when running :command:`net-list` and :command:`net-show` and can attach " "router gateway ports to that network. No other users (other than admins and " "the owner) are able to see the network." msgstr "" #: ../config-rbac.rst:348 msgid "" "If that project has router gateway ports attached to that network, the " "server prevents the policy from being deleted until the ports have been " "deleted:" msgstr "" #: ../config-rbac.rst:358 msgid "" "This process can be repeated any number of times to make a network available " "as external to an arbitrary number of projects." msgstr "" #: ../config-rbac.rst:361 msgid "" "If a network is marked as external during creation, it now implicitly " "creates a wildcard RBAC policy granting everyone access to preserve previous " "behavior before this feature was added." msgstr "" #: ../config-rbac.rst:397 msgid "" "In the output above the standard ``router:external`` attribute is ``True`` " "as expected. Now a wildcard policy is visible in the RBAC policy listings:" msgstr "" #: ../config-rbac.rst:412 msgid "" "You can modify or delete this policy with the same constraints as any other " "RBAC ``access_as_external`` policy." msgstr "" #: ../config-rbac.rst:417 msgid "Preventing regular users from sharing objects with each other" msgstr "" #: ../config-rbac.rst:419 msgid "" "The default ``policy.json`` file will not allow regular users to share " "objects with every other project using a wildcard; however, it will allow " "them to share objects with specific project IDs." msgstr "" #: ../config-rbac.rst:424 msgid "" "If an operator wants to prevent normal users from doing this, the ``" "\"create_rbac_policy\":`` entry in ``policy.json`` can be adjusted from ``" "\"\"`` to ``\"rule:admin_only\"``." msgstr "" # #-#-#-#-# config-rbac.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# ops-resource-tags.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-rbac.rst:430 ../ops-resource-tags.rst:253 msgid "Limitations" msgstr "" #: ../config-rbac.rst:432 msgid "" "A non-admin user that shares a network with another project using this " "feature will not be able to see or delete the ports created under the other " "project. This is because the neutron database operations automatically limit " "database queries to objects owned by the requesting user's project unless " "that user is an admin or a service user. This issue is being tracked by the " "following bug: https://bugs.launchpad.net/neutron/+bug/1498790" msgstr "" #: ../config-sfc.rst:5 msgid "Service function chaining" msgstr "" #: ../config-sfc.rst:7 msgid "" "Service function chaining (SFC) essentially refers to the software-defined " "networking (SDN) version of policy-based routing (PBR). In many cases, SFC " "involves security, although it can include a variety of other features." msgstr "" #: ../config-sfc.rst:11 msgid "" "Fundamentally, SFC routes packets through one or more service functions " "instead of conventional routing that routes packets using destination IP " "address. Service functions essentially emulate a series of physical network " "devices with cables linking them together." msgstr "" #: ../config-sfc.rst:16 msgid "" "A basic example of SFC involves routing packets from one location to another " "through a firewall that lacks a \"next hop\" IP address from a conventional " "routing perspective. A more complex example involves an ordered series of " "service functions, each implemented using multiple instances (VMs). Packets " "must flow through one instance and a hashing algorithm distributes flows " "across multiple instances at each hop." msgstr "" #: ../config-sfc.rst:26 msgid "" "All OpenStack Networking services and OpenStack Compute instances connect to " "a virtual network via ports making it possible to create a traffic steering " "model for service chaining using only ports. Including these ports in a port " "chain enables steering of traffic through one or more instances providing " "service functions." msgstr "" #: ../config-sfc.rst:32 msgid "A port chain, or service function path, consists of the following:" msgstr "" #: ../config-sfc.rst:34 msgid "A set of ports that define the sequence of service functions." msgstr "" #: ../config-sfc.rst:35 msgid "" "A set of flow classifiers that specify the classified traffic flows entering " "the chain." msgstr "" #: ../config-sfc.rst:38 msgid "" "If a service function involves a pair of ports, the first port acts as the " "ingress port of the service function and the second port acts as the egress " "port. If both ports use the same value, they function as a single virtual " "bidirectional port." msgstr "" #: ../config-sfc.rst:43 msgid "" "A port chain is a unidirectional service chain. The first port acts as the " "head of the service function chain and the second port acts as the tail of " "the service function chain. A bidirectional service function chain consists " "of two unidirectional port chains." msgstr "" #: ../config-sfc.rst:48 msgid "" "A flow classifier can only belong to one port chain to prevent ambiguity as " "to which chain should handle packets in the flow. A check prevents such " "ambiguity. However, you can associate multiple flow classifiers with a port " "chain because multiple flows can request the same service function path." msgstr "" #: ../config-sfc.rst:53 msgid "Currently, SFC lacks support for multi-project service functions." msgstr "" #: ../config-sfc.rst:55 msgid "" "The port chain plug-in supports backing service providers including the OVS " "driver and a variety of SDN controller drivers. The common driver API " "enables different drivers to provide different implementations for the " "service chain path rendering." msgstr "" #: ../config-sfc.rst:66 msgid "" "See the `developer documentation `_ for more information." msgstr "" #: ../config-sfc.rst:70 msgid "Resources" msgstr "" #: ../config-sfc.rst:73 msgid "Port chain" msgstr "" #: ../config-sfc.rst:75 msgid "``id`` - Port chain ID" msgstr "" #: ../config-sfc.rst:76 ../config-sfc.rst:102 ../config-sfc.rst:115 #: ../config-sfc.rst:137 msgid "``tenant_id`` - Project ID" msgstr "" #: ../config-sfc.rst:77 ../config-sfc.rst:103 ../config-sfc.rst:116 #: ../config-sfc.rst:138 msgid "``name`` - Readable name" msgstr "" #: ../config-sfc.rst:78 ../config-sfc.rst:104 ../config-sfc.rst:117 #: ../config-sfc.rst:139 msgid "``description`` - Readable description" msgstr "" #: ../config-sfc.rst:79 msgid "``port_pair_groups`` - List of port pair group IDs" msgstr "" #: ../config-sfc.rst:80 msgid "``flow_classifiers`` - List of flow classifier IDs" msgstr "" #: ../config-sfc.rst:81 msgid "``chain_parameters`` - Dictionary of chain parameters" msgstr "" #: ../config-sfc.rst:83 msgid "" "A port chain consists of a sequence of port pair groups. Each port pair " "group is a hop in the port chain. A group of port pairs represents service " "functions providing equivalent functionality. For example, a group of " "firewall service functions." msgstr "" #: ../config-sfc.rst:88 msgid "" "A flow classifier identifies a flow. A port chain can contain multiple flow " "classifiers. Omitting the flow classifier effectively prevents steering of " "traffic through the port chain." msgstr "" #: ../config-sfc.rst:92 msgid "" "The ``chain_parameters`` attribute contains one or more parameters for the " "port chain. Currently, it only supports a correlation parameter that " "defaults to ``mpls`` for consistency with Open vSwitch (OVS) capabilities. " "Future values for the correlation parameter may include the network service " "header (NSH)." msgstr "" #: ../config-sfc.rst:99 msgid "Port pair group" msgstr "" #: ../config-sfc.rst:101 msgid "``id`` - Port pair group ID" msgstr "" #: ../config-sfc.rst:105 msgid "``port_pairs`` - List of service function port pairs" msgstr "" #: ../config-sfc.rst:107 msgid "" "A port pair group may contain one or more port pairs. Multiple port pairs " "enable load balancing/distribution over a set of functionally equivalent " "service functions." msgstr "" #: ../config-sfc.rst:112 msgid "Port pair" msgstr "" #: ../config-sfc.rst:114 msgid "``id`` - Port pair ID" msgstr "" #: ../config-sfc.rst:118 msgid "``ingress`` - Ingress port" msgstr "" #: ../config-sfc.rst:119 msgid "``egress`` - Egress port" msgstr "" #: ../config-sfc.rst:120 msgid "" "``service_function_parameters`` - Dictionary of service function parameters" msgstr "" #: ../config-sfc.rst:122 msgid "" "A port pair represents a service function instance that includes an ingress " "and egress port. A service function containing a bidirectional port uses the " "same ingress and egress port." msgstr "" #: ../config-sfc.rst:126 msgid "" "The ``service_function_parameters`` attribute includes one or more " "parameters for the service function. Currently, it only supports a " "correlation parameter that determines association of a packet with a chain. " "This parameter defaults to ``none`` for legacy service functions that lack " "support for correlation such as the NSH. If set to ``none``, the data plane " "implementation must provide service function proxy functionality." msgstr "" #: ../config-sfc.rst:134 msgid "Flow classifier" msgstr "" #: ../config-sfc.rst:136 msgid "``id`` - Flow classifier ID" msgstr "" #: ../config-sfc.rst:140 msgid "``ethertype`` - Ethertype (IPv4/IPv6)" msgstr "" #: ../config-sfc.rst:141 msgid "``protocol`` - IP protocol" msgstr "" #: ../config-sfc.rst:142 msgid "``source_port_range_min`` - Minimum source protocol port" msgstr "" #: ../config-sfc.rst:143 msgid "``source_port_range_max`` - Maximum source protocol port" msgstr "" #: ../config-sfc.rst:144 msgid "``destination_port_range_min`` - Minimum destination protocol port" msgstr "" #: ../config-sfc.rst:145 msgid "``destination_port_range_max`` - Maximum destination protocol port" msgstr "" #: ../config-sfc.rst:146 msgid "``source_ip_prefix`` - Source IP address or prefix" msgstr "" #: ../config-sfc.rst:147 msgid "``destination_ip_prefix`` - Destination IP address or prefix" msgstr "" #: ../config-sfc.rst:148 msgid "``logical_source_port`` - Source port" msgstr "" #: ../config-sfc.rst:149 msgid "``logical_destination_port`` - Destination port" msgstr "" #: ../config-sfc.rst:150 msgid "``l7_parameters`` - Dictionary of L7 parameters" msgstr "" #: ../config-sfc.rst:152 msgid "" "A combination of the source attributes defines the source of the flow. A " "combination of the destination attributes defines the destination of the " "flow. The ``l7_parameters`` attribute is a place holder that may be used to " "support flow classification using layer 7 fields, such as a URL. If " "unspecified, the ``logical_source_port`` and ``logical_destination_port`` " "attributes default to ``none``, the ``ethertype`` attribute defaults to " "``IPv4``, and all other attributes default to a wildcard value." msgstr "" # #-#-#-#-# config-sfc.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# ops.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-sfc.rst:161 ../ops.rst:5 msgid "Operations" msgstr "" #: ../config-sfc.rst:164 msgid "Create a port chain" msgstr "" #: ../config-sfc.rst:166 msgid "" "The following example uses the ``neutron`` command-line interface (CLI) to " "create a port chain consisting of three service function instances to handle " "HTTP (TCP) traffic flows from 192.168.1.11:1000 to 192.168.2.11:80." msgstr "" # #-#-#-#-# config-sfc.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-sfc.rst:170 ../shared/deploy-provider-networktrafficflow.txt:25 #: ../shared/deploy-selfservice-networktrafficflow.txt:27 msgid "Instance 1" msgstr "" #: ../config-sfc.rst:172 msgid "Name: vm1" msgstr "" #: ../config-sfc.rst:173 ../config-sfc.rst:179 msgid "Function: Firewall" msgstr "" #: ../config-sfc.rst:174 msgid "Port pair: [p1, p2]" msgstr "" # #-#-#-#-# config-sfc.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-sfc.rst:176 ../shared/deploy-provider-networktrafficflow.txt:29 #: ../shared/deploy-selfservice-networktrafficflow.txt:29 msgid "Instance 2" msgstr "" #: ../config-sfc.rst:178 msgid "Name: vm2" msgstr "" #: ../config-sfc.rst:180 msgid "Port pair: [p3, p4]" msgstr "" #: ../config-sfc.rst:182 msgid "Instance 3" msgstr "" #: ../config-sfc.rst:184 msgid "Name: vm3" msgstr "" #: ../config-sfc.rst:185 msgid "Function: Intrusion detection system (IDS)" msgstr "" #: ../config-sfc.rst:186 msgid "Port pair: [p5, p6]" msgstr "" #: ../config-sfc.rst:190 msgid "The example network ``net1`` must exist before creating ports on it." msgstr "" #: ../config-sfc.rst:192 msgid "Source the credentials of the project that owns the ``net1`` network." msgstr "" #: ../config-sfc.rst:194 msgid "Create ports on network ``net1`` and record the UUID values." msgstr "" #: ../config-sfc.rst:205 msgid "" "Launch service function instance ``vm1`` using ports ``p1`` and ``p2``, " "``vm2`` using ports ``p3`` and ``p4``, and ``vm3`` using ports ``p5`` and " "``p6``." msgstr "" #: ../config-sfc.rst:215 msgid "" "Replace ``P1_ID``, ``P2_ID``, ``P3_ID``, ``P4_ID``, ``P5_ID``, and ``P6_ID`` " "with the UUIDs of the respective ports." msgstr "" #: ../config-sfc.rst:220 msgid "" "This command requires additional options to successfully launch an instance. " "See the `CLI reference `_ for more information." msgstr "" #: ../config-sfc.rst:225 msgid "" "Alternatively, you can launch each instance with one network interface and " "attach additional ports later." msgstr "" #: ../config-sfc.rst:228 msgid "" "Create flow classifier ``FC1`` that matches the appropriate packet headers." msgstr "" #: ../config-sfc.rst:241 msgid "" "Create port pair ``PP1`` with ports ``p1`` and ``p2``, ``PP2`` with ports " "``p3`` and ``p4``, and ``PP3`` with ports ``p5`` and ``p6``." msgstr "" #: ../config-sfc.rst:261 msgid "" "Create port pair group ``PPG1`` with port pair ``PP1`` and ``PP2`` and " "``PPG3`` with port pair ``PP3``." msgstr "" #: ../config-sfc.rst:273 msgid "" "You can repeat the ``--port-pair`` option for multiple port pairs of " "functionally equivalent service functions." msgstr "" #: ../config-sfc.rst:276 msgid "" "Create port chain ``PC1`` with port pair groups ``PPG1`` and ``PPG2`` and " "flow classifier ``FC1``." msgstr "" #: ../config-sfc.rst:287 msgid "" "You can repeat the ``--port-pair-group`` option to specify additional port " "pair groups in the port chain. A port chain must contain at least one port " "pair group." msgstr "" #: ../config-sfc.rst:291 msgid "" "You can repeat the ``--flow-classifier`` option to specify multiple flow " "classifiers for a port chain. Each flow classifier identifies a flow." msgstr "" #: ../config-sfc.rst:296 msgid "Update a port chain or port pair group" msgstr "" #: ../config-sfc.rst:298 msgid "" "Use the ``port-chain-update`` command to dynamically add or remove port pair " "groups or flow classifiers on a port chain." msgstr "" #: ../config-sfc.rst:301 msgid "For example, add port pair group ``PPG3`` to port chain ``PC1``:" msgstr "" #: ../config-sfc.rst:309 msgid "For example, add flow classifier ``FC2`` to port chain ``PC1``:" msgstr "" #: ../config-sfc.rst:317 msgid "" "SFC steers traffic matching the additional flow classifier to the port pair " "groups in the port chain." msgstr "" #: ../config-sfc.rst:320 msgid "" "Use the ``port-pair-group-update`` command to perform dynamic scale-out or " "scale-in operations by adding or removing port pairs on a port pair group." msgstr "" #: ../config-sfc.rst:329 msgid "" "SFC performs load balancing/distribution over the additional service " "functions in the port pair group." msgstr "" #: ../config-sriov.rst:5 msgid "Using SR-IOV functionality" msgstr "" #: ../config-sriov.rst:7 msgid "" "The purpose of this page is to describe how to enable SR-IOV functionality " "available in OpenStack (using OpenStack Networking) as of the Juno release. " "This page serves as a how-to guide on configuring OpenStack Networking and " "OpenStack Compute to create neutron SR-IOV ports." msgstr "" #: ../config-sriov.rst:15 msgid "" "PCI-SIG Single Root I/O Virtualization and Sharing (SR-IOV) specification " "defines a standardized mechanism to virtualize PCIe devices. The mechanism " "can virtualize a single PCIe Ethernet controller to appear as multiple PCIe " "devices. You can directly assign each virtual PCIe device to a VM, bypassing " "the hypervisor and virtual switch layer. As a result, users are able to " "achieve low latency and near-line wire speed." msgstr "" #: ../config-sriov.rst:23 msgid "SR-IOV with ethernet" msgstr "" #: ../config-sriov.rst:25 msgid "The following terms are used over the document:" msgstr "" #: ../config-sriov.rst:31 msgid "Term" msgstr "" #: ../config-sriov.rst:32 msgid "Definition" msgstr "" #: ../config-sriov.rst:33 msgid "PF" msgstr "" #: ../config-sriov.rst:34 msgid "" "Physical Function. This is the physical Ethernet controller that supports SR-" "IOV." msgstr "" #: ../config-sriov.rst:36 msgid "VF" msgstr "" #: ../config-sriov.rst:37 msgid "" "Virtual Function. This is a virtual PCIe device created from a physical " "Ethernet controller." msgstr "" #: ../config-sriov.rst:41 msgid "In order to enable SR-IOV, the following steps are required:" msgstr "" #: ../config-sriov.rst:43 ../config-sriov.rst:92 msgid "Create Virtual Functions (Compute)" msgstr "" #: ../config-sriov.rst:44 msgid "Whitelist PCI devices in nova-compute (Compute)" msgstr "" #: ../config-sriov.rst:45 ../config-sriov.rst:242 msgid "Configure neutron-server (Controller)" msgstr "" #: ../config-sriov.rst:46 ../config-sriov.rst:283 msgid "Configure nova-scheduler (Controller)" msgstr "" #: ../config-sriov.rst:47 ../config-sriov.rst:302 msgid "Enable neutron sriov-agent (Compute)" msgstr "" #: ../config-sriov.rst:49 msgid "**Neutron sriov-agent**" msgstr "" #: ../config-sriov.rst:51 msgid "Neutron sriov-agent is required since Mitaka release." msgstr "" #: ../config-sriov.rst:53 msgid "" "Neutron sriov-agent allows you to set the admin state of ports and starting " "from Liberty allows you to control port security (enable and disable spoof " "checking) and QoS rate limit settings." msgstr "" #: ../config-sriov.rst:59 msgid "" "Neutron sriov-agent was optional before Mitaka, and was not enabled by " "default before Liberty." msgstr "" #: ../config-sriov.rst:65 msgid "" "QoS is supported since Liberty, while it has limitations. ``max_burst_kbps`` " "(burst over ``max_kbps``) is not supported. ``max_kbps`` is rounded to Mbps." msgstr "" #: ../config-sriov.rst:68 msgid "" "Security Group is not supported. the agent is only working with " "``firewall_driver = neutron.agent.firewall.NoopFirewallDriver``." msgstr "" #: ../config-sriov.rst:70 msgid "" "No OpenStack Dashboard integration. Users need to use CLI or API to create " "neutron SR-IOV ports." msgstr "" #: ../config-sriov.rst:72 msgid "Live migration is not supported for instances with SR-IOV ports." msgstr "" #: ../config-sriov.rst:73 msgid "" "ARP spoofing filtering was not supported before Mitaka when using neutron " "sriov-agent." msgstr "" #: ../config-sriov.rst:77 msgid "Environment example" msgstr "" #: ../config-sriov.rst:78 msgid "" "We recommend using Open vSwitch with VLAN as segregation. This way you can " "combine normal VMs without SR-IOV ports and instances with SR-IOV ports on a " "single neutron network." msgstr "" #: ../config-sriov.rst:85 msgid "" "Throughout this guide, eth3 is used as the PF and physnet2 is used as the " "provider network configured as a VLAN range. You are expected to change this " "according to your actual environment." msgstr "" #: ../config-sriov.rst:93 msgid "" "In this step, create the VFs for the network interface that will be used for " "SR-IOV. Use eth3 as PF, which is also used as the interface for Open vSwitch " "VLAN and has access to the private networks of all machines." msgstr "" #: ../config-sriov.rst:99 msgid "" "The step to create VFs differ between SR-IOV card Ethernet controller " "manufacturers. Currently the following manufacturers are known to work:" msgstr "" #: ../config-sriov.rst:102 msgid "Intel" msgstr "" #: ../config-sriov.rst:103 msgid "Mellanox" msgstr "" #: ../config-sriov.rst:104 msgid "QLogic" msgstr "" #: ../config-sriov.rst:106 msgid "" "For **Mellanox SR-IOV Ethernet cards** see: `Mellanox: HowTo Configure SR-" "IOV VFs `_" msgstr "" #: ../config-sriov.rst:110 msgid "" "To create the VFs on Ubuntu for **Intel SR-IOV Ethernet cards**, do the " "following:" msgstr "" #: ../config-sriov.rst:113 msgid "" "Make sure SR-IOV is enabled in BIOS, check for VT-d and make sure it is " "enabled. After enabling VT-d, enable IOMMU on Linux by adding " "``intel_iommu=on`` to kernel parameters. Edit the file ``/etc/default/grub``:" msgstr "" #: ../config-sriov.rst:122 msgid "Run the following if you have added new parameters:" msgstr "" #: ../config-sriov.rst:129 msgid "On each compute node, create the VFs via the PCI SYS interface:" msgstr "" #: ../config-sriov.rst:137 msgid "" "On some PCI devices, observe that when changing the amount of VFs you " "receive the error ``Device or resource busy``. In this case, you first need " "to set ``sriov_numvfs`` to ``0``, then set it to your new value." msgstr "" #: ../config-sriov.rst:143 msgid "" "Alternatively, you can create VFs by passing the ``max_vfs`` to the kernel " "module of your network interface. However, the ``max_vfs`` parameter has " "been deprecated, so the PCI SYS interface is the preferred method." msgstr "" #: ../config-sriov.rst:148 msgid "You can determine the maximum number of VFs a PF can support:" msgstr "" #: ../config-sriov.rst:155 msgid "" "If the interface is down, make sure it is set to ``up`` before launching a " "guest, otherwise the instance will fail to spawn:" msgstr "" #: ../config-sriov.rst:173 msgid "" "Now verify that the VFs have been created (should see Virtual Function " "device):" msgstr "" #: ../config-sriov.rst:180 msgid "Persist created VFs on reboot:" msgstr "" #: ../config-sriov.rst:189 msgid "" "The suggested way of making PCI SYS settings persistent is through :file:" "`sysfs.conf` but for unknown reason changing :file:`sysfs.conf` does not " "have any effect on Ubuntu 14.04." msgstr "" #: ../config-sriov.rst:193 msgid "" "For **QLogic SR-IOV Ethernet cards** see: `User's Guide OpenStack Deployment " "with SR-IOV Configuration `_" msgstr "" #: ../config-sriov.rst:199 msgid "Whitelist PCI devices nova-compute (Compute)" msgstr "" #: ../config-sriov.rst:201 msgid "" "Tell ``nova-compute`` which pci devices are allowed to be passed through. " "Edit the file ``nova.conf``:" msgstr "" #: ../config-sriov.rst:209 msgid "" "This tells nova that all VFs belonging to eth3 are allowed to be passed " "through to VMs and belong to the neutron provider network physnet2. Restart " "the ``nova-compute`` service for the changes to go into effect." msgstr "" #: ../config-sriov.rst:213 msgid "" "Alternatively the ``pci_passthrough_whitelist`` parameter also supports " "whitelisting by:" msgstr "" #: ../config-sriov.rst:216 msgid "" "PCI address: The address uses the same syntax as in ``lspci`` and an " "asterisk (*) can be used to match anything." msgstr "" #: ../config-sriov.rst:226 msgid "" "PCI ``vendor_id`` and ``product_id`` as displayed by the Linux utility " "``lspci``." msgstr "" #: ../config-sriov.rst:235 msgid "" "If the device defined by the PCI address or devname corresponds to a SR-IOV " "PF, all VFs under the PF will match the entry. Multiple " "pci_passthrough_whitelist entries per host are supported." msgstr "" #: ../config-sriov.rst:244 msgid "" "Add ``sriovnicswitch`` as mechanism driver, edit the file ``ml2_conf.ini``:" msgstr "" #: ../config-sriov.rst:250 msgid "" "Find out the ``vendor_id`` and ``product_id`` of your **VFs** by logging in " "to your compute node with VFs previously created:" msgstr "" #: ../config-sriov.rst:260 msgid "" "Update the ``ml2_conf_sriov.ini`` on each controller. In our case the " "``vendor_id`` is ``8086`` and the ``product_id`` is ``10ed``. Tell neutron " "the ``vendor_id`` and ``product_id`` of the VFs that are supported." msgstr "" #: ../config-sriov.rst:270 msgid "" "Add the newly configured ``ml2_conf_sriov.ini`` as parameter to the " "``neutron-server`` daemon. Edit the appropriate initialization script to " "configure the ``neutron-server`` service to load the SRIOV configuration " "file:" msgstr "" #: ../config-sriov.rst:280 msgid "" "For the changes to go into effect, restart the ``neutron-server`` service." msgstr "" #: ../config-sriov.rst:285 msgid "" "On every controller node running the ``nova-scheduler`` service, add " "``PciPassthroughFilter`` to the ``scheduler_default_filters`` parameter and " "add a new line for ``scheduler_available_filters`` parameter under the " "``[DEFAULT]`` section in ``nova.conf``:" msgstr "" #: ../config-sriov.rst:298 msgid "Restart the ``nova-scheduler`` service." msgstr "" #: ../config-sriov.rst:304 msgid "On each compute node, edit the file ``sriov_agent.ini``:" msgstr "" #: ../config-sriov.rst:317 msgid "" "The ``physical_device_mappings`` parameter is not limited to be a 1-1 " "mapping between physnets and NICs. This enables you to map the same physnet " "to more than one NIC. For example, if ``physnet2`` is connected to ``eth3`` " "and ``eth4``, then ``physnet2:eth3,physnet2:eth4`` is a valid option." msgstr "" #: ../config-sriov.rst:323 msgid "" "The ``exclude_devices`` parameter is empty, therefore, all the VFs " "associated with eth3 may be configured by the agent. To exclude specific " "VFs, add them to the ``exclude_devices`` parameter as follows:" msgstr "" #: ../config-sriov.rst:331 msgid "Test whether the neutron sriov-agent runs successfully:" msgstr "" #: ../config-sriov.rst:337 msgid "Enable the neutron sriov-agent service." msgstr "" #: ../config-sriov.rst:341 msgid "Creating instances with SR-IOV ports" msgstr "" #: ../config-sriov.rst:342 msgid "" "After the configuration is done, you can now launch Instances with neutron " "SR-IOV ports." msgstr "" #: ../config-sriov.rst:345 msgid "" "Get the id of the neutron network where you want the SR-IOV port to be " "created:" msgstr "" #: ../config-sriov.rst:352 msgid "" "Create the SR-IOV port. We specify ``vnic_type=direct``, but other options " "include ``normal``, ``direct-physical``, and ``macvtap``:" msgstr "" #: ../config-sriov.rst:359 msgid "" "Create the VM. For the nic we specify the SR-IOV port created in step 2:" msgstr "" #: ../config-sriov.rst:368 msgid "" "There are two ways to attach VFs to an instance. You can create a neutron SR-" "IOV port or use the ``pci_alias`` in nova. For more information about using " "``pci_alias``, refer to `nova-api configuration`_." msgstr "" #: ../config-sriov.rst:375 msgid "SR-IOV with InfiniBand" msgstr "" #: ../config-sriov.rst:377 msgid "" "The support for SR-IOV with InfiniBand allows a Virtual PCI device (VF) to " "be directly mapped to the guest, allowing higher performance and advanced " "features such as RDMA (remote direct memory access). To use this feature, " "you must:" msgstr "" #: ../config-sriov.rst:382 msgid "Use InfiniBand enabled network adapters." msgstr "" #: ../config-sriov.rst:384 msgid "Run InfiniBand subnet managers to enable InfiniBand fabric." msgstr "" #: ../config-sriov.rst:386 msgid "" "All InfiniBand networks must have a subnet manager running for the network " "to function. This is true even when doing a simple network of two machines " "with no switch and the cards are plugged in back-to-back. A subnet manager " "is required for the link on the cards to come up. It is possible to have " "more than one subnet manager. In this case, one of them will act as the " "master, and any other will act as a slave that will take over when the " "master subnet manager fails." msgstr "" #: ../config-sriov.rst:394 msgid "Install the ``ebrctl`` utility on the compute nodes." msgstr "" #: ../config-sriov.rst:396 msgid "" "Check that ``ebrctl`` is listed somewhere in ``/etc/nova/rootwrap.d/*``:" msgstr "" #: ../config-sriov.rst:402 msgid "" "If ``ebrctl`` does not appear in any of the rootwrap files, add this to the " "``/etc/nova/rootwrap.d/compute.filters`` file in the ``[Filters]`` section." msgstr "" # #-#-#-#-# config-subnet-pools.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# intro-os-networking.pot (Networking Guide 0.9) #-#-#-#-# #: ../config-subnet-pools.rst:5 ../intro-os-networking.rst:165 msgid "Subnet pools" msgstr "" #: ../config-subnet-pools.rst:7 msgid "" "Subnet pools have been made available since the Kilo release. It is a simple " "feature that has the potential to improve your workflow considerably. It " "also provides a building block from which other new features will be built " "in to OpenStack Networking." msgstr "" #: ../config-subnet-pools.rst:12 msgid "" "To see if your cloud has this feature available, you can check that it is " "listed in the supported aliases. You can do this with the neutron client." msgstr "" #: ../config-subnet-pools.rst:21 msgid "Why you need them" msgstr "" #: ../config-subnet-pools.rst:23 msgid "" "Before Kilo, Networking had no automation around the addresses used to " "create a subnet. To create one, you had to come up with the addresses on " "your own without any help from the system. There are valid use cases for " "this but if you are interested in the following capabilities, then subnet " "pools might be for you." msgstr "" #: ../config-subnet-pools.rst:29 msgid "" "First, would not it be nice if you could turn your pool of addresses over to " "Neutron to take care of? When you need to create a subnet, you just ask for " "addresses to be allocated from the pool. You do not have to worry about what " "you have already used and what addresses are in your pool. Subnet pools can " "do this." msgstr "" #: ../config-subnet-pools.rst:35 msgid "" "Second, subnet pools can manage addresses across projects. The addresses are " "guaranteed not to overlap. If the addresses come from an externally routable " "pool then you know that all of the projects have addresses which are " "*routable* and unique. This can be useful in the following scenarios." msgstr "" #: ../config-subnet-pools.rst:40 msgid "IPv6 since OpenStack Networking has no IPv6 floating IPs." msgstr "" #: ../config-subnet-pools.rst:41 msgid "Routing directly to a project network from an external network." msgstr "" #: ../config-subnet-pools.rst:44 msgid "How they work" msgstr "" #: ../config-subnet-pools.rst:46 msgid "" "A subnet pool manages a pool of addresses from which subnets can be " "allocated. It ensures that there is no overlap between any two subnets " "allocated from the same pool." msgstr "" #: ../config-subnet-pools.rst:50 msgid "" "As a regular project in an OpenStack cloud, you can create a subnet pool of " "your own and use it to manage your own pool of addresses. This does not " "require any admin privileges. Your pool will not be visible to any other " "project." msgstr "" #: ../config-subnet-pools.rst:54 msgid "" "If you are an admin, you can create a pool which can be accessed by any " "regular project. Being a shared resource, there is a quota mechanism to " "arbitrate access." msgstr "" #: ../config-subnet-pools.rst:59 msgid "Quotas" msgstr "" #: ../config-subnet-pools.rst:61 msgid "" "Subnet pools have a quota system which is a little bit different than other " "quotas in Neutron. Other quotas in Neutron count discrete instances of an " "object against a quota. Each time you create something like a router, " "network, or a port, it uses one from your total quota." msgstr "" #: ../config-subnet-pools.rst:66 msgid "" "With subnets, the resource is the IP address space. Some subnets take more " "of it than others. For example, 203.0.113.0/24 uses 256 addresses in one " "subnet but 198.51.100.224/28 uses only 16. If address space is limited, the " "quota system can encourage efficient use of the space." msgstr "" #: ../config-subnet-pools.rst:71 msgid "" "With IPv4, the default_quota can be set to the number of absolute addresses " "any given project is allowed to consume from the pool. For example, with a " "quota of 128, I might get 203.0.113.128/26, 203.0.113.224/28, and still have " "room to allocate 48 more addresses in the future." msgstr "" #: ../config-subnet-pools.rst:77 msgid "" "With IPv6 it is a little different. It is not practical to count individual " "addresses. To avoid ridiculously large numbers, the quota is expressed in " "the number of /64 subnets which can be allocated. For example, with a " "default_quota of 3, I might get 2001:db8:c18e:c05a::/64, 2001:" "db8:221c:8ef3::/64, and still have room to allocate one more prefix in the " "future." msgstr "" #: ../config-subnet-pools.rst:85 msgid "Default subnet pools" msgstr "" #: ../config-subnet-pools.rst:87 msgid "" "Beginning with Mitaka, a subnet pool can be marked as the default. This is " "handled with a new extension." msgstr "" #: ../config-subnet-pools.rst:95 msgid "" "An administrator can mark a pool as default. Only one pool from each address " "family can be marked default." msgstr "" #: ../config-subnet-pools.rst:103 msgid "" "If there is a default, it can be requested by passing :option:`--use-default-" "subnetpool` instead of :option:`--subnetpool SUBNETPOOL`." msgstr "" #: ../config-subnet-pools.rst:108 msgid "Demo" msgstr "" #: ../config-subnet-pools.rst:110 msgid "" "If you have access to an OpenStack Kilo or later based neutron, you can play " "with this feature now. Give it a try. All of the following commands work " "equally as well with IPv6 addresses." msgstr "" #: ../config-subnet-pools.rst:114 msgid "First, as admin, create a shared subnet pool:" msgstr "" #: ../config-subnet-pools.rst:134 msgid "" "The ``default_prefixlen`` defines the subnet size you will get if you do not " "specify :option:`--prefixlen` when creating a subnet." msgstr "" #: ../config-subnet-pools.rst:137 msgid "" "Do essentially the same thing for IPv6 and there are now two subnet pools. " "Regular projects can see them. (the output is trimmed a bit for display)" msgstr "" #: ../config-subnet-pools.rst:151 msgid "Now, use them. It is easy to create a subnet from a pool:" msgstr "" #: ../config-subnet-pools.rst:169 msgid "" "You can request a specific subnet from the pool. You need to specify a " "subnet that falls within the pool's prefixes. If the subnet is not already " "allocated, the request succeeds. You can leave off the IP version because it " "is deduced from the subnet pool." msgstr "" #: ../config-subnet-pools.rst:191 msgid "If the pool becomes exhausted, load some more prefixes:" msgstr "" #: ../config.rst:33 msgid "" "For general configuration, see the `Configuration Reference `_." msgstr "" #: ../deploy-lb-ha-vrrp.rst:5 msgid "Linux bridge: High availability using VRRP" msgstr "" #: ../deploy-lb-ha-vrrp.rst:11 msgid "" "This high-availability mechanism is not compatible with the layer-2 " "population mechanism. You must disable layer-2 population in the " "``linuxbridge_agent.ini`` file and restart the Linux bridge agent on all " "existing network and compute nodes prior to deploying the example " "configuration." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-ha-vrrp.rst:20 ../deploy-lb-selfservice.rst:21 #: ../deploy-ovs-ha-vrrp.rst:12 ../deploy-ovs-selfservice.rst:16 msgid "Add one network node with the following components:" msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-ha-vrrp.rst:23 ../deploy-ovs-ha-vrrp.rst:15 msgid "" "OpenStack Networking layer-2 agent, layer-3 agent, and any dependencies." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-ha-vrrp.rst:28 ../deploy-ovs-ha-vrrp.rst:20 msgid "" "You can keep the DHCP and metadata agents on each compute node or move them " "to the network nodes." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-ha-vrrp.rst:37 ../deploy-ovs-ha-vrrp.rst:29 msgid "" "The following figure shows components and connectivity for one self-service " "network and one untagged (flat) network. The master router resides on " "network node 1. In this particular case, the instance resides on the same " "compute node as the DHCP agent for the network. If the DHCP agent resides on " "another compute node, the latter only contains a DHCP namespace and Linux " "bridge with a port on the overlay physical network interface." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-ha-vrrp.rst:50 ../deploy-ovs-ha-vrrp.rst:42 msgid "" "Use the following example configuration as a template to add support for " "high-availability using VRRP to an existing operational environment that " "supports self-service networks." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-ha-vrrp.rst:59 ../deploy-ovs-ha-vrrp.rst:51 msgid "Enable VRRP." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-ha-vrrp.rst:71 ../deploy-ovs-ha-vrrp.rst:63 msgid "Network node 1" msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-ha-vrrp.rst:76 ../deploy-ovs-ha-vrrp.rst:68 msgid "Network node 2" msgstr "" #: ../deploy-lb-ha-vrrp.rst:78 msgid "" "Install the Networking service Linux bridge layer-2 agent and layer-3 agent." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-ha-vrrp.rst:85 ../deploy-lb-selfservice.rst:112 msgid "In the ``linuxbridge_agent.ini`` file, configure the layer-2 agent." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-ha-vrrp.rst:99 ../deploy-lb-provider.rst:176 #: ../deploy-lb-selfservice.rst:127 ../deploy-ovs-provider.rst:223 msgid "" "Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface " "that handles provider networks. For example, ``eth1``." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-ha-vrrp.rst:102 ../deploy-lb-selfservice.rst:130 #: ../deploy-lb-selfservice.rst:164 ../deploy-ovs-ha-vrrp.rst:103 #: ../deploy-ovs-selfservice.rst:133 ../deploy-ovs-selfservice.rst:169 msgid "" "Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the " "interface that handles VXLAN overlays for self-service networks." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-ha-vrrp.rst:105 ../deploy-lb-selfservice.rst:133 #: ../deploy-ovs-ha-dvr.rst:126 ../deploy-ovs-ha-vrrp.rst:106 #: ../deploy-ovs-selfservice.rst:136 msgid "In the ``l3_agent.ini`` file, configure the layer-3 agent." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-ha-vrrp.rst:121 ../deploy-lb-selfservice.rst:149 #: ../deploy-ovs-ha-dvr.rst:108 ../deploy-ovs-ha-vrrp.rst:122 #: ../deploy-ovs-selfservice.rst:152 msgid "Layer-3 agent" msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-ha-vrrp.rst:132 ../deploy-lb-selfservice.rst:175 #: ../deploy-ovs-ha-dvr.rst:144 ../deploy-ovs-ha-vrrp.rst:133 #: ../deploy-ovs-selfservice.rst:180 msgid "Verify presence and operation of the agents." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-ha-vrrp.rst:163 ../deploy-ovs-ha-vrrp.rst:164 msgid "Verify failover operation" msgstr "" #: ../deploy-lb-ha-vrrp.rst:170 msgid "" "This high-availability mechanism simply augments :ref:`deploy-lb-" "selfservice` with failover of layer-3 services to another router if the " "master router fails. Thus, you can reference :ref:`Self-service network " "traffic flow ` for normal " "operation." msgstr "" #: ../deploy-lb-provider.rst:5 msgid "Linux bridge: Provider networks" msgstr "" #: ../deploy-lb-provider.rst:7 msgid "" "The provider networks architecture example provides layer-2 connectivity " "between instances and the physical network infrastructure using VLAN " "(802.1q) tagging. It supports one untagged (flat) network and and up to 4095 " "tagged (VLAN) networks. The actual quantity of VLAN networks depends on the " "physical network infrastructure. For more information on provider networks, " "see :ref:`intro-os-networking-provider`." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:17 ../deploy-ovs-provider.rst:27 msgid "One controller node with the following components:" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:19 ../deploy-lb-provider.rst:24 #: ../deploy-ovs-provider.rst:29 ../deploy-ovs-provider.rst:34 #: ../deploy.rst:51 ../deploy.rst:72 msgid "Two network interfaces: management and provider." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:20 ../deploy-ovs-provider.rst:30 msgid "OpenStack Networking server service and ML2 plug-in." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:22 ../deploy-ovs-provider.rst:32 msgid "Two compute nodes with the following components:" msgstr "" #: ../deploy-lb-provider.rst:25 msgid "" "OpenStack Networking Linux bridge layer-2 agent, DHCP agent, metadata agent, " "and any dependencies." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:30 ../deploy-ovs-provider.rst:40 msgid "" "Larger deployments typically deploy the DHCP and metadata agents on a subset " "of compute nodes to increase performance and redundancy. However, too many " "agents can overwhelm the message bus. Also, to further simplify any " "deployment, you can omit the metadata agent and use a configuration drive to " "provide metadata to instances." msgstr "" #: ../deploy-lb-provider.rst:42 msgid "" "The following figure shows components and connectivity for one untagged " "(flat) network. In this particular case, the instance resides on the same " "compute node as the DHCP agent for the network. If the DHCP agent resides on " "another compute node, the latter only contains a DHCP namespace and Linux " "bridge with a port on the provider physical network interface." msgstr "" #: ../deploy-lb-provider.rst:51 msgid "" "The following figure describes virtual connectivity among components for two " "tagged (VLAN) networks. Essentially, each network uses a separate bridge " "that contains a port on the VLAN sub-interface on the provider physical " "network interface. Similar to the single untagged network case, the DHCP " "agent may reside on a different compute node." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:62 ../deploy-ovs-provider.rst:73 msgid "" "These figures omit the controller node because it does not handle instance " "network traffic." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:68 ../deploy-ovs-provider.rst:79 msgid "" "Use the following example configuration as a template to deploy provider " "networks in your environment." msgstr "" #: ../deploy-lb-provider.rst:74 msgid "" "Install the Networking service components that provides the ``neutron-" "server`` service and ML2 plug-in." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:79 ../deploy-ovs-provider.rst:90 msgid "Configure common options:" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:83 ../deploy-ovs-provider.rst:94 msgid "" "Disable service plug-ins because provider networks do not require any. " "However, this breaks portions of the dashboard that manage the Networking " "service. See the `Installation Guide `__ for more " "information." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:94 ../deploy-ovs-provider.rst:105 msgid "" "Enable two DHCP agents per network so both compute nodes can provide DHCP " "service provider networks." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:102 ../deploy-ovs-provider.rst:113 msgid "If necessary, :ref:`configure MTU `." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:106 ../deploy-ovs-provider.rst:117 msgid "Configure drivers and network types:" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:116 ../deploy-ovs-provider.rst:127 msgid "Configure network mappings:" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:128 ../deploy-ovs-provider.rst:139 msgid "" "The ``tenant_network_types`` option contains no value because the " "architecture does not support self-service networks." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:133 ../deploy-ovs-provider.rst:144 msgid "" "The ``provider`` value in the ``network_vlan_ranges`` option lacks VLAN ID " "ranges to support use of arbitrary VLAN IDs." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:136 ../deploy-ovs-provider.rst:147 msgid "Configure the security group driver:" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:143 ../deploy-ovs-provider.rst:154 msgid "Populate the database." msgstr "" #: ../deploy-lb-provider.rst:157 msgid "Install the Networking service Linux bridge layer-2 agent." msgstr "" #: ../deploy-lb-provider.rst:163 msgid "" "In the ``linuxbridge_agent.ini`` file, configure the Linux bridge agent:" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:179 ../deploy-ovs-provider.rst:187 msgid "In the ``dhcp_agent.ini`` file, configure the DHCP agent:" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:187 ../deploy-ovs-provider.rst:195 msgid "In the ``metadata_agent.ini`` file, configure the metadata agent:" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:195 ../deploy-ovs-provider.rst:203 msgid "" "The value of ``METADATA_SECRET`` must match the value of the same option in " "the ``[neutron]`` section of the ``nova.conf`` file." msgstr "" #: ../deploy-lb-provider.rst:240 msgid "North-south scenario: Instance with a fixed IP address" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:242 ../deploy-ovs-provider.rst:270 msgid "The instance resides on compute node 1 and uses provider network 1." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:243 ../deploy-lb-selfservice.rst:220 #: ../deploy-ovs-provider.rst:271 ../deploy-ovs-selfservice.rst:227 msgid "The instance sends a packet to a host on the Internet." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:245 ../deploy-ovs-provider.rst:273 msgid "The following steps involve compute node 1." msgstr "" #: ../deploy-lb-provider.rst:247 msgid "" "The instance interface (1) forwards the packet to the provider bridge " "instance port (2) via ``veth`` pair." msgstr "" #: ../deploy-lb-provider.rst:249 ../deploy-lb-provider.rst:286 #: ../deploy-lb-provider.rst:332 msgid "" "Security group rules (3) on the provider bridge handle firewalling and " "connection tracking for the packet." msgstr "" #: ../deploy-lb-provider.rst:251 ../deploy-lb-provider.rst:288 #: ../deploy-lb-provider.rst:334 msgid "" "The VLAN sub-interface port (4) on the provider bridge forwards the packet " "to the physical network interface (5)." msgstr "" #: ../deploy-lb-provider.rst:253 ../deploy-lb-provider.rst:290 #: ../deploy-lb-provider.rst:336 msgid "" "The physical network interface (5) adds VLAN tag 101 to the packet and " "forwards it to the physical network infrastructure switch (6)." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:256 ../deploy-lb-provider.rst:293 #: ../deploy-lb-provider.rst:339 ../deploy-ovs-provider.rst:291 #: ../deploy-ovs-provider.rst:335 ../deploy-ovs-provider.rst:394 msgid "The following steps involve the physical network infrastructure:" msgstr "" #: ../deploy-lb-provider.rst:258 ../deploy-lb-provider.rst:341 msgid "" "The switch removes VLAN tag 101 from the packet and forwards it to the " "router (7)." msgstr "" #: ../deploy-lb-provider.rst:260 msgid "" "The router routes the packet from the provider network (8) to the external " "network (9) and forwards the packet to the switch (10)." msgstr "" #: ../deploy-lb-provider.rst:262 msgid "The switch forwards the packet to the external network (11)." msgstr "" #: ../deploy-lb-provider.rst:263 msgid "The external network (12) receives the packet." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:270 ../deploy-lb-provider.rst:311 #: ../deploy-lb-provider.rst:363 ../deploy-lb-selfservice.rst:360 #: ../deploy-lb-selfservice.rst:419 ../deploy-ovs-provider.rst:305 #: ../deploy-ovs-provider.rst:359 ../deploy-ovs-provider.rst:425 #: ../deploy-ovs-selfservice.rst:422 ../deploy-ovs-selfservice.rst:504 msgid "Return traffic follows similar steps in reverse." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:273 ../deploy-lb-selfservice.rst:319 #: ../deploy-ovs-provider.rst:308 ../deploy-ovs-selfservice.rst:362 msgid "East-west scenario 1: Instances on the same network" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:275 ../deploy-ovs-provider.rst:310 msgid "" "Instances on the same network communicate directly between compute nodes " "containing those instances." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:278 ../deploy-lb-provider.rst:318 #: ../deploy-ovs-provider.rst:313 ../deploy-ovs-provider.rst:366 msgid "Instance 1 resides on compute node 1 and uses provider network 1." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:279 ../deploy-ovs-provider.rst:314 msgid "Instance 2 resides on compute node 2 and uses provider network 1." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:280 ../deploy-lb-provider.rst:320 #: ../deploy-lb-selfservice.rst:336 ../deploy-lb-selfservice.rst:374 #: ../deploy-ovs-provider.rst:315 ../deploy-ovs-provider.rst:368 #: ../deploy-ovs-selfservice.rst:380 ../deploy-ovs-selfservice.rst:433 msgid "Instance 1 sends a packet to instance 2." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:282 ../deploy-lb-selfservice.rst:222 #: ../deploy-lb-selfservice.rst:338 ../deploy-ovs-ha-dvr.rst:484 #: ../deploy-ovs-provider.rst:317 ../deploy-ovs-selfservice.rst:229 #: ../deploy-ovs-selfservice.rst:382 msgid "The following steps involve compute node 1:" msgstr "" #: ../deploy-lb-provider.rst:284 ../deploy-lb-provider.rst:330 msgid "" "The instance 1 interface (1) forwards the packet to the provider bridge " "instance port (2) via ``veth`` pair." msgstr "" #: ../deploy-lb-provider.rst:295 msgid "" "The switch forwards the packet from compute node 1 to compute node 2 (7)." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:297 ../deploy-lb-selfservice.rst:349 #: ../deploy-ovs-ha-dvr.rst:510 ../deploy-ovs-provider.rst:339 #: ../deploy-ovs-selfservice.rst:399 msgid "The following steps involve compute node 2:" msgstr "" #: ../deploy-lb-provider.rst:299 msgid "" "The physical network interface (8) removes VLAN tag 101 from the packet and " "forwards it to the VLAN sub-interface port (9) on the provider bridge." msgstr "" #: ../deploy-lb-provider.rst:301 msgid "" "Security group rules (10) on the provider bridge handle firewalling and " "connection tracking for the packet." msgstr "" #: ../deploy-lb-provider.rst:303 msgid "" "The provider bridge instance port (11) forwards the packet to the instance 2 " "interface (12) via ``veth`` pair." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:314 ../deploy-lb-selfservice.rst:366 #: ../deploy-ovs-provider.rst:362 ../deploy-ovs-selfservice.rst:425 msgid "East-west scenario 2: Instances on different networks" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:316 ../deploy-ovs-provider.rst:364 msgid "" "Instances communicate via router on the physical network infrastructure." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:319 ../deploy-ovs-provider.rst:367 msgid "Instance 2 resides on compute node 1 and uses provider network 2." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:324 ../deploy-ovs-provider.rst:372 msgid "" "Both instances reside on the same compute node to illustrate how VLAN " "tagging enables multiple logical layer-2 networks to use the same physical " "layer-2 network." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-provider.rst:328 ../deploy-lb-provider.rst:349 #: ../deploy-lb-selfservice.rst:300 ../deploy-lb-selfservice.rst:381 #: ../deploy-lb-selfservice.rst:408 ../deploy-ovs-ha-dvr.rst:432 #: ../deploy-ovs-provider.rst:376 ../deploy-ovs-provider.rst:404 #: ../deploy-ovs-selfservice.rst:334 ../deploy-ovs-selfservice.rst:440 #: ../deploy-ovs-selfservice.rst:484 msgid "The following steps involve the compute node:" msgstr "" #: ../deploy-lb-provider.rst:343 msgid "" "The router routes the packet from provider network 1 (8) to provider network " "2 (9)." msgstr "" #: ../deploy-lb-provider.rst:345 msgid "The router forwards the packet to the switch (10)." msgstr "" #: ../deploy-lb-provider.rst:346 msgid "" "The switch adds VLAN tag 102 to the packet and forwards it to compute node 1 " "(11)." msgstr "" #: ../deploy-lb-provider.rst:351 msgid "" "The physical network interface (12) removes VLAN tag 102 from the packet and " "forwards it to the VLAN sub-interface port (13) on the provider bridge." msgstr "" #: ../deploy-lb-provider.rst:353 msgid "" "Security group rules (14) on the provider bridge handle firewalling and " "connection tracking for the packet." msgstr "" #: ../deploy-lb-provider.rst:355 msgid "" "The provider bridge instance port (15) forwards the packet to the instance 2 " "interface (16) via ``veth`` pair." msgstr "" #: ../deploy-lb-selfservice.rst:5 msgid "Linux bridge: Self-service networks" msgstr "" #: ../deploy-lb-selfservice.rst:7 msgid "" "This architecture example augments :ref:`deploy-lb-provider` to support a " "nearly limitless quantity of entirely virtual networks. Although the " "Networking service supports VLAN self-service networks, this example focuses " "on VXLAN self-service networks. For more information on self-service " "networks, see :ref:`intro-os-networking-selfservice`." msgstr "" #: ../deploy-lb-selfservice.rst:15 msgid "" "The Linux bridge agent lacks support for other overlay protocols such as GRE " "and Geneve." msgstr "" #: ../deploy-lb-selfservice.rst:25 msgid "OpenStack Networking Linux bridge layer-2 agent, layer-3 agent, and any" msgstr "" #: ../deploy-lb-selfservice.rst:25 msgid "dependencies." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:27 ../deploy-ovs-ha-dvr.rst:34 #: ../deploy-ovs-selfservice.rst:22 msgid "Modify the compute nodes with the following components:" msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:29 ../deploy-ovs-selfservice.rst:24 msgid "Add one network interface: overlay." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:33 ../deploy-ovs-selfservice.rst:28 msgid "" "You can keep the DHCP and metadata agents on each compute node or move them " "to the network node." msgstr "" #: ../deploy-lb-selfservice.rst:42 msgid "" "The following figure shows components and connectivity for one self-service " "network and one untagged (flat) provider network. In this particular case, " "the instance resides on the same compute node as the DHCP agent for the " "network. If the DHCP agent resides on another compute node, the latter only " "contains a DHCP namespace and Linux bridge with a port on the overlay " "physical network interface." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:55 ../deploy-ovs-selfservice.rst:49 msgid "" "Use the following example configuration as a template to add support for " "self-service networks to an existing operational environment that supports " "provider networks." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:64 ../deploy-ovs-selfservice.rst:58 msgid "Enable routing and allow overlapping IP address ranges." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:74 ../deploy-ovs-selfservice.rst:68 msgid "Add ``vxlan`` to type drivers and project network types." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:82 ../deploy-ovs-selfservice.rst:76 msgid "Enable the layer-2 population mechanism driver." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:89 ../deploy-ovs-selfservice.rst:83 msgid "Configure the VXLAN network ID (VNI) range." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:96 ../deploy-ovs-selfservice.rst:90 msgid "" "Replace ``VNI_START`` and ``VNI_END`` with appropriate numerical values." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:104 ../deploy-ovs-ha-dvr.rst:83 #: ../deploy-ovs-selfservice.rst:98 msgid "Network node" msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:106 ../deploy-ovs-ha-dvr.rst:113 msgid "Install the Networking service layer-3 agent." msgstr "" #: ../deploy-lb-selfservice.rst:154 msgid "" "In the ``linuxbridge_agent.ini`` file, enable VXLAN support including " "layer-2 population." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:211 ../deploy-ovs-ha-dvr.rst:407 #: ../deploy-ovs-selfservice.rst:218 msgid "North-south scenario 1: Instance with a fixed IP address" msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:213 ../deploy-ovs-selfservice.rst:220 msgid "" "For instances with a fixed IPv4 address, the network node performs SNAT on " "north-south traffic passing from self-service to external networks such as " "the Internet. For instances with a fixed IPv6 address, the network node " "performs conventional routing of traffic between self-service and external " "networks." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:219 ../deploy-lb-selfservice.rst:273 #: ../deploy-ovs-selfservice.rst:226 ../deploy-ovs-selfservice.rst:298 msgid "The instance resides on compute node 1 and uses self-service network 1." msgstr "" #: ../deploy-lb-selfservice.rst:224 msgid "" "The instance interface (1) forwards the packet to the self-service bridge " "instance port (2) via ``veth`` pair." msgstr "" #: ../deploy-lb-selfservice.rst:226 ../deploy-lb-selfservice.rst:342 #: ../deploy-lb-selfservice.rst:385 msgid "" "Security group rules (3) on the self-service bridge handle firewalling and " "connection tracking for the packet." msgstr "" #: ../deploy-lb-selfservice.rst:228 ../deploy-lb-selfservice.rst:344 #: ../deploy-lb-selfservice.rst:387 msgid "" "The self-service bridge forwards the packet to the VXLAN interface (4) which " "wraps the packet using VNI 101." msgstr "" #: ../deploy-lb-selfservice.rst:230 ../deploy-lb-selfservice.rst:389 msgid "" "The underlying physical interface (5) for the VXLAN interface forwards the " "packet to the network node via the overlay network (6)." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:233 ../deploy-lb-selfservice.rst:276 #: ../deploy-lb-selfservice.rst:392 ../deploy-ovs-selfservice.rst:246 #: ../deploy-ovs-selfservice.rst:301 ../deploy-ovs-selfservice.rst:457 msgid "The following steps involve the network node:" msgstr "" #: ../deploy-lb-selfservice.rst:235 ../deploy-lb-selfservice.rst:351 #: ../deploy-lb-selfservice.rst:394 msgid "" "The underlying physical interface (7) for the VXLAN interface forwards the " "packet to the VXLAN interface (8) which unwraps the packet." msgstr "" #: ../deploy-lb-selfservice.rst:237 msgid "" "The self-service bridge router port (9) forwards the packet to the self-" "service network interface (10) in the router namespace." msgstr "" #: ../deploy-lb-selfservice.rst:240 msgid "" "For IPv4, the router performs SNAT on the packet which changes the source IP " "address to the router IP address on the provider network and sends it to the " "gateway IP address on the provider network via the gateway interface on the " "provider network (11)." msgstr "" #: ../deploy-lb-selfservice.rst:244 msgid "" "For IPv6, the router sends the packet to the next-hop IP address, typically " "the gateway IP address on the provider network, via the provider gateway " "interface (11)." msgstr "" #: ../deploy-lb-selfservice.rst:248 msgid "The router forwards the packet to the provider bridge router port (12)." msgstr "" #: ../deploy-lb-selfservice.rst:250 msgid "" "The VLAN sub-interface port (13) on the provider bridge forwards the packet " "to the provider physical network interface (14)." msgstr "" #: ../deploy-lb-selfservice.rst:252 msgid "" "The provider physical network interface (14) adds VLAN tag 101 to the packet " "and forwards it to the Internet via physical network infrastructure (15)." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:257 ../deploy-ovs-selfservice.rst:282 msgid "" "Return traffic follows similar steps in reverse. However, without a floating " "IPv4 address, hosts on the provider or external networks cannot originate " "connections to instances on the self-service network." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:265 ../deploy-ovs-ha-dvr.rst:418 #: ../deploy-ovs-selfservice.rst:290 msgid "North-south scenario 2: Instance with a floating IPv4 address" msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:267 ../deploy-ovs-selfservice.rst:292 msgid "" "For instances with a floating IPv4 address, the network node performs SNAT " "on north-south traffic passing from the instance to external networks such " "as the Internet and DNAT on north-south traffic passing from external " "networks to the instance. Floating IP addresses and NAT do not apply to " "IPv6. Thus, the network node routes IPv6 traffic in this scenario." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:274 ../deploy-ovs-ha-dvr.rst:430 #: ../deploy-ovs-selfservice.rst:299 msgid "A host on the Internet sends a packet to the instance." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:278 ../deploy-ovs-ha-dvr.rst:434 #: ../deploy-ovs-selfservice.rst:303 msgid "" "The physical network infrastructure (1) forwards the packet to the provider " "physical network interface (2)." msgstr "" #: ../deploy-lb-selfservice.rst:280 msgid "" "The provider physical network interface removes VLAN tag 101 and forwards " "the packet to the VLAN sub-interface on the provider bridge." msgstr "" #: ../deploy-lb-selfservice.rst:282 msgid "" "The provider bridge forwards the packet to the self-service router gateway " "port on the provider network (5)." msgstr "" #: ../deploy-lb-selfservice.rst:285 msgid "" "For IPv4, the router performs DNAT on the packet which changes the " "destination IP address to the instance IP address on the self-service " "network and sends it to the gateway IP address on the self-service network " "via the self-service interface (6)." msgstr "" #: ../deploy-lb-selfservice.rst:289 msgid "" "For IPv6, the router sends the packet to the next-hop IP address, typically " "the gateway IP address on the self-service network, via the self-service " "interface (6)." msgstr "" #: ../deploy-lb-selfservice.rst:293 msgid "" "The router forwards the packet to the self-service bridge router port (7)." msgstr "" #: ../deploy-lb-selfservice.rst:295 msgid "" "The self-service bridge forwards the packet to the VXLAN interface (8) which " "wraps the packet using VNI 101." msgstr "" #: ../deploy-lb-selfservice.rst:297 msgid "" "The underlying physical interface (9) for the VXLAN interface forwards the " "packet to the network node via the overlay network (10)." msgstr "" #: ../deploy-lb-selfservice.rst:302 msgid "" "The underlying physical interface (11) for the VXLAN interface forwards the " "packet to the VXLAN interface (12) which unwraps the packet." msgstr "" #: ../deploy-lb-selfservice.rst:304 msgid "" "Security group rules (13) on the self-service bridge handle firewalling and " "connection tracking for the packet." msgstr "" #: ../deploy-lb-selfservice.rst:306 msgid "" "The self-service bridge instance port (14) forwards the packet to the " "instance interface (15) via ``veth`` pair." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:311 ../deploy-ovs-selfservice.rst:357 msgid "" "Egress instance traffic flows similar to north-south scenario 1, except SNAT " "changes the source IP address of the packet to the floating IPv4 address " "rather than the router IP address on the provider network." msgstr "" #: ../deploy-lb-selfservice.rst:321 msgid "" "Instances with a fixed IPv4/IPv6 or floating IPv4 address on the same " "network communicate directly between compute nodes containing those " "instances." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:324 ../deploy-ovs-selfservice.rst:368 msgid "" "By default, the VXLAN protocol lacks knowledge of target location and uses " "multicast to discover it. After discovery, it stores the location in the " "local forwarding database. In large deployments, the discovery process can " "generate a significant amount of network that all nodes must process. To " "eliminate the latter and generally increase efficiency, the Networking " "service includes the layer-2 population mechanism driver that automatically " "populates the forwarding database for VXLAN interfaces. The example " "configuration enables this driver. For more information, see :ref:`config-" "plugin-ml2`." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:334 ../deploy-lb-selfservice.rst:372 #: ../deploy-ovs-ha-dvr.rst:429 ../deploy-ovs-selfservice.rst:378 #: ../deploy-ovs-selfservice.rst:431 msgid "Instance 1 resides on compute node 1 and uses self-service network 1." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:335 ../deploy-ovs-selfservice.rst:379 msgid "Instance 2 resides on compute node 2 and uses self-service network 1." msgstr "" #: ../deploy-lb-selfservice.rst:340 ../deploy-lb-selfservice.rst:383 msgid "" "The instance 1 interface (1) forwards the packet to the self-service bridge " "instance port (2) via ``veth`` pair." msgstr "" #: ../deploy-lb-selfservice.rst:346 msgid "" "The underlying physical interface (5) for the VXLAN interface forwards the " "packet to compute node 2 via the overlay network (6)." msgstr "" #: ../deploy-lb-selfservice.rst:353 msgid "" "Security group rules (9) on the self-service bridge handle firewalling and " "connection tracking for the packet." msgstr "" #: ../deploy-lb-selfservice.rst:355 msgid "" "The self-service bridge instance port (10) forwards the packet to the " "instance 1 interface (11) via ``veth`` pair." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:368 ../deploy-ovs-selfservice.rst:427 msgid "" "Instances using a fixed IPv4/IPv6 address or floating IPv4 address " "communicate via router on the network node. The self-service networks must " "reside on the same router." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:373 ../deploy-ovs-selfservice.rst:432 msgid "Instance 2 resides on compute node 1 and uses self-service network 2." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-lb-selfservice.rst:378 ../deploy-ovs-selfservice.rst:437 msgid "" "Both instances reside on the same compute node to illustrate how VXLAN " "enables multiple overlays to use the same layer-3 network." msgstr "" #: ../deploy-lb-selfservice.rst:396 msgid "" "The self-service bridge router port (9) forwards the packet to the self-" "service network 1 interface (10) in the router namespace." msgstr "" #: ../deploy-lb-selfservice.rst:398 msgid "" "The router sends the packet to the next-hop IP address, typically the " "gateway IP address on self-service network 2, via the self-service network 2 " "interface (11)." msgstr "" #: ../deploy-lb-selfservice.rst:401 msgid "" "The router forwards the packet to the self-service network 2 bridge router " "port (12)." msgstr "" #: ../deploy-lb-selfservice.rst:403 msgid "" "The self-service network 2 bridge forwards the packet to the VXLAN interface " "(13) which wraps the packet using VNI 102." msgstr "" #: ../deploy-lb-selfservice.rst:405 msgid "" "The physical network interface (14) for the VXLAN interface sends the packet " "to the compute node via the overlay network (15)." msgstr "" #: ../deploy-lb-selfservice.rst:410 msgid "" "The underlying physical interface (16) for the VXLAN interface sends the " "packet to the VXLAN interface (17) which unwraps the packet." msgstr "" #: ../deploy-lb-selfservice.rst:412 msgid "" "Security group rules (18) on the self-service bridge handle firewalling and " "connection tracking for the packet." msgstr "" #: ../deploy-lb-selfservice.rst:414 msgid "" "The self-service bridge instance port (19) forwards the packet to the " "instance 2 interface (20) via ``veth`` pair." msgstr "" #: ../deploy-lb.rst:5 msgid "Linux bridge mechanism driver" msgstr "" #: ../deploy-lb.rst:7 msgid "" "The Linux bridge mechanism driver uses only Linux bridges and ``veth`` pairs " "as interconnection devices. A layer-2 agent manages Linux bridges on each " "compute node and any other node that provides layer-3 (routing), DHCP, " "metadata, or other network services." msgstr "" #: ../deploy-ovs-ha-dvr.rst:5 msgid "Open vSwitch: High availability using DVR" msgstr "" #: ../deploy-ovs-ha-dvr.rst:7 msgid "" "This architecture example augments the self-service deployment example with " "the Distributed Virtual Router (DVR) high-availability mechanism that " "provides connectivity between self-service and provider networks on compute " "nodes rather than network nodes for specific scenarios. For instances with a " "floating IPv4 address, routing between self-service and provider networks " "resides completely on the compute nodes to eliminate single point of failure " "and performance issues with network nodes. Routing also resides completely " "on the compute nodes for instances with a fixed or floating IPv4 address " "using self-service networks on the same distributed virtual router. However, " "instances with a fixed IP address still rely on the network node for routing " "and SNAT services between self-service and provider networks." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:19 ../shared/deploy-ha-vrrp.txt:35 msgid "" "Consider the following attributes of this high-availability mechanism to " "determine practicality in your environment:" msgstr "" #: ../deploy-ovs-ha-dvr.rst:22 msgid "" "Only provides connectivity to an instance via the compute node on which the " "instance resides if the instance resides on a self-service network with a " "floating IPv4 address. Instances on self-service networks with only an IPv6 " "address or both IPv4 and IPv6 addresses rely on the network node for IPv6 " "connectivity." msgstr "" #: ../deploy-ovs-ha-dvr.rst:28 msgid "" "The instance of a router on each compute node consumes an IPv4 address on " "the provider network on which it contains a gateway." msgstr "" #: ../deploy-ovs-ha-dvr.rst:36 msgid "Install the OpenStack Networking layer-3 agent." msgstr "" #: ../deploy-ovs-ha-dvr.rst:40 msgid "" "Consider adding at least one additional network node to provide high-" "availability for instances with a fixed IP address. See See :ref:`config-dvr-" "snat-ha-ovs` for more information." msgstr "" #: ../deploy-ovs-ha-dvr.rst:50 msgid "" "The following figure shows components and connectivity for one self-service " "network and one untagged (flat) network. In this particular case, the " "instance resides on the same compute node as the DHCP agent for the network. " "If the DHCP agent resides on another compute node, the latter only contains " "a DHCP namespace with a port on the OVS integration bridge." msgstr "" #: ../deploy-ovs-ha-dvr.rst:62 msgid "" "Use the following example configuration as a template to add support for " "high-availability using DVR to an existing operational environment that " "supports self-service networks." msgstr "" #: ../deploy-ovs-ha-dvr.rst:71 msgid "Enable distributed routing by default for all routers." msgstr "" #: ../deploy-ovs-ha-dvr.rst:85 ../deploy-ovs-ha-dvr.rst:115 msgid "In the ``openswitch_agent.ini`` file, enable distributed routing." msgstr "" #: ../deploy-ovs-ha-dvr.rst:92 msgid "" "In the ``l3_agent.ini`` file, configure the layer-3 agent to provide SNAT " "services." msgstr "" #: ../deploy-ovs-ha-dvr.rst:168 msgid "" "Similar to the self-service deployment example, this configuration supports " "multiple VXLAN self-service networks. After enabling high-availability, all " "additional routers use distributed routing. The following procedure creates " "an additional self-service network and router. The Networking service also " "supports adding distributed routing to existing routers." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:174 #: ../shared/deploy-ha-vrrp-initialnetworks.txt:10 #: ../shared/deploy-provider-verifynetworkoperation.txt:8 #: ../shared/deploy-selfservice-initialnetworks.txt:20 #: ../shared/deploy-selfservice-verifynetworkoperation.txt:16 msgid "Source a regular (non-administrative) project credentials." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:175 #: ../shared/deploy-ha-vrrp-initialnetworks.txt:11 #: ../shared/deploy-selfservice-initialnetworks.txt:21 msgid "Create a self-service network." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:202 #: ../shared/deploy-ha-vrrp-initialnetworks.txt:38 #: ../shared/deploy-selfservice-initialnetworks.txt:48 msgid "Create a IPv4 subnet on the self-service network." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:229 #: ../shared/deploy-ha-vrrp-initialnetworks.txt:65 #: ../shared/deploy-selfservice-initialnetworks.txt:75 msgid "Create a IPv6 subnet on the self-service network." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:258 #: ../shared/deploy-ha-vrrp-initialnetworks.txt:94 #: ../shared/deploy-selfservice-initialnetworks.txt:104 msgid "Create a router." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:279 #: ../shared/deploy-ha-vrrp-initialnetworks.txt:115 #: ../shared/deploy-selfservice-initialnetworks.txt:125 msgid "Add the IPv4 and IPv6 subnets as interfaces on the router." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:289 #: ../shared/deploy-ha-vrrp-initialnetworks.txt:125 msgid "Add the provider network as a gateway on the router." msgstr "" #: ../deploy-ovs-ha-dvr.rst:300 msgid "Verify distributed routing on the router." msgstr "" #: ../deploy-ovs-ha-dvr.rst:311 msgid "" "On each compute node, verify creation of a ``qrouter`` namespace with the " "same ID." msgstr "" #: ../deploy-ovs-ha-dvr.rst:314 msgid "Compute node 1:" msgstr "" #: ../deploy-ovs-ha-dvr.rst:321 msgid "Compute node 2:" msgstr "" #: ../deploy-ovs-ha-dvr.rst:328 msgid "" "On the network node, verify creation of the ``snat`` and ``qrouter`` " "namespaces with the same ID." msgstr "" #: ../deploy-ovs-ha-dvr.rst:339 msgid "" "The namespace for router 1 from :ref:`deploy-ovs-selfservice` should also " "appear on network node 1 because of creation prior to enabling distributed " "routing." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:343 #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:109 msgid "" "Launch an instance with an interface on the addtional self-service network. " "For example, a CirrOS image using flavor ID 1." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:350 #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:116 msgid "" "Replace ``NETWORK_ID`` with the ID of the additional self-service network." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:353 #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:119 #: ../shared/deploy-provider-verifynetworkoperation.txt:24 #: ../shared/deploy-selfservice-verifynetworkoperation.txt:31 msgid "Determine the IPv4 and IPv6 addresses of the instance." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:364 #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:130 #: ../shared/deploy-selfservice-verifynetworkoperation.txt:78 msgid "Create a floating IPv4 address on the provider network." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:379 #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:145 #: ../shared/deploy-selfservice-verifynetworkoperation.txt:93 msgid "Associate the floating IPv4 address with the instance." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:387 #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:153 #: ../shared/deploy-selfservice-verifynetworkoperation.txt:101 msgid "This command provides no output." msgstr "" #: ../deploy-ovs-ha-dvr.rst:389 msgid "" "On the compute node containing the instance, verify creation of the ``fip`` " "namespace with the same ID as the provider network." msgstr "" #: ../deploy-ovs-ha-dvr.rst:402 msgid "" "This section only contains flow scenarios that benefit from distributed " "virtual routing or that differ from conventional operation. For other flow " "scenarios, see :ref:`deploy-ovs-selfservice-networktrafficflow`." msgstr "" #: ../deploy-ovs-ha-dvr.rst:409 msgid "" "Similar to :ref:`deploy-ovs-selfservice-networktrafficflow-ns1`, except the " "router namespace on the network node becomes the SNAT namespace. The network " "node still contains the router namespace, but it serves no purpose in this " "case." msgstr "" #: ../deploy-ovs-ha-dvr.rst:420 msgid "" "For instances with a floating IPv4 address using a self-service network on a " "distributed router, the compute node containing the instance performs SNAT " "on north-south traffic passing from the instance to external networks such " "as the Internet and DNAT on north-south traffic passing from external " "networks to the instance. Floating IP addresses and NAT do not apply to " "IPv6. Thus, the network node routes IPv6 traffic in this scenario. north-" "south traffic passing between the instance and external networks such as the " "Internet." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:436 ../deploy-ovs-selfservice.rst:305 msgid "" "The provider physical network interface forwards the packet to the OVS " "provider bridge provider network port (3)." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:438 ../deploy-ovs-selfservice.rst:307 msgid "" "The OVS provider bridge swaps actual VLAN tag 101 with the internal VLAN tag." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:440 ../deploy-ovs-selfservice.rst:309 msgid "" "The OVS provider bridge ``phy-br-provider`` port (4) forwards the packet to " "the OVS integration bridge ``int-br-provider`` port (5)." msgstr "" #: ../deploy-ovs-ha-dvr.rst:442 msgid "" "The OVS integration bridge port for the provider network (6) removes the " "internal VLAN tag and forwards the packet to the provider network interface " "(7) in the floating IP namespace. This interface responds to any ARP " "requests for the instance floating IPv4 address." msgstr "" #: ../deploy-ovs-ha-dvr.rst:446 msgid "" "The floating IP namespace routes the packet (8) to the distributed router " "namespace (9) using a pair of IP addresses on the DVR internal network. This " "namespace contains the instance floating IPv4 address." msgstr "" #: ../deploy-ovs-ha-dvr.rst:449 msgid "" "The router performs DNAT on the packet which changes the destination IP " "address to the instance IP address on the self-service network via the self-" "service network interface (10)." msgstr "" #: ../deploy-ovs-ha-dvr.rst:452 msgid "" "The router forwards the packet to the OVS integration bridge port for the " "self-service network (11)." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:454 ../deploy-ovs-ha-dvr.rst:492 #: ../deploy-ovs-ha-dvr.rst:501 ../deploy-ovs-provider.rst:281 #: ../deploy-ovs-provider.rst:325 ../deploy-ovs-provider.rst:384 #: ../deploy-ovs-selfservice.rst:237 ../deploy-ovs-selfservice.rst:325 #: ../deploy-ovs-selfservice.rst:390 ../deploy-ovs-selfservice.rst:448 msgid "The OVS integration bridge adds an internal VLAN tag to the packet." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:455 ../deploy-ovs-ha-dvr.rst:520 #: ../deploy-ovs-selfservice.rst:344 ../deploy-ovs-selfservice.rst:409 #: ../deploy-ovs-selfservice.rst:494 msgid "" "The OVS integration bridge removes the internal VLAN tag from the packet." msgstr "" #: ../deploy-ovs-ha-dvr.rst:456 msgid "" "The OVS integration bridge security group port (12) forwards the packet to " "the security group bridge OVS port (13) via ``veth`` pair." msgstr "" #: ../deploy-ovs-ha-dvr.rst:458 msgid "" "Security group rules (14) on the security group bridge handle firewalling " "and connection tracking for the packet." msgstr "" #: ../deploy-ovs-ha-dvr.rst:460 msgid "" "The security group bridge instance port (15) forwards the packet to the " "instance interface (16) via ``veth`` pair." msgstr "" #: ../deploy-ovs-ha-dvr.rst:468 msgid "" "Egress traffic follows similar steps in reverse, except SNAT changes the " "source IPv4 address of the packet to the floating IPv4 address." msgstr "" #: ../deploy-ovs-ha-dvr.rst:472 msgid "" "East-west scenario 1: Instances on different networks on the same router" msgstr "" #: ../deploy-ovs-ha-dvr.rst:474 msgid "" "Instances with fixed IPv4/IPv6 address or floating IPv4 address on the same " "compute node communicate via router on the compute node. Instances on " "different compute nodes communicate via an instance of the router on each " "compute node." msgstr "" #: ../deploy-ovs-ha-dvr.rst:481 msgid "" "This scenario places the instances on different compute nodes to show the " "most complex situation." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:486 ../deploy-ovs-provider.rst:275 #: ../deploy-ovs-selfservice.rst:231 ../deploy-ovs-selfservice.rst:442 msgid "" "The instance interface (1) forwards the packet to the security group bridge " "instance port (2) via ``veth`` pair." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:488 ../deploy-ovs-provider.rst:277 #: ../deploy-ovs-provider.rst:321 ../deploy-ovs-provider.rst:380 #: ../deploy-ovs-selfservice.rst:233 ../deploy-ovs-selfservice.rst:386 #: ../deploy-ovs-selfservice.rst:444 msgid "" "Security group rules (3) on the security group bridge handle firewalling and " "connection tracking for the packet." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:490 ../deploy-ovs-provider.rst:279 #: ../deploy-ovs-provider.rst:323 ../deploy-ovs-provider.rst:382 #: ../deploy-ovs-selfservice.rst:235 ../deploy-ovs-selfservice.rst:388 #: ../deploy-ovs-selfservice.rst:446 msgid "" "The security group bridge OVS port (4) forwards the packet to the OVS " "integration bridge security group port (5) via ``veth`` pair." msgstr "" #: ../deploy-ovs-ha-dvr.rst:493 msgid "" "The OVS integration bridge port for self-service network 1 (6) removes the " "internal VLAN tag and forwards the packet to the self-service network 1 " "interface in the distributed router namespace (6)." msgstr "" #: ../deploy-ovs-ha-dvr.rst:496 msgid "" "The distributed router namespace routes the packet to self-service network 2." msgstr "" #: ../deploy-ovs-ha-dvr.rst:498 msgid "" "The self-service network 2 interface in the distributed router namespace (8) " "forwards the packet to the OVS integration bridge port for self-service " "network 2 (9)." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:502 ../deploy-ovs-selfservice.rst:238 #: ../deploy-ovs-selfservice.rst:326 ../deploy-ovs-selfservice.rst:391 #: ../deploy-ovs-selfservice.rst:449 ../deploy-ovs-selfservice.rst:476 msgid "" "The OVS integration bridge exchanges the internal VLAN tag for an internal " "tunnel ID." msgstr "" #: ../deploy-ovs-ha-dvr.rst:504 msgid "" "The OVS integration bridge ``patch-tun`` port (10) forwards the packet to " "the OVS tunnel bridge ``patch-int`` port (11)." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:506 ../deploy-ovs-selfservice.rst:330 msgid "The OVS tunnel bridge (12) wraps the packet using VNI 101." msgstr "" #: ../deploy-ovs-ha-dvr.rst:507 msgid "" "The underlying physical interface (13) for overlay networks forwards the " "packet to compute node 2 via the overlay network (14)." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:512 ../deploy-ovs-selfservice.rst:336 msgid "" "The underlying physical interface (15) for overlay networks forwards the " "packet to the OVS tunnel bridge (16)." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:514 ../deploy-ovs-selfservice.rst:250 #: ../deploy-ovs-selfservice.rst:338 ../deploy-ovs-selfservice.rst:403 #: ../deploy-ovs-selfservice.rst:461 ../deploy-ovs-selfservice.rst:488 msgid "" "The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID to " "it." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:516 ../deploy-ovs-selfservice.rst:252 #: ../deploy-ovs-selfservice.rst:340 ../deploy-ovs-selfservice.rst:405 #: ../deploy-ovs-selfservice.rst:463 ../deploy-ovs-selfservice.rst:490 msgid "" "The OVS tunnel bridge exchanges the internal tunnel ID for an internal VLAN " "tag." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:518 ../deploy-ovs-selfservice.rst:342 msgid "" "The OVS tunnel bridge ``patch-int`` patch port (17) forwards the packet to " "the OVS integration bridge ``patch-tun`` patch port (18)." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:521 ../deploy-ovs-selfservice.rst:345 msgid "" "The OVS integration bridge security group port (19) forwards the packet to " "the security group bridge OVS port (20) via ``veth`` pair." msgstr "" # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-dvr.rst:523 ../deploy-ovs-selfservice.rst:347 msgid "" "Security group rules (21) on the security group bridge handle firewalling " "and connection tracking for the packet." msgstr "" #: ../deploy-ovs-ha-dvr.rst:525 msgid "" "The security group bridge instance port (22) forwards the packet to the " "instance 2 interface (23) via ``veth`` pair." msgstr "" #: ../deploy-ovs-ha-dvr.rst:530 msgid "" "Routing between self-service networks occurs on the compute node containing " "the instance sending the packet. In this scenario, routing occurs on compute " "node 1 for packets from instance 1 to instance 2 and on compute node 2 for " "packets from instance 2 to instance 1." msgstr "" #: ../deploy-ovs-ha-vrrp.rst:5 msgid "Open vSwitch: High availability using VRRP" msgstr "" # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-vrrp.rst:70 ../deploy-ovs-selfservice.rst:100 msgid "Install the Networking service OVS layer-2 agent and layer-3 agent." msgstr "" # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-vrrp.rst:72 ../deploy-ovs-provider.rst:171 #: ../deploy-ovs-selfservice.rst:102 msgid "Install OVS." msgstr "" # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# intro-os-networking.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-vrrp.rst:80 ../deploy-ovs-provider.rst:208 #: ../deploy-ovs-selfservice.rst:110 ../intro-os-networking.rst:310 msgid "OVS" msgstr "" # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-vrrp.rst:82 ../deploy-ovs-provider.rst:210 #: ../deploy-ovs-selfservice.rst:112 msgid "Create the OVS provider bridge ``br-provider``:" msgstr "" # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-ha-vrrp.rst:88 ../deploy-ovs-selfservice.rst:118 msgid "In the ``openvswitch_agent.ini`` file, configure the layer-2 agent." msgstr "" #: ../deploy-ovs-ha-vrrp.rst:171 msgid "" "This high-availability mechanism simply augments :ref:`deploy-ovs-" "selfservice` with failover of layer-3 services to another router if the " "master router fails. Thus, you can reference :ref:`Self-service network " "traffic flow ` for normal " "operation." msgstr "" #: ../deploy-ovs-provider.rst:5 msgid "Open vSwitch: Provider networks" msgstr "" #: ../deploy-ovs-provider.rst:7 msgid "" "This architecture example provides layer-2 connectivity between instances " "and the physical network infrastructure using VLAN (802.1q) tagging. It " "supports one untagged (flat) network and up to 4095 tagged (VLAN) networks. " "The actual quantity of VLAN networks depends on the physical network " "infrastructure. For more information on provider networks, see :ref:`intro-" "os-networking-provider`." msgstr "" #: ../deploy-ovs-provider.rst:16 msgid "" "Linux distributions often package older releases of Open vSwitch that can " "introduce issues during operation with the Networking service. We recommend " "using at least the latest long-term stable (LTS) release of Open vSwitch for " "the best experience and support from Open vSwitch. See ``__ for available releases and the `installation " "instructions `__ " "for" msgstr "" #: ../deploy-ovs-provider.rst:35 msgid "" "OpenStack Networking Open vSwitch (OVS) layer-2 agent, DHCP agent, metadata " "agent, and any dependencies including OVS." msgstr "" #: ../deploy-ovs-provider.rst:52 msgid "" "The following figure shows components and connectivity for one untagged " "(flat) network. In this particular case, the instance resides on the same " "compute node as the DHCP agent for the network. If the DHCP agent resides on " "another compute node, the latter only contains a DHCP namespace with a port " "on the OVS integration bridge." msgstr "" #: ../deploy-ovs-provider.rst:61 msgid "" "The following figure describes virtual connectivity among components for two " "tagged (VLAN) networks. Essentially, all networks use a single OVS " "integration bridge with different internal VLAN tags. The internal VLAN tags " "almost always differ from the network VLAN assignment in the Networking " "service. Similar to the untagged network case, the DHCP agent may reside on " "a different compute node." msgstr "" #: ../deploy-ovs-provider.rst:85 msgid "" "Install the Networking service components that provide the ``neutron-" "server`` service and ML2 plug-in." msgstr "" #: ../deploy-ovs-provider.rst:168 msgid "" "Install the Networking service OVS layer-2 agent, DHCP agent, and metadata " "agent." msgstr "" #: ../deploy-ovs-provider.rst:177 msgid "In the ``openvswitch_agent.ini`` file, configure the OVS agent:" msgstr "" #: ../deploy-ovs-provider.rst:216 msgid "" "Add the provider network interface as a port on the OVS provider bridge ``br-" "provider``:" msgstr "" #: ../deploy-ovs-provider.rst:228 msgid "OVS agent" msgstr "" #: ../deploy-ovs-provider.rst:268 msgid "North-south" msgstr "" #: ../deploy-ovs-provider.rst:282 ../deploy-ovs-provider.rst:326 #: ../deploy-ovs-provider.rst:385 msgid "" "The OVS integration bridge ``int-br-provider`` patch port (6) forwards the " "packet to the OVS provider bridge ``phy-br-provider`` patch port (7)." msgstr "" # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-provider.rst:284 ../deploy-ovs-provider.rst:328 #: ../deploy-ovs-provider.rst:387 ../deploy-ovs-selfservice.rst:273 msgid "" "The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag 101." msgstr "" #: ../deploy-ovs-provider.rst:286 ../deploy-ovs-provider.rst:330 #: ../deploy-ovs-provider.rst:389 msgid "" "The OVS provider bridge provider network port (8) forwards the packet to the " "physical network interface (9)." msgstr "" #: ../deploy-ovs-provider.rst:288 ../deploy-ovs-provider.rst:332 #: ../deploy-ovs-provider.rst:391 msgid "" "The physical network interface forwards the packet to the physical network " "infrastructure switch (10)." msgstr "" #: ../deploy-ovs-provider.rst:293 ../deploy-ovs-provider.rst:396 msgid "" "The switch removes VLAN tag 101 from the packet and forwards it to the " "router (11)." msgstr "" #: ../deploy-ovs-provider.rst:295 msgid "" "The router routes the packet from the provider network (12) to the external " "network (13) and forwards the packet to the switch (14)." msgstr "" #: ../deploy-ovs-provider.rst:297 msgid "The switch forwards the packet to the external network (15)." msgstr "" #: ../deploy-ovs-provider.rst:298 msgid "The external network (16) receives the packet." msgstr "" # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../deploy-ovs-provider.rst:319 ../deploy-ovs-provider.rst:378 #: ../deploy-ovs-selfservice.rst:384 msgid "" "The instance 1 interface (1) forwards the packet to the security group " "bridge instance port (2) via ``veth`` pair." msgstr "" #: ../deploy-ovs-provider.rst:337 msgid "" "The switch forwards the packet from compute node 1 to compute node 2 (11)." msgstr "" #: ../deploy-ovs-provider.rst:341 msgid "" "The physical network interface (12) forwards the packet to the OVS provider " "bridge provider network port (13)." msgstr "" #: ../deploy-ovs-provider.rst:343 msgid "" "The OVS provider bridge ``phy-br-provider`` patch port (14) forwards the " "packet to the OVS integration bridge ``int-br-provider`` patch port (15)." msgstr "" #: ../deploy-ovs-provider.rst:345 msgid "" "The OVS integration bridge swaps the actual VLAN tag 101 with the internal " "VLAN tag." msgstr "" #: ../deploy-ovs-provider.rst:347 msgid "" "The OVS integration bridge security group port (16) forwards the packet to " "the security group bridge OVS port (17)." msgstr "" #: ../deploy-ovs-provider.rst:349 msgid "" "Security group rules (18) on the security group bridge handle firewalling " "and connection tracking for the packet." msgstr "" #: ../deploy-ovs-provider.rst:351 msgid "" "The security group bridge instance port (19) forwards the packet to the " "instance 2 interface (20) via ``veth`` pair." msgstr "" #: ../deploy-ovs-provider.rst:398 msgid "" "The router routes the packet from provider network 1 (12) to provider " "network 2 (13)." msgstr "" #: ../deploy-ovs-provider.rst:400 msgid "The router forwards the packet to the switch (14)." msgstr "" #: ../deploy-ovs-provider.rst:401 msgid "" "The switch adds VLAN tag 102 to the packet and forwards it to compute node 1 " "(15)." msgstr "" #: ../deploy-ovs-provider.rst:406 msgid "" "The physical network interface (16) forwards the packet to the OVS provider " "bridge provider network port (17)." msgstr "" #: ../deploy-ovs-provider.rst:408 msgid "" "The OVS provider bridge ``phy-br-provider`` patch port (18) forwards the " "packet to the OVS integration bridge ``int-br-provider`` patch port (19)." msgstr "" #: ../deploy-ovs-provider.rst:410 msgid "" "The OVS integration bridge swaps the actual VLAN tag 102 with the internal " "VLAN tag." msgstr "" #: ../deploy-ovs-provider.rst:412 msgid "" "The OVS integration bridge security group port (20) removes the internal " "VLAN tag and forwards the packet to the security group bridge OVS port (21)." msgstr "" #: ../deploy-ovs-provider.rst:415 msgid "" "Security group rules (22) on the security group bridge handle firewalling " "and connection tracking for the packet." msgstr "" #: ../deploy-ovs-provider.rst:417 msgid "" "The security group bridge instance port (23) forwards the packet to the " "instance 2 interface (24) via ``veth`` pair." msgstr "" #: ../deploy-ovs-selfservice.rst:5 msgid "Open vSwitch: Self-service networks" msgstr "" #: ../deploy-ovs-selfservice.rst:7 msgid "" "This architecture example augments :ref:`deploy-ovs-provider` to support a " "nearly limitless quantity of entirely virtual networks. Although the " "Networking service supports VLAN self-service networks, this example focuses " "on VXLAN self-service networks. For more information on self-service " "networks, see :ref:`intro-os-networking-selfservice`." msgstr "" #: ../deploy-ovs-selfservice.rst:19 msgid "" "OpenStack Networking Open vSwitch (OVS) layer-2 agent, layer-3 agent, and " "any including OVS." msgstr "" #: ../deploy-ovs-selfservice.rst:37 msgid "" "The following figure shows components and connectivity for one self-service " "network and one untagged (flat) provider network. In this particular case, " "the instance resides on the same compute node as the DHCP agent for the " "network. If the DHCP agent resides on another compute node, the latter only " "contains a DHCP namespace and with a port on the OVS integration bridge." msgstr "" #: ../deploy-ovs-selfservice.rst:157 msgid "" "In the ``openvswitch_agent.ini`` file, enable VXLAN support including " "layer-2 population." msgstr "" #: ../deploy-ovs-selfservice.rst:240 ../deploy-ovs-selfservice.rst:393 msgid "" "The OVS integration bridge patch port (6) forwards the packet to the OVS " "tunnel bridge patch port (7)." msgstr "" #: ../deploy-ovs-selfservice.rst:242 ../deploy-ovs-selfservice.rst:395 #: ../deploy-ovs-selfservice.rst:453 msgid "The OVS tunnel bridge (8) wraps the packet using VNI 101." msgstr "" #: ../deploy-ovs-selfservice.rst:243 ../deploy-ovs-selfservice.rst:454 msgid "" "The underlying physical interface (9) for overlay networks forwards the " "packet to the network node via the overlay network (10)." msgstr "" #: ../deploy-ovs-selfservice.rst:248 ../deploy-ovs-selfservice.rst:401 #: ../deploy-ovs-selfservice.rst:459 msgid "" "The underlying physical interface (11) for overlay networks forwards the " "packet to the OVS tunnel bridge (12)." msgstr "" #: ../deploy-ovs-selfservice.rst:254 msgid "" "The OVS tunnel bridge patch port (13) forwards the packet to the OVS " "integration bridge patch port (14)." msgstr "" #: ../deploy-ovs-selfservice.rst:256 msgid "" "The OVS integration bridge port for the self-service network (15) removes " "the internal VLAN tag and forwards the packet to the self-service network " "interface (16) in the router namespace." msgstr "" #: ../deploy-ovs-selfservice.rst:260 msgid "" "For IPv4, the router performs SNAT on the packet which changes the source IP " "address to the router IP address on the provider network and sends it to the " "gateway IP address on the provider network via the gateway interface on the " "provider network (17)." msgstr "" #: ../deploy-ovs-selfservice.rst:264 msgid "" "For IPv6, the router sends the packet to the next-hop IP address, typically " "the gateway IP address on the provider network, via the provider gateway " "interface (17)." msgstr "" #: ../deploy-ovs-selfservice.rst:268 msgid "" "The router forwards the packet to the OVS integration bridge port for the " "provider network (18)." msgstr "" #: ../deploy-ovs-selfservice.rst:270 ../deploy-ovs-selfservice.rst:475 msgid "The OVS integration bridge adds the internal VLAN tag to the packet." msgstr "" #: ../deploy-ovs-selfservice.rst:271 msgid "" "The OVS integration bridge ``int-br-provider`` patch port (19) forwards the " "packet to the OVS provider bridge ``phy-br-provider`` patch port (20)." msgstr "" #: ../deploy-ovs-selfservice.rst:275 msgid "" "The OVS provider bridge provider network port (21) forwards the packet to " "the physical network interface (22)." msgstr "" #: ../deploy-ovs-selfservice.rst:277 msgid "" "The physical network interface forwards the packet to the Internet via " "physical network infrastructure (23)." msgstr "" #: ../deploy-ovs-selfservice.rst:311 msgid "" "The OVS integration bridge port for the provider network (6) removes the " "internal VLAN tag and forwards the packet to the provider network interface " "(6) in the router namespace." msgstr "" #: ../deploy-ovs-selfservice.rst:315 msgid "" "For IPv4, the router performs DNAT on the packet which changes the " "destination IP address to the instance IP address on the self-service " "network and sends it to the gateway IP address on the self-service network " "via the self-service interface (7)." msgstr "" #: ../deploy-ovs-selfservice.rst:319 msgid "" "For IPv6, the router sends the packet to the next-hop IP address, typically " "the gateway IP address on the self-service network, via the self-service " "interface (8)." msgstr "" #: ../deploy-ovs-selfservice.rst:323 msgid "" "The router forwards the packet to the OVS integration bridge port for the " "self-service network (9)." msgstr "" #: ../deploy-ovs-selfservice.rst:328 msgid "" "The OVS integration bridge ``patch-tun`` patch port (10) forwards the packet " "to the OVS tunnel bridge ``patch-int`` patch port (11)." msgstr "" #: ../deploy-ovs-selfservice.rst:331 msgid "" "The underlying physical interface (13) for overlay networks forwards the " "packet to the network node via the overlay network (14)." msgstr "" #: ../deploy-ovs-selfservice.rst:349 msgid "" "The security group bridge instance port (22) forwards the packet to the " "instance interface (23) via ``veth`` pair." msgstr "" #: ../deploy-ovs-selfservice.rst:364 msgid "" "Instances with a fixed IPv4/IPv6 address or floating IPv4 address on the " "same network communicate directly between compute nodes containing those " "instances." msgstr "" #: ../deploy-ovs-selfservice.rst:396 msgid "" "The underlying physical interface (9) for overlay networks forwards the " "packet to compute node 2 via the overlay network (10)." msgstr "" #: ../deploy-ovs-selfservice.rst:407 ../deploy-ovs-selfservice.rst:465 msgid "" "The OVS tunnel bridge ``patch-int`` patch port (13) forwards the packet to " "the OVS integration bridge ``patch-tun`` patch port (14)." msgstr "" #: ../deploy-ovs-selfservice.rst:410 msgid "" "The OVS integration bridge security group port (15) forwards the packet to " "the security group bridge OVS port (16) via ``veth`` pair." msgstr "" #: ../deploy-ovs-selfservice.rst:412 msgid "" "Security group rules (17) on the security group bridge handle firewalling " "and connection tracking for the packet." msgstr "" #: ../deploy-ovs-selfservice.rst:414 msgid "" "The security group bridge instance port (18) forwards the packet to the " "instance 2 interface (19) via ``veth`` pair." msgstr "" #: ../deploy-ovs-selfservice.rst:451 msgid "" "The OVS integration bridge ``patch-tun`` patch port (6) forwards the packet " "to the OVS tunnel bridge ``patch-int`` patch port (7)." msgstr "" #: ../deploy-ovs-selfservice.rst:467 msgid "" "The OVS integration bridge port for self-service network 1 (15) removes the " "internal VLAN tag and forwards the packet to the self-service network 1 " "interface (16) in the router namespace." msgstr "" #: ../deploy-ovs-selfservice.rst:470 msgid "" "The router sends the packet to the next-hop IP address, typically the " "gateway IP address on self-service network 2, via the self-service network 2 " "interface (17)." msgstr "" #: ../deploy-ovs-selfservice.rst:473 msgid "" "The router forwards the packet to the OVS integration bridge port for self-" "service network 2 (18)." msgstr "" #: ../deploy-ovs-selfservice.rst:478 msgid "" "The OVS integration bridge ``patch-tun`` patch port (19) forwards the packet " "to the OVS tunnel bridge ``patch-int`` patch port (20)." msgstr "" #: ../deploy-ovs-selfservice.rst:480 msgid "The OVS tunnel bridge (21) wraps the packet using VNI 102." msgstr "" #: ../deploy-ovs-selfservice.rst:481 msgid "" "The underlying physical interface (22) for overlay networks forwards the " "packet to the compute node via the overlay network (23)." msgstr "" #: ../deploy-ovs-selfservice.rst:486 msgid "" "The underlying physical interface (24) for overlay networks forwards the " "packet to the OVS tunnel bridge (25)." msgstr "" #: ../deploy-ovs-selfservice.rst:492 msgid "" "The OVS tunnel bridge ``patch-int`` patch port (26) forwards the packet to " "the OVS integration bridge ``patch-tun`` patch port (27)." msgstr "" #: ../deploy-ovs-selfservice.rst:495 msgid "" "The OVS integration bridge security group port (28) forwards the packet to " "the security group bridge OVS port (29) via ``veth`` pair." msgstr "" #: ../deploy-ovs-selfservice.rst:497 msgid "" "Security group rules (30) on the security group bridge handle firewalling " "and connection tracking for the packet." msgstr "" #: ../deploy-ovs-selfservice.rst:499 msgid "" "The security group bridge instance port (31) forwards the packet to the " "instance interface (32) via ``veth`` pair." msgstr "" #: ../deploy-ovs.rst:5 msgid "Open vSwitch mechanism driver" msgstr "" #: ../deploy-ovs.rst:7 msgid "" "The Open vSwitch (OVS) mechanism driver uses a combination of OVS and Linux " "bridges as interconnection devices. However, optionally enabling the OVS " "native implementation of security groups removes the dependency on Linux " "bridges." msgstr "" #: ../deploy-ovs.rst:12 msgid "" "We recommend using Open vSwitch version 2.4 or higher. Optional features may " "require a higher minimum version." msgstr "" #: ../deploy.rst:5 msgid "Deployment examples" msgstr "" #: ../deploy.rst:7 msgid "" "The following deployment examples provide building blocks of increasing " "architectural complexity using the Networking service reference architecture " "which implements the Modular Layer 2 (ML2) plug-in and either the Open " "vSwitch (OVS) or Linux bridge mechanism drivers. Both mechanism drivers " "support the same basic features such as provider networks, self-service " "networks, and routers. However, more complex features often require a " "particular mechanism driver. Thus, you should consider the requirements (or " "goals) of your cloud before choosing a mechanism driver." msgstr "" #: ../deploy.rst:16 msgid "" "After choosing a :ref:`mechanism driver `, the " "deployment examples generally include the following building blocks:" msgstr "" #: ../deploy.rst:19 msgid "Provider (public/external) networks using IPv4 and IPv6" msgstr "" #: ../deploy.rst:21 msgid "" "Self-service (project/private/internal) networks including routers using " "IPv4 and IPv6" msgstr "" #: ../deploy.rst:24 msgid "High-availability features" msgstr "" #: ../deploy.rst:26 msgid "Other features such as BGP dynamic routing" msgstr "" #: ../deploy.rst:31 msgid "" "Prerequisites, typically hardware requirements, generally increase with each " "building block. Each building block depends on proper deployment and " "operation of prior building blocks. For example, the first building block " "(provider networks) only requires one controller and two compute nodes, the " "second building block (self-service networks) adds a network node, and the " "high-availability building blocks typically add a second network node for a " "total of five nodes. Each building block could also require additional " "infrastructure or changes to existing infrastructure such as networks." msgstr "" #: ../deploy.rst:40 msgid "" "For basic configuration of prerequisites, see the `Installation Guide " "`_ for your OpenStack release." msgstr "" #: ../deploy.rst:44 msgid "Nodes" msgstr "" #: ../deploy.rst:46 msgid "The deployment examples refer one or more of the following nodes:" msgstr "" #: ../deploy.rst:48 msgid "" "Controller: Contains control plane components of OpenStack services and " "their dependencies." msgstr "" #: ../deploy.rst:52 msgid "" "Operational SQL server with databases necessary for each OpenStack service." msgstr "" #: ../deploy.rst:54 msgid "Operational message queue service." msgstr "" #: ../deploy.rst:55 msgid "Operational OpenStack Identity (keystone) service." msgstr "" #: ../deploy.rst:56 msgid "Operational OpenStack Image Service (glance)." msgstr "" #: ../deploy.rst:57 msgid "" "Operational management components of the OpenStack Compute (nova) service " "with appropriate configuration to use the Networking service." msgstr "" #: ../deploy.rst:59 msgid "OpenStack Networking (neutron) server service and ML2 plug-in." msgstr "" #: ../deploy.rst:61 msgid "" "Network: Contains the OpenStack Networking service layer-3 (routing) " "component. High availability options may include additional components." msgstr "" #: ../deploy.rst:64 msgid "Three network interfaces: management, overlay, and provider." msgstr "" #: ../deploy.rst:65 msgid "" "Openstack Networking layer-2 (switching) agent, layer-3 agent, and any " "dependencies." msgstr "" #: ../deploy.rst:68 msgid "" "Compute: Contains the hypervisor component of the OpenStack Compute service " "and the OpenStack Networking layer-2, DHCP, and metadata components. High-" "availability options may include additional components." msgstr "" #: ../deploy.rst:73 msgid "" "Operational hypervisor components of the OpenStack Compute (nova) service " "with appropriate configuration to use the Networking service." msgstr "" #: ../deploy.rst:75 msgid "" "OpenStack Networking layer-2 agent, DHCP agent, metadata agent, and any " "dependencies." msgstr "" #: ../deploy.rst:78 msgid "" "Each building block defines the quantity and types of nodes including the " "components on each node." msgstr "" #: ../deploy.rst:83 msgid "" "You can virtualize these nodes for demonstration, training, or proof-of-" "concept purposes. However, you must use physical hosts for for evaluation of " "performance or scaling." msgstr "" #: ../deploy.rst:88 msgid "Networks and network interfaces" msgstr "" #: ../deploy.rst:90 msgid "" "The deployment examples refer to one or more of the following networks and " "network interfaces:" msgstr "" #: ../deploy.rst:93 msgid "" "Management: Handles API requests from clients and control plane traffic for " "OpenStack services including their dependencies." msgstr "" #: ../deploy.rst:95 msgid "" "Overlay: Handles self-service networks using an overlay protocol such as " "VXLAN or GRE." msgstr "" #: ../deploy.rst:97 msgid "" "Provider: Connects virtual and physical networks at layer-2. Typically uses " "physical network infrastructure for switching/routing traffic to external " "networks such as the Internet." msgstr "" #: ../deploy.rst:103 msgid "" "For best performance, 10+ Gbps physical network infrastructure should " "support jumbo frames." msgstr "" #: ../deploy.rst:106 msgid "" "For illustration purposes, the configuration examples typically reference " "the following IP address ranges:" msgstr "" #: ../deploy.rst:109 msgid "Management network: 10.0.0.0/24" msgstr "" #: ../deploy.rst:110 msgid "Overlay (tunnel) network: 10.0.1.0/24" msgstr "" #: ../deploy.rst:111 msgid "Provider network 1:" msgstr "" #: ../deploy.rst:113 msgid "IPv4: 203.0.113.0/24" msgstr "" #: ../deploy.rst:114 msgid "IPv6: fd00:203:0:113::/64" msgstr "" #: ../deploy.rst:116 msgid "Provider network 2:" msgstr "" #: ../deploy.rst:118 msgid "IPv4: 192.0.2.0/24" msgstr "" #: ../deploy.rst:119 msgid "IPv6: fd00:192:0:2::/64" msgstr "" #: ../deploy.rst:121 msgid "Self-service networks:" msgstr "" #: ../deploy.rst:123 msgid "IPv4: 192.168.0.0/16 in /24 segments" msgstr "" #: ../deploy.rst:124 msgid "IPv6: fd00:192:168::/48 in /64 segments" msgstr "" #: ../deploy.rst:126 msgid "" "You may change them to work with your particular network infrastructure." msgstr "" #: ../index.rst:8 msgid "OpenStack Networking Guide" msgstr "" #: ../index.rst:11 msgid "Abstract" msgstr "" #: ../index.rst:13 msgid "" "This guide targets OpenStack administrators seeking to deploy and manage " "OpenStack Networking (neutron)." msgstr "" #: ../index.rst:16 msgid "This guide documents the OpenStack Mitaka release." msgstr "" #: ../index.rst:19 msgid "Contents" msgstr "" #: ../index.rst:33 msgid "Appendix" msgstr "" #: ../index.rst:41 msgid "Glossary" msgstr "" #: ../index.rst:49 msgid "Search in this guide" msgstr "" #: ../index.rst:51 msgid ":ref:`search`" msgstr "" #: ../intro-basic-networking.rst:5 msgid "Basic networking" msgstr "" #: ../intro-basic-networking.rst:8 msgid "Ethernet" msgstr "" #: ../intro-basic-networking.rst:10 msgid "" "Ethernet is a networking protocol, specified by the IEEE 802.3 standard. " "Most wired network interface cards (NICs) communicate using Ethernet." msgstr "" #: ../intro-basic-networking.rst:13 msgid "" "In the `OSI model `_ of networking " "protocols, Ethernet occupies the second layer, which is known as the data " "link layer. When discussing Ethernet, you will often hear terms such as " "*local network*, *layer 2*, *L2*, *link layer* and *data link layer*." msgstr "" #: ../intro-basic-networking.rst:18 msgid "" "In an Ethernet network, the hosts connected to the network communicate by " "exchanging *frames*. Every host on an Ethernet network is uniquely " "identified by an address called the media access control (MAC) address. In " "particular, every virtual machine instance in an OpenStack environment has a " "unique MAC address, which is different from the MAC address of the compute " "host. A MAC address has 48 bits and is typically represented as a " "hexadecimal string, such as ``08:00:27:b9:88:74``. The MAC address is hard-" "coded into the NIC by the manufacturer, although modern NICs allow you to " "change the MAC address programmatically. In Linux, you can retrieve the MAC " "address of a NIC using the :command:`ip` command:" msgstr "" #: ../intro-basic-networking.rst:35 msgid "" "Conceptually, you can think of an Ethernet network as a single bus that each " "of the network hosts connects to. In early implementations, an Ethernet " "network consisted of a single coaxial cable that hosts would tap into to " "connect to the network. However, network hosts in modern Ethernet networks " "connect directly to a network device called a *switch*. Still, this " "conceptual model is useful, and in network diagrams (including those " "generated by the OpenStack dashboard) an Ethernet network is often depicted " "as if it was a single bus. You'll sometimes hear an Ethernet network " "referred to as a *layer 2 segment*." msgstr "" #: ../intro-basic-networking.rst:45 msgid "" "In an Ethernet network, every host on the network can send a frame directly " "to every other host. An Ethernet network also supports broadcasts so that " "one host can send a frame to every host on the network by sending to the " "special MAC address ``ff:ff:ff:ff:ff:ff``. ARP_ and DHCP_ are two notable " "protocols that use Ethernet broadcasts. Because Ethernet networks support " "broadcasts, you will sometimes hear an Ethernet network referred to as a " "*broadcast domain*." msgstr "" #: ../intro-basic-networking.rst:53 msgid "" "When a NIC receives an Ethernet frame, by default the NIC checks to see if " "the destination MAC address matches the address of the NIC (or the broadcast " "address), and the Ethernet frame is discarded if the MAC address does not " "match. For a compute host, this behavior is undesirable because the frame " "may be intended for one of the instances. NICs can be configured for " "*promiscuous mode*, where they pass all Ethernet frames to the operating " "system, even if the MAC address does not match. Compute hosts should always " "have the appropriate NICs configured for promiscuous mode." msgstr "" #: ../intro-basic-networking.rst:63 msgid "" "As mentioned earlier, modern Ethernet networks use switches to interconnect " "the network hosts. A switch is a box of networking hardware with a large " "number of ports that forward Ethernet frames from one connected host to " "another. When hosts first send frames over the switch, the switch doesn’t " "know which MAC address is associated with which port. If an Ethernet frame " "is destined for an unknown MAC address, the switch broadcasts the frame to " "all ports. The switch learns which MAC addresses are at which ports by " "observing the traffic. Once it knows which MAC address is associated with a " "port, it can send Ethernet frames to the correct port instead of " "broadcasting. The switch maintains the mappings of MAC addresses to switch " "ports in a table called a *forwarding table* or *forwarding information " "base* (FIB). Switches can be daisy-chained together, and the resulting " "connection of switches and hosts behaves like a single network." msgstr "" #: ../intro-basic-networking.rst:79 msgid "VLANs" msgstr "" #: ../intro-basic-networking.rst:81 msgid "" "VLAN is a networking technology that enables a single switch to act as if it " "was multiple independent switches. Specifically, two hosts that are " "connected to the same switch but on different VLANs do not see each other's " "traffic. OpenStack is able to take advantage of VLANs to isolate the traffic " "of different tenants, even if the tenants happen to have instances running " "on the same compute host. Each VLAN has an associated numerical ID, between " "1 and 4095. We say \"VLAN 15\" to refer to the VLAN with a numerical ID of " "15." msgstr "" #: ../intro-basic-networking.rst:90 msgid "" "To understand how VLANs work, let's consider VLAN applications in a " "traditional IT environment, where physical hosts are attached to a physical " "switch, and no virtualization is involved. Imagine a scenario where you want " "three isolated networks but you only have a single physical switch. The " "network administrator would choose three VLAN IDs, for example, 10, 11, and " "12, and would configure the switch to associate switchports with VLAN IDs. " "For example, switchport 2 might be associated with VLAN 10, switchport 3 " "might be associated with VLAN 11, and so forth. When a switchport is " "configured for a specific VLAN, it is called an *access port*. The switch is " "responsible for ensuring that the network traffic is isolated across the " "VLANs." msgstr "" #: ../intro-basic-networking.rst:102 msgid "" "Now consider the scenario that all of the switchports in the first switch " "become occupied, and so the organization buys a second switch and connects " "it to the first switch to expand the available number of switchports. The " "second switch is also configured to support VLAN IDs 10, 11, and 12. Now " "imagine host A connected to switch 1 on a port configured for VLAN ID 10 " "sends an Ethernet frame intended for host B connected to switch 2 on a port " "configured for VLAN ID 10. When switch 1 forwards the Ethernet frame to " "switch 2, it must communicate that the frame is associated with VLAN ID 10." msgstr "" #: ../intro-basic-networking.rst:112 msgid "" "If two switches are to be connected together, and the switches are " "configured for VLANs, then the switchports used for cross-connecting the " "switches must be configured to allow Ethernet frames from any VLAN to be " "forwarded to the other switch. In addition, the sending switch must tag each " "Ethernet frame with the VLAN ID so that the receiving switch can ensure that " "only hosts on the matching VLAN are eligible to receive the frame." msgstr "" #: ../intro-basic-networking.rst:119 msgid "" "A switchport that is configured to pass frames from all VLANs and tag them " "with the VLAN IDs is called a *trunk port*. IEEE 802.1Q is the network " "standard that describes how VLAN tags are encoded in Ethernet frames when " "trunking is being used." msgstr "" #: ../intro-basic-networking.rst:124 msgid "" "Note that if you are using VLANs on your physical switches to implement " "tenant isolation in your OpenStack cloud, you must ensure that all of your " "switchports are configured as trunk ports." msgstr "" #: ../intro-basic-networking.rst:128 msgid "" "It is important that you select a VLAN range not being used by your current " "network infrastructure. For example, if you estimate that your cloud must " "support a maximum of 100 projects, pick a VLAN range outside of that value, " "such as VLAN 200–299. OpenStack, and all physical network infrastructure " "that handles tenant networks, must then support this VLAN range." msgstr "" #: ../intro-basic-networking.rst:134 msgid "" "Trunking is used to connect between different switches. Each trunk uses a " "tag to identify which VLAN is in use. This ensures that switches on the same " "VLAN can communicate." msgstr "" #: ../intro-basic-networking.rst:142 msgid "Subnets and ARP" msgstr "" #: ../intro-basic-networking.rst:144 msgid "" "While NICs use MAC addresses to address network hosts, TCP/IP applications " "use IP addresses. The Address Resolution Protocol (ARP) bridges the gap " "between Ethernet and IP by translating IP addresses into MAC addresses." msgstr "" #: ../intro-basic-networking.rst:148 msgid "" "IP addresses are broken up into two parts: a *network number* and a *host " "identifier*. Two hosts are on the same *subnet* if they have the same " "network number. Recall that two hosts can only communicate directly over " "Ethernet if they are on the same local network. ARP assumes that all " "machines that are in the same subnet are on the same local network. Network " "administrators must take care when assigning IP addresses and netmasks to " "hosts so that any two hosts that are in the same subnet are on the same " "local network, otherwise ARP does not work properly." msgstr "" #: ../intro-basic-networking.rst:157 msgid "" "To calculate the network number of an IP address, you must know the " "*netmask* associated with the address. A netmask indicates how many of the " "bits in the 32-bit IP address make up the network number." msgstr "" #: ../intro-basic-networking.rst:161 msgid "There are two syntaxes for expressing a netmask:" msgstr "" #: ../intro-basic-networking.rst:163 msgid "dotted quad" msgstr "" #: ../intro-basic-networking.rst:164 msgid "classless inter-domain routing (CIDR)" msgstr "" #: ../intro-basic-networking.rst:166 msgid "" "Consider an IP address of 192.168.1.5, where the first 24 bits of the " "address are the network number. In dotted quad notation, the netmask would " "be written as ``255.255.255.0``. CIDR notation includes both the IP address " "and netmask, and this example would be written as ``192.168.1.5/24``." msgstr "" #: ../intro-basic-networking.rst:174 msgid "" "Creating CIDR subnets including a multicast address or a loopback address " "cannot be used in an OpenStack environment. For example, creating a subnet " "using ``224.0.0.0/16`` or ``127.0.1.0/24`` is not supported." msgstr "" #: ../intro-basic-networking.rst:178 msgid "" "Sometimes we want to refer to a subnet, but not any particular IP address on " "the subnet. A common convention is to set the host identifier to all zeros " "to make reference to a subnet. For example, if a host's IP address is " "``10.10.53.24/16``, then we would say the subnet is ``10.10.0.0/16``." msgstr "" #: ../intro-basic-networking.rst:184 msgid "" "To understand how ARP translates IP addresses to MAC addresses, consider the " "following example. Assume host *A* has an IP address of ``192.168.1.5/24`` " "and a MAC address of ``fc:99:47:49:d4:a0``, and wants to send a packet to " "host *B* with an IP address of ``192.168.1.7``. Note that the network number " "is the same for both hosts, so host *A* is able to send frames directly to " "host *B*." msgstr "" #: ../intro-basic-networking.rst:191 msgid "" "The first time host *A* attempts to communicate with host *B*, the " "destination MAC address is not known. Host *A* makes an ARP request to the " "local network. The request is a broadcast with a message like this:" msgstr "" #: ../intro-basic-networking.rst:196 msgid "" "*To: everybody (ff:ff:ff:ff:ff:ff). I am looking for the computer who has IP " "address 192.168.1.7. Signed: MAC address fc:99:47:49:d4:a0*." msgstr "" #: ../intro-basic-networking.rst:199 msgid "Host *B* responds with a response like this:" msgstr "" #: ../intro-basic-networking.rst:201 msgid "" "*To: fc:99:47:49:d4:a0. I have IP address 192.168.1.7. Signed: MAC address " "54:78:1a:86:00:a5.*" msgstr "" #: ../intro-basic-networking.rst:204 msgid "Host *A* then sends Ethernet frames to host *B*." msgstr "" #: ../intro-basic-networking.rst:206 msgid "" "You can initiate an ARP request manually using the :command:`arping` " "command. For example, to send an ARP request to IP address ``10.30.0.132``:" msgstr "" #: ../intro-basic-networking.rst:219 msgid "" "To reduce the number of ARP requests, operating systems maintain an ARP " "cache that contains the mappings of IP addresses to MAC address. On a Linux " "machine, you can view the contents of the ARP cache by using the :command:" "`arp` command:" msgstr "" # #-#-#-#-# intro-basic-networking.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# intro-os-networking.pot (Networking Guide 0.9) #-#-#-#-# #: ../intro-basic-networking.rst:234 ../intro-os-networking.rst:270 #: ../intro-os-networking.rst:317 msgid "DHCP" msgstr "" #: ../intro-basic-networking.rst:236 msgid "" "Hosts connected to a network use the Dynamic Host Configuration Protocol (:" "term:`DHCP`) to dynamically obtain IP addresses. A DHCP server hands out the " "IP addresses to network hosts, which are the DHCP clients." msgstr "" #: ../intro-basic-networking.rst:241 msgid "" "DHCP clients locate the DHCP server by sending a UDP_ packet from port 68 to " "address ``255.255.255.255`` on port 67. Address ``255.255.255.255`` is the " "local network broadcast address: all hosts on the local network see the UDP " "packets sent to this address. However, such packets are not forwarded to " "other networks. Consequently, the DHCP server must be on the same local " "network as the client, or the server will not receive the broadcast. The " "DHCP server responds by sending a UDP packet from port 67 to port 68 on the " "client. The exchange looks like this:" msgstr "" #: ../intro-basic-networking.rst:251 msgid "" "The client sends a discover (\"I’m a client at MAC address ``08:00:27:" "b9:88:74``, I need an IP address\")" msgstr "" #: ../intro-basic-networking.rst:253 msgid "" "The server sends an offer (\"OK ``08:00:27:b9:88:74``, I’m offering IP " "address ``10.10.0.112``\")" msgstr "" #: ../intro-basic-networking.rst:255 msgid "" "The client sends a request (\"Server ``10.10.0.131``, I would like to have " "IP ``10.10.0.112``\")" msgstr "" #: ../intro-basic-networking.rst:257 msgid "" "The server sends an acknowledgement (\"OK ``08:00:27:b9:88:74``, IP " "``10.10.0.112`` is yours\")" msgstr "" #: ../intro-basic-networking.rst:261 msgid "" "OpenStack uses a third-party program called `dnsmasq `_ to implement the DHCP server. Dnsmasq writes to " "the syslog, where you can observe the DHCP request and replies::" msgstr "" #: ../intro-basic-networking.rst:272 msgid "" "When troubleshooting an instance that is not reachable over the network, it " "can be helpful to examine this log to verify that all four steps of the DHCP " "protocol were carried out for the instance in question." msgstr "" #: ../intro-basic-networking.rst:278 msgid "IP" msgstr "" #: ../intro-basic-networking.rst:280 msgid "" "The Internet Protocol (IP) specifies how to route packets between hosts that " "are connected to different local networks. IP relies on special network " "hosts called *routers* or *gateways*. A router is a host that is connected " "to at least two local networks and can forward IP packets from one local " "network to another. A router has multiple IP addresses: one for each of the " "networks it is connected to." msgstr "" #: ../intro-basic-networking.rst:287 msgid "" "In the OSI model of networking protocols IP occupies the third layer, known " "as the network layer. When discussing IP, you will often hear terms such as " "*layer 3*, *L3*, and *network layer*." msgstr "" #: ../intro-basic-networking.rst:291 msgid "" "A host sending a packet to an IP address consults its *routing table* to " "determine which machine on the local network(s) the packet should be sent " "to. The routing table maintains a list of the subnets associated with each " "local network that the host is directly connected to, as well as a list of " "routers that are on these local networks." msgstr "" #: ../intro-basic-networking.rst:297 msgid "" "On a Linux machine, any of the following commands displays the routing table:" msgstr "" #: ../intro-basic-networking.rst:305 msgid "Here is an example of output from :command:`ip route show`:" msgstr "" #: ../intro-basic-networking.rst:315 msgid "" "Line 1 of the output specifies the location of the default route, which is " "the effective routing rule if none of the other rules match. The router " "associated with the default route (``10.0.2.2`` in the example above) is " "sometimes referred to as the *default gateway*. A DHCP_ server typically " "transmits the IP address of the default gateway to the DHCP client along " "with the client's IP address and a netmask." msgstr "" #: ../intro-basic-networking.rst:322 msgid "" "Line 2 of the output specifies that IPs in the 10.0.2.0/24 subnet are on the " "local network associated with the network interface eth0." msgstr "" #: ../intro-basic-networking.rst:325 msgid "" "Line 3 of the output specifies that IPs in the 192.168.27.0/24 subnet are on " "the local network associated with the network interface eth1." msgstr "" #: ../intro-basic-networking.rst:328 msgid "" "Line 4 of the output specifies that IPs in the 192.168.122.0/24 subnet are " "on the local network associated with the network interface virbr0." msgstr "" #: ../intro-basic-networking.rst:331 msgid "" "The output of the :command:`route -n` and :command:`netstat -rn` commands " "are formatted in a slightly different way. This example shows how the same " "routes would be formatted using these commands:" msgstr "" #: ../intro-basic-networking.rst:345 msgid "" "The :command:`ip route get` command outputs the route for a destination IP " "address. From the below example, destination IP address 10.0.2.14 is on the " "local network of eth0 and would be sent directly:" msgstr "" #: ../intro-basic-networking.rst:354 msgid "" "The destination IP address 93.184.216.34 is not on any of the connected " "local networks and would be forwarded to the default gateway at 10.0.2.2:" msgstr "" #: ../intro-basic-networking.rst:362 msgid "" "It is common for a packet to hop across multiple routers to reach its final " "destination. On a Linux machine, the ``traceroute`` and more recent ``mtr`` " "programs prints out the IP address of each router that an IP packet " "traverses along its path to its destination." msgstr "" #: ../intro-basic-networking.rst:370 msgid "TCP/UDP/ICMP" msgstr "" #: ../intro-basic-networking.rst:372 msgid "" "For networked software applications to communicate over an IP network, they " "must use a protocol layered atop IP. These protocols occupy the fourth layer " "of the OSI model known as the *transport layer* or *layer 4*. See the " "`Protocol Numbers `_ web page maintained by the Internet Assigned Numbers " "Authority (IANA) for a list of protocols that layer atop IP and their " "associated numbers." msgstr "" #: ../intro-basic-networking.rst:380 msgid "" "The *Transmission Control Protocol* (TCP) is the most commonly used layer 4 " "protocol in networked applications. TCP is a *connection-oriented* protocol: " "it uses a client-server model where a client connects to a server, where " "*server* refers to the application that receives connections. The typical " "interaction in a TCP-based application proceeds as follows:" msgstr "" #: ../intro-basic-networking.rst:388 msgid "Client connects to server." msgstr "" #: ../intro-basic-networking.rst:389 msgid "Client and server exchange data." msgstr "" #: ../intro-basic-networking.rst:390 msgid "Client or server disconnects." msgstr "" #: ../intro-basic-networking.rst:392 msgid "" "Because a network host may have multiple TCP-based applications running, TCP " "uses an addressing scheme called *ports* to uniquely identify TCP-based " "applications. A TCP port is associated with a number in the range 1-65535, " "and only one application on a host can be associated with a TCP port at a " "time, a restriction that is enforced by the operating system." msgstr "" #: ../intro-basic-networking.rst:398 msgid "" "A TCP server is said to *listen* on a port. For example, an SSH server " "typically listens on port 22. For a client to connect to a server using TCP, " "the client must know both the IP address of a server's host and the server's " "TCP port." msgstr "" #: ../intro-basic-networking.rst:403 msgid "" "The operating system of the TCP client application automatically assigns a " "port number to the client. The client owns this port number until the TCP " "connection is terminated, after which the operating system reclaims the port " "number. These types of ports are referred to as *ephemeral ports*." msgstr "" #: ../intro-basic-networking.rst:409 msgid "" "IANA maintains a `registry of port numbers `_ for many TCP-" "based services, as well as services that use other layer 4 protocols that " "employ ports. Registering a TCP port number is not required, but registering " "a port number is helpful to avoid collisions with other services. See " "`Appendix B. Firewalls and default ports `_ of the `OpenStack " "Configuration Reference `_ for the default TCP ports used by various services involved in an " "OpenStack deployment." msgstr "" #: ../intro-basic-networking.rst:421 msgid "" "The most common application programming interface (API) for writing TCP-" "based applications is called *Berkeley sockets*, also known as *BSD sockets* " "or, simply, *sockets*. The sockets API exposes a *stream oriented* interface " "for writing TCP applications. From the perspective of a programmer, sending " "data over a TCP connection is similar to writing a stream of bytes to a " "file. It is the responsibility of the operating system's TCP/IP " "implementation to break up the stream of data into IP packets. The operating " "system is also responsible for automatically retransmitting dropped packets, " "and for handling flow control to ensure that transmitted data does not " "overrun the sender's data buffers, receiver's data buffers, and network " "capacity. Finally, the operating system is responsible for re-assembling the " "packets in the correct order into a stream of data on the receiver's side. " "Because TCP detects and retransmits lost packets, it is said to be a " "*reliable* protocol." msgstr "" #: ../intro-basic-networking.rst:436 msgid "" "The *User Datagram Protocol* (UDP) is another layer 4 protocol that is the " "basis of several well-known networking protocols. UDP is a *connectionless* " "protocol: two applications that communicate over UDP do not need to " "establish a connection before exchanging data. UDP is also an *unreliable* " "protocol. The operating system does not attempt to retransmit or even detect " "lost UDP packets. The operating system also does not provide any guarantee " "that the receiving application sees the UDP packets in the same order that " "they were sent in." msgstr "" #: ../intro-basic-networking.rst:445 msgid "" "UDP, like TCP, uses the notion of ports to distinguish between different " "applications running on the same system. Note, however, that operating " "systems treat UDP ports separately from TCP ports. For example, it is " "possible for one application to be associated with TCP port 16543 and a " "separate application to be associated with UDP port 16543." msgstr "" #: ../intro-basic-networking.rst:451 msgid "" "Like TCP, the sockets API is the most common API for writing UDP-based " "applications. The sockets API provides a *message-oriented* interface for " "writing UDP applications: a programmer sends data over UDP by transmitting a " "fixed-sized message. If an application requires retransmissions of lost " "packets or a well-defined ordering of received packets, the programmer is " "responsible for implementing this functionality in the application code." msgstr "" #: ../intro-basic-networking.rst:458 msgid "" "DHCP_, the Domain Name System (DNS), the Network Time Protocol (NTP), and :" "ref:`VXLAN` are examples of UDP-based protocols used in OpenStack " "deployments." msgstr "" #: ../intro-basic-networking.rst:461 msgid "" "UDP has support for one-to-many communication: sending a single packet to " "multiple hosts. An application can broadcast a UDP packet to all of the " "network hosts on a local network by setting the receiver IP address as the " "special IP broadcast address ``255.255.255.255``. An application can also " "send a UDP packet to a set of receivers using *IP multicast*. The intended " "receiver applications join a multicast group by binding a UDP socket to a " "special IP address that is one of the valid multicast group addresses. The " "receiving hosts do not have to be on the same local network as the sender, " "but the intervening routers must be configured to support IP multicast " "routing. VXLAN is an example of a UDP-based protocol that uses IP multicast." msgstr "" #: ../intro-basic-networking.rst:473 msgid "" "The *Internet Control Message Protocol* (ICMP) is a protocol used for " "sending control messages over an IP network. For example, a router that " "receives an IP packet may send an ICMP packet back to the source if there is " "no route in the router's routing table that corresponds to the destination " "address (ICMP code 1, destination host unreachable) or if the IP packet is " "too large for the router to handle (ICMP code 4, fragmentation required and " "\"don't fragment\" flag is set)." msgstr "" #: ../intro-basic-networking.rst:481 msgid "" "The :command:`ping` and :command:`mtr` Linux command-line tools are two " "examples of network utilities that use ICMP." msgstr "" #: ../intro-nat.rst:5 msgid "Network address translation" msgstr "" #: ../intro-nat.rst:7 msgid "" "*Network Address Translation* (NAT) is a process for modifying the source or " "destination addresses in the headers of an IP packet while the packet is in " "transit. In general, the sender and receiver applications are not aware that " "the IP packets are being manipulated." msgstr "" #: ../intro-nat.rst:12 msgid "" "NAT is often implemented by routers, and so we will refer to the host " "performing NAT as a *NAT router*. However, in OpenStack deployments it is " "typically Linux servers that implement the NAT functionality, not hardware " "routers. These servers use the `iptables `_ software package to implement the NAT functionality." msgstr "" #: ../intro-nat.rst:19 msgid "" "There are multiple variations of NAT, and here we describe three kinds " "commonly found in OpenStack deployments." msgstr "" #: ../intro-nat.rst:23 msgid "SNAT" msgstr "" #: ../intro-nat.rst:25 msgid "" "In *Source Network Address Translation* (SNAT), the NAT router modifies the " "IP address of the sender in IP packets. SNAT is commonly used to enable " "hosts with *private addresses* to communicate with servers on the public " "Internet." msgstr "" #: ../intro-nat.rst:30 msgid "" "`RFC 1918 `_ reserves the following " "three subnets as private addresses:" msgstr "" #: ../intro-nat.rst:33 msgid "``10.0.0.0/8``" msgstr "" #: ../intro-nat.rst:34 msgid "``172.16.0.0/12``" msgstr "" #: ../intro-nat.rst:35 msgid "``192.168.0.0/16``" msgstr "" #: ../intro-nat.rst:37 msgid "" "These IP addresses are not publicly routable, meaning that a host on the " "public Internet can not send an IP packet to any of these addresses. Private " "IP addresses are widely used in both residential and corporate environments." msgstr "" #: ../intro-nat.rst:41 msgid "" "Often, an application running on a host with a private IP address will need " "to connect to a server on the public Internet. An example is a user who " "wants to access a public website such as www.openstack.org. If the IP " "packets reach the web server at www.openstack.org with a private IP address " "as the source, then the web server cannot send packets back to the sender." msgstr "" #: ../intro-nat.rst:47 msgid "" "SNAT solves this problem by modifying the source IP address to an IP address " "that is routable on the public Internet. There are different variations of " "SNAT; in the form that OpenStack deployments use, a NAT router on the path " "between the sender and receiver replaces the packet's source IP address with " "the router's public IP address. The router also modifies the source TCP or " "UDP port to another value, and the router maintains a record of the sender's " "true IP address and port, as well as the modified IP address and port." msgstr "" #: ../intro-nat.rst:56 msgid "" "When the router receives a packet with the matching IP address and port, it " "translates these back to the private IP address and port, and forwards the " "packet along." msgstr "" #: ../intro-nat.rst:60 msgid "" "Because the NAT router modifies ports as well as IP addresses, this form of " "SNAT is sometimes referred to as *Port Address Translation* (PAT). It is " "also sometimes referred to as *NAT overload*." msgstr "" #: ../intro-nat.rst:64 msgid "" "OpenStack uses SNAT to enable applications running inside of instances to " "connect out to the public Internet." msgstr "" #: ../intro-nat.rst:68 msgid "DNAT" msgstr "" #: ../intro-nat.rst:70 msgid "" "In *Destination Network Address Translation* (DNAT), the NAT router modifies " "the IP address of the destination in IP packet headers." msgstr "" #: ../intro-nat.rst:73 msgid "" "OpenStack uses DNAT to route packets from instances to the OpenStack " "metadata service. Applications running inside of instances access the " "OpenStack metadata service by making HTTP GET requests to a web server with " "IP address 169.254.169.254. In an OpenStack deployment, there is no host " "with this IP address. Instead, OpenStack uses DNAT to change the destination " "IP of these packets so they reach the network interface that a metadata " "service is listening on." msgstr "" #: ../intro-nat.rst:82 msgid "One-to-one NAT" msgstr "" #: ../intro-nat.rst:84 msgid "" "In *one-to-one NAT*, the NAT router maintains a one-to-one mapping between " "private IP addresses and public IP addresses. OpenStack uses one-to-one NAT " "to implement floating IP addresses." msgstr "" #: ../intro-network-components.rst:5 msgid "Network components" msgstr "" #: ../intro-network-components.rst:8 msgid "Switches" msgstr "" #: ../intro-network-components.rst:10 msgid "" "Switches are Multi-Input Multi-Output (MIMO) devices that enable packets to " "travel from one node to another. Switches connect hosts that belong to the " "same layer-2 network. Switches enable forwarding of the packet received on " "one port (input) to another port (output) so that they reach the desired " "destination node. Switches operate at layer-2 in the networking model. They " "forward the traffic based on the destination Ethernet address in the packet " "header." msgstr "" # #-#-#-#-# intro-network-components.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# intro-os-networking.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# ops-resource-purge.pot (Networking Guide 0.9) #-#-#-#-# #: ../intro-network-components.rst:19 ../intro-os-networking.rst:187 #: ../ops-resource-purge.rst:14 msgid "Routers" msgstr "" #: ../intro-network-components.rst:21 msgid "" "Routers are special devices that enable packets to travel from one layer-3 " "network to another. Routers enable communication between two nodes on " "different layer-3 networks that are not directly connected to each other. " "Routers operate at layer-3 in the networking model. They route the traffic " "based on the destination IP address in the packet header." msgstr "" #: ../intro-network-components.rst:28 msgid "Firewalls" msgstr "" #: ../intro-network-components.rst:30 msgid "" "Firewalls are used to regulate traffic to and from a host or a network. A " "firewall can be either a specialized device connecting two networks or a " "software-based filtering mechanism implemented on an operating system. " "Firewalls are used to restrict traffic to a host based on the rules defined " "on the host. They can filter packets based on several criteria such as " "source IP address, destination IP address, port numbers, connection state, " "and so on. It is primarily used to protect the hosts from unauthorized " "access and malicious attacks. Linux-based operating systems implement " "firewalls through ``iptables``." msgstr "" #: ../intro-network-components.rst:41 msgid "Load balancers" msgstr "" #: ../intro-network-components.rst:43 msgid "" "Load balancers can be software-based or hardware-based devices that allow " "traffic to evenly be distributed across several servers. By distributing the " "traffic across multiple servers, it avoids overload of a single server " "thereby preventing a single point of failure in the product. This further " "improves the performance, network throughput, and response time of the " "servers. Load balancers are typically used in a 3-tier architecture. In this " "model, a load balancer receives a request from the front-end web server, " "which then forwards the request to one of the available back-end database " "servers for processing. The response from the database server is passed back " "to the web server for further processing." msgstr "" #: ../intro-network-namespaces.rst:5 msgid "Network namespaces" msgstr "" #: ../intro-network-namespaces.rst:7 msgid "" "A namespace is a way of scoping a particular set of identifiers. Using a " "namespace, you can use the same identifier multiple times in different " "namespaces. You can also restrict an identifier set visible to particular " "processes." msgstr "" #: ../intro-network-namespaces.rst:12 msgid "" "For example, Linux provides namespaces for networking and processes, among " "other things. If a process is running within a process namespace, it can " "only see and communicate with other processes in the same namespace. So, if " "a shell in a particular process namespace ran :command:`ps waux`, it would " "only show the other processes in the same namespace." msgstr "" #: ../intro-network-namespaces.rst:19 msgid "Linux network namespaces" msgstr "" #: ../intro-network-namespaces.rst:21 msgid "" "In a network namespace, the scoped 'identifiers' are network devices; so a " "given network device, such as ``eth0``, exists in a particular namespace. " "Linux starts up with a default network namespace, so if your operating " "system does not do anything special, that is where all the network devices " "will be located. But it is also possible to create further non-default " "namespaces, and create new devices in those namespaces, or to move an " "existing device from one namespace to another." msgstr "" #: ../intro-network-namespaces.rst:29 msgid "" "Each network namespace also has its own routing table, and in fact this is " "the main reason for namespaces to exist. A routing table is keyed by " "destination IP address, so network namespaces are what you need if you want " "the same destination IP address to mean different things at different times " "- which is something that OpenStack Networking requires for its feature of " "providing overlapping IP addresses in different virtual networks." msgstr "" #: ../intro-network-namespaces.rst:36 msgid "" "Each network namespace also has its own set of iptables (for both IPv4 and " "IPv6). So, you can apply different security to flows with the same IP " "addressing in different namespaces, as well as different routing." msgstr "" #: ../intro-network-namespaces.rst:40 msgid "" "Any given Linux process runs in a particular network namespace. By default " "this is inherited from its parent process, but a process with the right " "capabilities can switch itself into a different namespace; in practice this " "is mostly done using the :command:`ip netns exec NETNS COMMAND...` " "invocation, which starts ``COMMAND`` running in the namespace named " "``NETNS``. Suppose such a process sends out a message to IP address A.B.C.D, " "the effect of the namespace is that A.B.C.D will be looked up in that " "namespace's routing table, and that will determine the network device that " "the message is transmitted through." msgstr "" #: ../intro-network-namespaces.rst:50 msgid "Virtual routing and forwarding (VRF)" msgstr "" #: ../intro-network-namespaces.rst:52 msgid "" "Virtual routing and forwarding is an IP technology that allows multiple " "instances of a routing table to coexist on the same router at the same time. " "It is another name for the network namespace functionality described above." msgstr "" #: ../intro-os-networking.rst:5 msgid "OpenStack Networking" msgstr "" #: ../intro-os-networking.rst:7 msgid "" "OpenStack Networking allows you to create and manage network objects, such " "as networks, subnets, and ports, which other OpenStack services can use. " "Plug-ins can be implemented to accommodate different networking equipment " "and software, providing flexibility to OpenStack architecture and deployment." msgstr "" #: ../intro-os-networking.rst:13 msgid "" "The Networking service, code-named neutron, provides an API that lets you " "define network connectivity and addressing in the cloud. The Networking " "service enables operators to leverage different networking technologies to " "power their cloud networking. The Networking service also provides an API to " "configure and manage a variety of network services ranging from L3 " "forwarding and :term:`NAT` to load balancing, perimeter firewalls, and " "virtual private networks." msgstr "" #: ../intro-os-networking.rst:21 msgid "It includes the following components:" msgstr "" #: ../intro-os-networking.rst:24 msgid "" "The OpenStack Networking API includes support for Layer 2 networking and :" "term:`IP address management (IPAM) `, as well " "as an extension for a Layer 3 router construct that enables routing between " "Layer 2 networks and gateways to external networks. OpenStack Networking " "includes a growing list of plug-ins that enable interoperability with " "various commercial and open source network technologies, including routers, " "switches, virtual switches and software-defined networking (SDN) controllers." msgstr "" #: ../intro-os-networking.rst:31 msgid "API server" msgstr "" #: ../intro-os-networking.rst:34 msgid "" "Plugs and unplugs ports, creates networks or subnets, and provides IP " "addressing. The chosen plug-in and agents differ depending on the vendor and " "technologies used in the particular cloud. It is important to mention that " "only one plug-in can be used at a time." msgstr "" #: ../intro-os-networking.rst:37 msgid "OpenStack Networking plug-in and agents" msgstr "" #: ../intro-os-networking.rst:40 msgid "" "Accepts and routes RPC requests between agents to complete API operations. " "Message queue is used in the ML2 plug-in for RPC between the neutron server " "and neutron agents that run on each hypervisor, in the ML2 mechanism drivers " "for :term:`Open vSwitch` and :term:`Linux bridge`." msgstr "" #: ../intro-os-networking.rst:43 msgid "Messaging queue" msgstr "" #: ../intro-os-networking.rst:46 msgid "Concepts" msgstr "" #: ../intro-os-networking.rst:48 msgid "" "To configure rich network topologies, you can create and configure networks " "and subnets and instruct other OpenStack services like Compute to attach " "virtual devices to ports on these networks. OpenStack Compute is a prominent " "consumer of OpenStack Networking to provide connectivity for its instances. " "In particular, OpenStack Networking supports each tenant having multiple " "private networks and enables tenants to choose their own IP addressing " "scheme, even if those IP addresses overlap with those that other tenants " "use. There are two types of network, tenant and provider networks. It is " "possible to share any of these types of networks among tenants as part of " "the network creation process." msgstr "" #: ../intro-os-networking.rst:63 msgid "Provider networks" msgstr "" #: ../intro-os-networking.rst:65 msgid "" "Provider networks offer layer-2 connectivity to instances with optional " "support for DHCP and metadata services. These networks connect, or map, to " "existing layer-2 networks in the data center, typically using VLAN (802.1q) " "tagging to identify and separate them." msgstr "" #: ../intro-os-networking.rst:70 msgid "" "Provider networks generally offer simplicity, performance, and reliability " "at the cost of flexibility. Only administrators can manage provider networks " "because they require configuration of physical network infrastructure. Also, " "provider networks only handle layer-2 connectivity for instances, thus " "lacking support for features such as routers and floating IP addresses." msgstr "" #: ../intro-os-networking.rst:76 msgid "" "In many cases, operators who are already familiar with virtual networking " "architectures that rely on physical network infrastructure for layer-2, " "layer-3, or other services can seamlessly deploy the OpenStack Networking " "service. In particular, provider networks appeal to operators looking to " "migrate from the Compute networking service (nova-network) to the OpenStack " "Networking service. Over time, operators can build on this minimal " "architecture to enable more cloud networking features." msgstr "" #: ../intro-os-networking.rst:84 msgid "" "In general, the OpenStack Networking software components that handle layer-3 " "operations impact performance and reliability the most. To improve " "performance and reliability, provider networks move layer-3 operations to " "the physical network infrastructure." msgstr "" #: ../intro-os-networking.rst:89 msgid "" "In one particular use case, the OpenStack deployment resides in a mixed " "environment with conventional virtualization and bare-metal hosts that use a " "sizable physical network infrastructure. Applications that run inside the " "OpenStack deployment might require direct layer-2 access, typically using " "VLANs, to applications outside of the deployment." msgstr "" #: ../intro-os-networking.rst:98 msgid "Self-service networks" msgstr "" #: ../intro-os-networking.rst:100 msgid "" "Self-service networks primarily enable general (non-privileged) projects to " "manage networks without involving administrators. These networks are " "entirely virtual and require virtual routers to interact with provider and " "external networks such as the Internet. Self-service networks also usually " "provide DHCP and metadata services to instances." msgstr "" #: ../intro-os-networking.rst:106 msgid "" "In most cases, self-service networks use overlay protocols such as VXLAN or " "GRE because they can support many more networks than layer-2 segmentation " "using VLAN tagging (802.1q). Furthermore, VLANs typically require additional " "configuration of physical network infrastructure." msgstr "" #: ../intro-os-networking.rst:111 msgid "" "IPv4 self-service networks typically use private IP address ranges (RFC1918) " "and interact with provider networks via source NAT on virtual routers. " "Floating IP addresses enable access to instances from provider networks via " "destination NAT on virtual routers. IPv6 self-service networks always use " "public IP address ranges and interact with provider networks via virtual " "routers with static routes." msgstr "" #: ../intro-os-networking.rst:118 msgid "" "The Networking service implements routers using a layer-3 agent that " "typically resides at least one network node. Contrary to provider networks " "that connect instances to the physical network infrastructure at layer-2, " "self-service networks must traverse a layer-3 agent. Thus, oversubscription " "or failure of a layer-3 agent or network node can impact a significant " "quantity of self-service networks and instances using them. Consider " "implementing one or more high-availability features to increase redundancy " "and performance of self-service networks." msgstr "" #: ../intro-os-networking.rst:127 msgid "" "Users create tenant networks for connectivity within projects. By default, " "they are fully isolated and are not shared with other projects. OpenStack " "Networking supports the following types of network isolation and overlay " "technologies." msgstr "" #: ../intro-os-networking.rst:132 msgid "" "All instances reside on the same network, which can also be shared with the " "hosts. No VLAN tagging or other network segregation takes place." msgstr "" #: ../intro-os-networking.rst:136 msgid "" "Networking allows users to create multiple provider or tenant networks using " "VLAN IDs (802.1Q tagged) that correspond to VLANs present in the physical " "network. This allows instances to communicate with each other across the " "environment. They can also communicate with dedicated servers, firewalls, " "load balancers, and other networking infrastructure on the same layer 2 VLAN." msgstr "" #: ../intro-os-networking.rst:144 msgid "" "VXLAN and GRE are encapsulation protocols that create overlay networks to " "activate and control communication between compute instances. A Networking " "router is required to allow traffic to flow outside of the GRE or VXLAN " "tenant network. A router is also required to connect directly-connected " "tenant networks with external networks, including the Internet. The router " "provides the ability to connect to instances directly from an external " "network using floating IP addresses." msgstr "" #: ../intro-os-networking.rst:150 msgid "GRE and VXLAN" msgstr "" # #-#-#-#-# intro-os-networking.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# ops-resource-purge.pot (Networking Guide 0.9) #-#-#-#-# #: ../intro-os-networking.rst:156 ../ops-resource-purge.rst:11 msgid "Subnets" msgstr "" #: ../intro-os-networking.rst:158 msgid "" "A block of IP addresses and associated configuration state. This is also " "known as the native IPAM (IP Address Management) provided by the networking " "service for both tenant and provider networks. Subnets are used to allocate " "IP addresses when new ports are created on a network." msgstr "" #: ../intro-os-networking.rst:167 msgid "" "End users normally can create subnets with any valid IP addresses without " "other restrictions. However, in some cases, it is nice for the admin or the " "tenant to pre-define a pool of addresses from which to create subnets with " "automatic allocation." msgstr "" #: ../intro-os-networking.rst:172 msgid "" "Using subnet pools constrains what addresses can be used by requiring that " "every subnet be within the defined pool. It also prevents address reuse or " "overlap by two subnets from the same pool." msgstr "" #: ../intro-os-networking.rst:176 msgid "See :ref:`config-subnet-pools` for more information." msgstr "" #: ../intro-os-networking.rst:181 msgid "" "A port is a connection point for attaching a single device, such as the NIC " "of a virtual server, to a virtual network. The port also describes the " "associated network configuration, such as the MAC and IP addresses to be " "used on that port." msgstr "" #: ../intro-os-networking.rst:189 msgid "" "Routers provide virtual layer-3 services such as routing and NAT between " "self-service and provider networks or among self-service networks belonging " "to a project. The Networking service uses a layer-3 agent to manage routers " "via namespaces." msgstr "" # #-#-#-#-# intro-os-networking.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# ops-resource-purge.pot (Networking Guide 0.9) #-#-#-#-# #: ../intro-os-networking.rst:195 ../ops-resource-purge.rst:16 msgid "Security groups" msgstr "" #: ../intro-os-networking.rst:197 msgid "" "Security groups provide a container for virtual firewall rules that control " "ingress (inbound to instances) and egress (outbound from instances) network " "traffic at the port level. Security groups use a default deny policy and " "only contain rules that allow specific traffic. Each port can reference one " "or more security groups in an additive fashion. The firewall driver " "translates security group rules to a configuration for the underlying packet " "filtering technology such as ``iptables``." msgstr "" #: ../intro-os-networking.rst:205 msgid "" "Each project contains a ``default`` security group that allows all egress " "traffic and denies all ingress traffic. You can change the rules in the " "``default`` security group. If you launch an instance without specifying a " "security group, the ``default`` security group automatically applies to it. " "Similarly, if you create a port without specifying a security group, the " "``default`` security group automatically applies to it." msgstr "" #: ../intro-os-networking.rst:214 msgid "" "If you use the metadata service, removing the default egress rules denies " "access to TCP port 80 on 169.254.169.254, thus preventing instances from " "retrieving metadata." msgstr "" #: ../intro-os-networking.rst:218 msgid "" "Security group rules are stateful. Thus, allowing ingress TCP port 22 for " "secure shell automatically creates rules that allow return egress traffic " "and ICMP error messages involving those TCP connections." msgstr "" #: ../intro-os-networking.rst:222 msgid "" "By default, all security groups contain a series of basic (sanity) and anti-" "spoofing rules that perform the following actions:" msgstr "" #: ../intro-os-networking.rst:225 msgid "" "Allow egress traffic only if it uses the source MAC and IP addresses of the " "port for the instance, source MAC and IP combination in ``allowed-address-" "pairs``, or valid MAC address (port or ``allowed-address-pairs``) and " "associated EUI64 link-local IPv6 address." msgstr "" #: ../intro-os-networking.rst:229 msgid "" "Allow egress DHCP discovery and request messages that use the source MAC " "address of the port for the instance and the unspecified IPv4 address " "(0.0.0.0)." msgstr "" #: ../intro-os-networking.rst:232 msgid "" "Allow ingress DHCP and DHCPv6 responses from the DHCP server on the subnet " "so instances can acquire IP addresses." msgstr "" #: ../intro-os-networking.rst:234 msgid "" "Deny egress DHCP and DHCPv6 responses to prevent instances from acting as " "DHCP(v6) servers." msgstr "" #: ../intro-os-networking.rst:236 msgid "" "Allow ingress/egress ICMPv6 MLD, neighbor solicitation, and neighbor " "discovery messages so instances can discover neighbors and join multicast " "groups." msgstr "" #: ../intro-os-networking.rst:239 msgid "" "Deny egress ICMPv6 router advertisements to prevent instances from acting as " "IPv6 routers and forwarding IPv6 traffic for other instances." msgstr "" #: ../intro-os-networking.rst:241 msgid "" "Allow egress ICMPv6 MLD reports (v1 and v2) and neighbor solicitation " "messages that use the source MAC address of a particular instance and the " "unspecified IPv6 address (::). Duplicate address detection (DAD) relies on " "these messages." msgstr "" #: ../intro-os-networking.rst:245 msgid "" "Allow egress non-IP traffic from the MAC address of the port for the " "instance and any additional MAC addresses in ``allowed-address-pairs`` on " "the port for the instance." msgstr "" #: ../intro-os-networking.rst:249 msgid "" "Although non-IP traffic, security groups do not implicitly allow all ARP " "traffic. Separate ARP filtering rules prevent instances from using ARP to " "intercept traffic for another instance. You cannot disable or remove these " "rules." msgstr "" #: ../intro-os-networking.rst:254 msgid "" "You can disable security groups including basic and anti-spoofing rules by " "setting the port attribute ``port_security_enabled`` to ``False``." msgstr "" #: ../intro-os-networking.rst:258 msgid "Extensions" msgstr "" #: ../intro-os-networking.rst:260 msgid "" "The OpenStack Networking service is extensible. Extensions serve two " "purposes: they allow the introduction of new features in the API without " "requiring a version change and they allow the introduction of vendor " "specific niche functionality. Applications can programmatically list " "available extensions by performing a GET on the :code:`/extensions` URI. " "Note that this is a versioned request; that is, an extension available in " "one API version might not be available in another." msgstr "" #: ../intro-os-networking.rst:272 msgid "" "The optional DHCP service manages IP addresses for instances on provider and " "self-service networks. The Networking service implements the DHCP service " "using an agent that manages ``qdhcp`` namespaces and the ``dnsmasq`` service." msgstr "" #: ../intro-os-networking.rst:278 ../intro-os-networking.rst:322 msgid "Metadata" msgstr "" #: ../intro-os-networking.rst:280 msgid "" "The optional metadata service provides an API for instances to obtain " "metadata such as SSH keys." msgstr "" #: ../intro-os-networking.rst:284 msgid "Service and component hierarchy" msgstr "" #: ../intro-os-networking.rst:289 msgid "Provides API, manages database, etc." msgstr "" #: ../intro-os-networking.rst:292 msgid "Plug-ins" msgstr "" #: ../intro-os-networking.rst:294 msgid "Manages agents" msgstr "" #: ../intro-os-networking.rst:299 msgid "Provides layer 2/3 connectivity to instances" msgstr "" #: ../intro-os-networking.rst:301 msgid "Handles physical-virtual network transition" msgstr "" #: ../intro-os-networking.rst:303 msgid "Handles metadata, etc." msgstr "" #: ../intro-os-networking.rst:306 msgid "Layer 2 (Ethernet and Switching)" msgstr "" #: ../intro-os-networking.rst:308 msgid "Linux Bridge" msgstr "" #: ../intro-os-networking.rst:313 msgid "Layer 3 (IP and Routing)" msgstr "" #: ../intro-os-networking.rst:315 msgid "L3" msgstr "" # #-#-#-#-# intro-os-networking.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# misc.pot (Networking Guide 0.9) #-#-#-#-# #: ../intro-os-networking.rst:320 ../misc.rst:5 msgid "Miscellaneous" msgstr "" #: ../intro-os-networking.rst:325 msgid "Services" msgstr "" #: ../intro-os-networking.rst:328 msgid "Routing services" msgstr "" #: ../intro-os-networking.rst:333 msgid "" "The Virtual Private Network-as-a-Service (VPNaaS) is a neutron extension " "that introduces the VPN feature set." msgstr "" #: ../intro-os-networking.rst:339 msgid "" "The Load-Balancer-as-a-Service (LBaaS) API provisions and configures load " "balancers. The reference implementation is based on the HAProxy software " "load balancer." msgstr "" #: ../intro-os-networking.rst:346 msgid "" "The Firewall-as-a-Service (FWaaS) API is an experimental API that enables " "early adopters and vendors to test their networking implementations." msgstr "" #: ../intro-overlay-protocols.rst:5 msgid "Overlay (tunnel) protocols" msgstr "" #: ../intro-overlay-protocols.rst:7 msgid "" "Tunneling is a mechanism that makes transfer of payloads feasible over an " "incompatible delivery network. It allows the network user to gain access to " "denied or insecure networks. Data encryption may be employed to transport " "the payload, ensuring that the encapsulated user network data appears as " "public even though it is private and can easily pass the conflicting network." msgstr "" #: ../intro-overlay-protocols.rst:15 msgid "Generic routing encapsulation (GRE)" msgstr "" #: ../intro-overlay-protocols.rst:17 msgid "" "Generic routing encapsulation (GRE) is a protocol that runs over IP and is " "employed when delivery and payload protocols are compatible but payload " "addresses are incompatible. For instance, a payload might think it is " "running on a datalink layer but it is actually running over a transport " "layer using datagram protocol over IP. GRE creates a private point-to-point " "connection and works by encapsulating a payload. GRE is a foundation " "protocol for other tunnel protocols but the GRE tunnels provide only weak " "authentication." msgstr "" #: ../intro-overlay-protocols.rst:28 msgid "Virtual extensible local area network (VXLAN)" msgstr "" #: ../intro-overlay-protocols.rst:30 msgid "" "The purpose of VXLAN is to provide scalable network isolation. VXLAN " "underlay can spread over two or more layer-3 network domains. It allows an " "overlay layer-2 network to spread across multiple underlay layer-3 network " "domains." msgstr "" #: ../intro.rst:5 msgid "Introduction" msgstr "" #: ../intro.rst:7 msgid "" "The OpenStack :term:`Networking service` provides an API that allows users " "to set up and define network connectivity and addressing in the cloud. The " "project code-name for Networking services is neutron. OpenStack Networking " "handles the creation and management of a virtual networking infrastructure, " "including networks, switches, subnets, and routers for devices managed by " "the OpenStack Compute service (nova). Advanced services such as firewalls " "or :term:`virtual private networks (VPNs) ` " "can also be used." msgstr "" #: ../intro.rst:16 msgid "" "OpenStack Networking consists of the neutron-server, a database for " "persistent storage, and any number of plug-in agents, which provide other " "services such as interfacing with native Linux networking mechanisms, " "external devices, or SDN controllers." msgstr "" #: ../intro.rst:21 msgid "" "OpenStack Networking is entirely standalone and can be deployed to a " "dedicated host. If your deployment uses a controller host to run centralized " "Compute components, you can deploy the Networking server to that specific " "host instead." msgstr "" #: ../intro.rst:26 msgid "OpenStack Networking integrates with various OpenStack components:" msgstr "" #: ../intro.rst:29 msgid "" "OpenStack :term:`Identity service` (keystone) is used for authentication and " "authorization of API requests." msgstr "" #: ../intro.rst:32 msgid "" "OpenStack :term:`Compute service` (nova) is used to plug each virtual NIC on " "the VM into a particular network." msgstr "" #: ../intro.rst:35 msgid "" "OpenStack :term:`Dashboard` (horizon) is used by administrators and tenant " "users to create and manage network services through a web-based graphical " "interface." msgstr "" #: ../intro.rst:41 msgid "" "To reduce clutter, this guide removes command output without relevance to " "the particular action." msgstr "" #: ../migration-classic-to-l3ha.rst:5 msgid "Add VRRP to an existing router" msgstr "" #: ../migration-classic-to-l3ha.rst:7 msgid "" "This section describes the process of migrating from a classic router to an " "L3 HA router, which is available starting from the Mitaka release." msgstr "" #: ../migration-classic-to-l3ha.rst:10 msgid "" "Similar to the classic scenario, all network traffic on a project network " "that requires routing actively traverses only one network node regardless of " "the quantity of network nodes providing HA for the router. Therefore, this " "high-availability implementation primarily addresses failure situations " "instead of bandwidth constraints that limit performance. However, it " "supports random distribution of routers on different network nodes to reduce " "the chances of bandwidth constraints and to improve scaling." msgstr "" #: ../migration-classic-to-l3ha.rst:18 msgid "" "This section references parts of :ref:`deploy-lb-ha-vrrp` and :ref:`deploy-" "ovs-ha-vrrp`. For details regarding needed infrastructure and configuration " "to allow actual L3 HA deployment, read the relevant guide before continuing " "with the migration process." msgstr "" # #-#-#-#-# migration-classic-to-l3ha.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# migration.pot (Networking Guide 0.9) #-#-#-#-# #: ../migration-classic-to-l3ha.rst:24 ../migration.rst:5 msgid "Migration" msgstr "" #: ../migration-classic-to-l3ha.rst:26 msgid "" "The migration process is quite simple, it involves turning down the router " "by setting the router's ``admin_state_up`` attribute to ``False``, upgrading " "the router to L3 HA and then setting the router's ``admin_state_up`` " "attribute back to ``True``." msgstr "" #: ../migration-classic-to-l3ha.rst:33 ../migration-classic-to-l3ha.rst:110 msgid "" "Once starting the migration, south-north connections (instances to internet) " "will be severed. New connections will be able to start only when the " "migration is complete." msgstr "" #: ../migration-classic-to-l3ha.rst:37 ../migration-classic-to-l3ha.rst:114 msgid "Here is the router we have used in our demonstration:" msgstr "" #: ../migration-classic-to-l3ha.rst:58 ../migration-classic-to-l3ha.rst:135 msgid "" "Set the admin_state_up to ``False``. This will severe south-north " "connections until admin_state_up is set to ``True`` again." msgstr "" #: ../migration-classic-to-l3ha.rst:66 ../migration-classic-to-l3ha.rst:143 msgid "Set the ``ha`` attribute of the router to ``True``." msgstr "" #: ../migration-classic-to-l3ha.rst:73 ../migration-classic-to-l3ha.rst:150 msgid "" "Set the admin_state_up to ``True``. After this, south-north connections can " "start." msgstr "" #: ../migration-classic-to-l3ha.rst:81 msgid "Make sure that the router's ``ha`` attribute has changed to ``True``." msgstr "" #: ../migration-classic-to-l3ha.rst:103 msgid "L3 HA to Legacy" msgstr "" #: ../migration-classic-to-l3ha.rst:105 msgid "" "To return to classic mode, turn down the router again, turning off L3 HA and " "starting the router again." msgstr "" #: ../migration-classic-to-l3ha.rst:158 msgid "Make sure that the router's ``ha`` attribute has changed to ``False``." msgstr "" #: ../migration-database.rst:5 msgid "Database" msgstr "" #: ../migration-database.rst:7 msgid "" "The upgrade of the Networking service database is implemented with Alembic " "migration chains. The migrations in the ``alembic/versions`` contain the " "changes needed to migrate from older Networking service releases to newer " "ones." msgstr "" #: ../migration-database.rst:11 msgid "" "Since Liberty, Networking maintains two parallel Alembic migration branches." msgstr "" #: ../migration-database.rst:13 msgid "" "The first branch is called expand and is used to store expansion-only " "migration rules. These rules are strictly additive and can be applied while " "the Neutron server is running." msgstr "" #: ../migration-database.rst:17 msgid "" "The second branch is called contract and is used to store those migration " "rules that are not safe to apply while Neutron server is running." msgstr "" #: ../migration-database.rst:20 msgid "" "The intent of separate branches is to allow invoking those safe migrations " "from the expand branch while the Neutron server is running and therefore " "reducing downtime needed to upgrade the service." msgstr "" #: ../migration-database.rst:24 msgid "" "A database management command-line tool uses the Alembic library to manage " "the migration." msgstr "" #: ../migration-database.rst:28 msgid "Database management command-line tool" msgstr "" #: ../migration-database.rst:30 msgid "" "The database management command-line tool is called :command:`neutron-db-" "manage`. Pass the :option:`--help` option to the tool for usage information." msgstr "" #: ../migration-database.rst:34 msgid "The tool takes some options followed by some commands:" msgstr "" #: ../migration-database.rst:40 msgid "" "The tool needs to access the database connection string, which is provided " "in the ``neutron.conf`` configuration file in an installation. The tool " "automatically reads from ``/etc/neutron/neutron.conf`` if it is present. If " "the configuration is in a different location, use the following command:" msgstr "" #: ../migration-database.rst:49 msgid "Multiple :option:`--config-file` options can be passed if needed." msgstr "" #: ../migration-database.rst:51 msgid "" "Instead of reading the DB connection from the configuration file(s), you can " "use the :option:`--database-connection` option:" msgstr "" #: ../migration-database.rst:59 msgid "" "The `branches`, `current`, and `history` commands all accept a :option:`--" "verbose` option, which, when passed, will instruct :command:`neutron-db-" "manage` to display more verbose output for the specified command:" msgstr "" #: ../migration-database.rst:70 msgid "" "The tool usage examples below do not show the options. It is assumed that " "you use the options that you need for your environment." msgstr "" #: ../migration-database.rst:73 msgid "" "In new deployments, you start with an empty database and then upgrade to the " "latest database version using the following command:" msgstr "" #: ../migration-database.rst:80 msgid "" "After installing a new version of the Neutron server, upgrade the database " "using the following command:" msgstr "" #: ../migration-database.rst:87 msgid "" "In existing deployments, check the current database version using the " "following command:" msgstr "" #: ../migration-database.rst:94 msgid "To apply the expansion migration rules, use the following command:" msgstr "" #: ../migration-database.rst:100 msgid "To apply the non-expansive migration rules, use the following command:" msgstr "" #: ../migration-database.rst:106 msgid "" "To check if any contract migrations are pending and therefore if offline " "migration is required, use the following command:" msgstr "" #: ../migration-database.rst:115 msgid "" "Offline migration requires all Neutron server instances in the cluster to be " "shutdown before you apply any contract scripts." msgstr "" #: ../migration-database.rst:118 msgid "" "To generate a script of the command instead of operating immediately on the " "database, use the following command:" msgstr "" #: ../migration-database.rst:131 msgid "" "To migrate between specific migration versions, use the following command:" msgstr "" #: ../migration-database.rst:137 msgid "To upgrade the database incrementally, use the following command:" msgstr "" #: ../migration-database.rst:145 msgid "Database downgrade is not supported." msgstr "" #: ../migration-nova-network-to-neutron.rst:5 msgid "Legacy nova-network to OpenStack Networking (neutron)" msgstr "" #: ../migration-nova-network-to-neutron.rst:7 msgid "" "Two networking models exist in OpenStack. The first is called legacy " "networking (:term:`nova-network`) and it is a sub-process embedded in the " "Compute project (nova). This model has some limitations, such as creating " "complex network topologies, extending its back-end implementation to vendor-" "specific technologies, and providing tenant-specific networking elements. " "These limitations are the main reasons the OpenStack Networking (neutron) " "model was created." msgstr "" #: ../migration-nova-network-to-neutron.rst:15 msgid "" "This section describes the process of migrating clouds based on the legacy " "networking model to the OpenStack Networking model. This process requires " "additional changes to both compute and networking to support the migration. " "This document describes the overall process and the features required in " "both Networking and Compute." msgstr "" #: ../migration-nova-network-to-neutron.rst:21 msgid "" "The current process as designed is a minimally viable migration with the " "goal of deprecating and then removing legacy networking. Both the Compute " "and Networking teams agree that a one-button migration process from legacy " "networking to OpenStack Networking (neutron) is not an essential requirement " "for the deprecation and removal of the legacy networking at a future date. " "This section includes a process and tools which are designed to solve a " "simple use case migration." msgstr "" #: ../migration-nova-network-to-neutron.rst:29 msgid "" "Users are encouraged to take these tools, test them, provide feedback, and " "then expand on the feature set to suit their own deployments; deployers that " "refrain from participating in this process intending to wait for a path that " "better suits their use case are likely to be disappointed." msgstr "" #: ../migration-nova-network-to-neutron.rst:36 msgid "Impact and limitations" msgstr "" #: ../migration-nova-network-to-neutron.rst:38 msgid "" "The migration process from the legacy nova-network networking service to " "OpenStack Networking (neutron) has some limitations and impacts on the " "operational state of the cloud. It is critical to understand them in order " "to decide whether or not this process is acceptable for your cloud and all " "users." msgstr "" #: ../migration-nova-network-to-neutron.rst:45 msgid "Management impact" msgstr "" #: ../migration-nova-network-to-neutron.rst:47 msgid "" "The Networking REST API is publicly read-only until after the migration is " "complete. During the migration, Networking REST API is read-write only to " "nova-api, and changes to Networking are only allowed via nova-api." msgstr "" #: ../migration-nova-network-to-neutron.rst:52 msgid "" "The Compute REST API is available throughout the entire process, although " "there is a brief period where it is made read-only during a database " "migration. The Networking REST API will need to expose (to nova-api) all " "details necessary for reconstructing the information previously held in the " "legacy networking database." msgstr "" #: ../migration-nova-network-to-neutron.rst:58 msgid "" "Compute needs a per-hypervisor \"has_transitioned\" boolean change in the " "data model to be used during the migration process. This flag is no longer " "required once the process is complete." msgstr "" #: ../migration-nova-network-to-neutron.rst:63 msgid "Operations impact" msgstr "" #: ../migration-nova-network-to-neutron.rst:65 msgid "" "In order to support a wide range of deployment options, the migration " "process described here requires a rolling restart of hypervisors. The rate " "and timing of specific hypervisor restarts is under the control of the " "operator." msgstr "" #: ../migration-nova-network-to-neutron.rst:70 msgid "" "The migration may be paused, even for an extended period of time (for " "example, while testing or investigating issues) with some hypervisors on " "legacy networking and some on Networking, and Compute API remains fully " "functional. Individual hypervisors may be rolled back to legacy networking " "during this stage of the migration, although this requires an additional " "restart." msgstr "" #: ../migration-nova-network-to-neutron.rst:77 msgid "" "In order to support the widest range of deployer needs, the process " "described here is easy to automate but is not already automated. Deployers " "should expect to perform multiple manual steps or write some simple scripts " "in order to perform this migration." msgstr "" #: ../migration-nova-network-to-neutron.rst:83 msgid "Performance impact" msgstr "" #: ../migration-nova-network-to-neutron.rst:85 msgid "" "During the migration, nova-network API calls will go through an additional " "internal conversion to Networking calls. This will have different and likely " "poorer performance characteristics compared with either the pre-migration or " "post-migration APIs." msgstr "" #: ../migration-nova-network-to-neutron.rst:91 msgid "Migration process overview" msgstr "" #: ../migration-nova-network-to-neutron.rst:93 msgid "" "Start neutron-server in intended final config, except with REST API " "restricted to read-write only by nova-api." msgstr "" #: ../migration-nova-network-to-neutron.rst:95 msgid "Make the Compute REST API read-only." msgstr "" #: ../migration-nova-network-to-neutron.rst:96 msgid "" "Run a DB dump/restore tool that creates Networking data structures " "representing current legacy networking config." msgstr "" #: ../migration-nova-network-to-neutron.rst:98 msgid "" "Enable a nova-api proxy that recreates internal Compute objects from " "Networking information (via the Networking REST API)." msgstr "" #: ../migration-nova-network-to-neutron.rst:101 msgid "" "Make Compute REST API read-write again. This means legacy networking DB is " "now unused, new changes are now stored in the Networking DB, and no rollback " "is possible from here without losing those new changes." msgstr "" #: ../migration-nova-network-to-neutron.rst:108 msgid "" "At this moment the Networking DB is the source of truth, but nova-api is the " "only public read-write API." msgstr "" #: ../migration-nova-network-to-neutron.rst:111 msgid "" "Next, you'll need to migrate each hypervisor. To do that, follow these " "steps:" msgstr "" #: ../migration-nova-network-to-neutron.rst:113 msgid "" "Disable the hypervisor. This would be a good time to live migrate or " "evacuate the compute node, if supported." msgstr "" #: ../migration-nova-network-to-neutron.rst:115 msgid "Disable nova-compute." msgstr "" #: ../migration-nova-network-to-neutron.rst:116 msgid "Enable the Networking agent." msgstr "" #: ../migration-nova-network-to-neutron.rst:117 msgid "" "Set the \"has_transitioned\" flag in the Compute hypervisor database/config." msgstr "" #: ../migration-nova-network-to-neutron.rst:118 msgid "" "Reboot the hypervisor (or run \"smart\" live transition tool if available)." msgstr "" #: ../migration-nova-network-to-neutron.rst:119 msgid "Re-enable the hypervisor." msgstr "" #: ../migration-nova-network-to-neutron.rst:121 msgid "" "At this point, all compute nodes have been migrated, but they are still " "using the nova-api API and Compute gateways. Finally, enable OpenStack " "Networking by following these steps:" msgstr "" #: ../migration-nova-network-to-neutron.rst:125 msgid "" "Bring up the Networking (l3) nodes. The new routers will have identical MAC" "+IPs as old Compute gateways so some sort of immediate cutover is possible, " "except for stateful connections issues such as NAT." msgstr "" #: ../migration-nova-network-to-neutron.rst:129 msgid "Make the Networking API read-write and disable legacy networking." msgstr "" #: ../migration-nova-network-to-neutron.rst:131 msgid "Migration Completed!" msgstr "" #: ../misc-libvirt.rst:5 msgid "Disable libvirt networking" msgstr "" #: ../misc-libvirt.rst:7 msgid "" "Most OpenStack deployments use the `libvirt `__ toolkit " "for interacting with the hypervisor. Specifically, OpenStack Compute uses " "libvirt for tasks such as booting and terminating virtual machine instances. " "When OpenStack Compute boots a new instance, libvirt provides OpenStack with " "the VIF associated with the instance, and OpenStack Compute plugs the VIF " "into a virtual device provided by OpenStack Network. The libvirt toolkit " "itself does not provide any networking functionality in OpenStack " "deployments." msgstr "" #: ../misc-libvirt.rst:16 msgid "" "However, libvirt is capable of providing networking services to the virtual " "machines that it manages. In particular, libvirt can be configured to " "provide networking functionality akin to a simplified, single-node version " "of OpenStack. Users can use libvirt to create layer 2 networks that are " "similar to OpenStack Networking's networks, confined to a single node." msgstr "" #: ../misc-libvirt.rst:23 msgid "libvirt network implementation" msgstr "" #: ../misc-libvirt.rst:25 msgid "" "By default, libvirt's networking functionality is enabled, and libvirt " "creates a network when the system boots. To implement this network, libvirt " "leverages some of the same technologies that OpenStack Network does. In " "particular, libvirt uses:" msgstr "" #: ../misc-libvirt.rst:30 msgid "Linux bridging for implementing a layer 2 network" msgstr "" #: ../misc-libvirt.rst:31 msgid "dnsmasq for providing IP addresses to virtual machines using DHCP" msgstr "" #: ../misc-libvirt.rst:32 msgid "" "iptables to implement SNAT so instances can connect out to the public " "internet, and to ensure that virtual machines are permitted to communicate " "with dnsmasq using DHCP" msgstr "" #: ../misc-libvirt.rst:36 msgid "" "By default, libvirt creates a network named *default*. The details of this " "network may vary by distribution; on Ubuntu this network involves:" msgstr "" #: ../misc-libvirt.rst:39 msgid "" "a Linux bridge named ``virbr0`` with an IP address of ``192.168.122.1/24``" msgstr "" #: ../misc-libvirt.rst:40 msgid "" "a dnsmasq process that listens on the ``virbr0`` interface and hands out IP " "addresses in the range ``192.168.122.2-192.168.122.254``" msgstr "" #: ../misc-libvirt.rst:42 msgid "a set of iptables rules" msgstr "" #: ../misc-libvirt.rst:44 msgid "" "When libvirt boots a virtual machine, it places the machine's VIF in the " "bridge ``virbr0`` unless explicitly told not to." msgstr "" #: ../misc-libvirt.rst:47 msgid "" "On Ubuntu, the iptables ruleset that libvirt creates includes the following " "rules::" msgstr "" #: ../misc-libvirt.rst:70 msgid "" "The following shows the dnsmasq process that libvirt manages as it appears " "in the output of :command:`ps`::" msgstr "" #: ../misc-libvirt.rst:76 msgid "How to disable libvirt networks" msgstr "" #: ../misc-libvirt.rst:78 msgid "" "Although OpenStack does not make use of libvirt's networking, this " "networking will not interfere with OpenStack's behavior, and can be safely " "left enabled. However, libvirt's networking can be a nuisance when debugging " "OpenStack networking issues. Because libvirt creates an additional bridge, " "dnsmasq process, and iptables ruleset, these may distract an operator " "engaged in network troubleshooting. Unless you need to start up virtual " "machines using libvirt directly, you can safely disable libvirt's network." msgstr "" #: ../misc-libvirt.rst:87 msgid "To view the defined libvirt networks and their state:" msgstr "" #: ../misc-libvirt.rst:96 msgid "To deactivate the libvirt network named ``default``:" msgstr "" #: ../misc-libvirt.rst:102 msgid "" "Deactivating the network will remove the ``virbr0`` bridge, terminate the " "dnsmasq process, and remove the iptables rules." msgstr "" #: ../misc-libvirt.rst:105 msgid "To prevent the network from automatically starting on boot:" msgstr "" #: ../misc-libvirt.rst:111 msgid "To activate the network after it has been deactivated:" msgstr "" #: ../ops-ip-availability.rst:5 msgid "IP availability metrics" msgstr "" #: ../ops-ip-availability.rst:7 msgid "" "Network IP Availability is an information-only API extension that allows a " "user or process to determine the number of IP addresses that are consumed " "across networks and the allocation pools of their subnets. This extension " "was added to neutron in the Mitaka release." msgstr "" #: ../ops-ip-availability.rst:12 msgid "" "This section illustrates how you can get the Network IP address availability " "through the command-line interface." msgstr "" #: ../ops-ip-availability.rst:15 msgid "Get Network IP address availability for all IPv4 networks:" msgstr "" #: ../ops-ip-availability.rst:28 msgid "Get Network IP address availability for all IPv6 networks:" msgstr "" #: ../ops-ip-availability.rst:41 msgid "Get Network IP address availability statistics for a specific network:" msgstr "" #: ../ops-resource-purge.rst:5 msgid "Resource purge" msgstr "" #: ../ops-resource-purge.rst:7 msgid "" "The Networking service provides a purge mechanism to delete the following " "network resources for a project (tenant):" msgstr "" #: ../ops-resource-purge.rst:13 msgid "Router interfaces" msgstr "" #: ../ops-resource-purge.rst:15 msgid "Floating IP addresses" msgstr "" #: ../ops-resource-purge.rst:18 msgid "" "Typically, one uses this mechanism to delete networking resources for a " "defunct project regardless of its existence in the Identity service." msgstr "" #: ../ops-resource-purge.rst:23 msgid "Usage" msgstr "" #: ../ops-resource-purge.rst:25 msgid "" "Source the necessary project credentials. The administrative project can " "delete resources for all other projects. A regular project can delete its " "own network resources and those belonging to other projects for which it has " "sufficient access." msgstr "" #: ../ops-resource-purge.rst:30 msgid "Delete the network resources for a particular project." msgstr "" #: ../ops-resource-purge.rst:36 msgid "Replace ``PROJECT_ID`` with the project (tenant) ID." msgstr "" #: ../ops-resource-purge.rst:38 msgid "" "The command provides output that includes a completion percentage and the " "quantity of successful or unsuccessful network resource deletions. An " "unsuccessful deletion usually indicates sharing of a resource with one or " "more additional projects." msgstr "" #: ../ops-resource-purge.rst:49 msgid "The command also indicates if a project lacks network resources." msgstr "" #: ../ops-resource-tags.rst:5 msgid "Resource tags" msgstr "" #: ../ops-resource-tags.rst:7 msgid "" "Various virtual networking resources support tags for use by external " "systems or any other clients of the Networking service API." msgstr "" #: ../ops-resource-tags.rst:11 msgid "Use cases" msgstr "" #: ../ops-resource-tags.rst:13 msgid "" "The following use cases refer to adding tags to networks, but the same can " "be applicable to any other Networking service resource:" msgstr "" #: ../ops-resource-tags.rst:16 msgid "" "Ability to map different networks in different OpenStack locations to one " "logically same network (for multi-site OpenStack)." msgstr "" #: ../ops-resource-tags.rst:19 msgid "" "Ability to map IDs from different management/orchestration systems to " "OpenStack networks in mixed environments. For example, in the Kuryr project, " "the Docker network ID is mapped to the Neutron network ID." msgstr "" #: ../ops-resource-tags.rst:23 msgid "Ability to leverage tags by deployment tools." msgstr "" #: ../ops-resource-tags.rst:25 msgid "" "Ability to tag information about provider networks (for example, high-" "bandwidth, low-latency, and so on)." msgstr "" #: ../ops-resource-tags.rst:29 msgid "Filtering with tags" msgstr "" #: ../ops-resource-tags.rst:31 msgid "" "The API allows searching/filtering of the ``GET /v2.0/networks`` API. The " "following query parameters are supported:" msgstr "" #: ../ops-resource-tags.rst:34 msgid "``tags``" msgstr "" #: ../ops-resource-tags.rst:35 msgid "``tags-any``" msgstr "" #: ../ops-resource-tags.rst:36 msgid "``not-tags``" msgstr "" #: ../ops-resource-tags.rst:37 msgid "``not-tags-any``" msgstr "" #: ../ops-resource-tags.rst:39 msgid "" "To request the list of networks that have a single tag, ``tags`` argument " "should be set to the desired tag name. Example::" msgstr "" #: ../ops-resource-tags.rst:44 msgid "" "To request the list of networks that have two or more tags, the ``tags`` " "argument should be set to the list of tags, separated by commas. In this " "case, the tags given must all be present for a network to be included in the " "query result. Example that returns networks that have the \"red\" and \"blue" "\" tags::" msgstr "" #: ../ops-resource-tags.rst:51 msgid "" "To request the list of networks that have one or more of a list of given " "tags, the ``tags-any`` argument should be set to the list of tags, separated " "by commas. In this case, as long as one of the given tags is present, the " "network will be included in the query result. Example that returns the " "networks that have the \"red\" or the \"blue\" tag::" msgstr "" #: ../ops-resource-tags.rst:59 msgid "" "To request the list of networks that do not have one or more tags, the ``not-" "tags`` argument should be set to the list of tags, separated by commas. In " "this case, only the networks that do not have any of the given tags will be " "included in the query results. Example that returns the networks that do not " "have either \"red\" or \"blue\" tag::" msgstr "" #: ../ops-resource-tags.rst:67 msgid "" "To request the list of networks that do not have at least one of a list of " "tags, the ``not-tags-any`` argument should be set to the list of tags, " "separated by commas. In this case, only the networks that do not have at " "least one of the given tags will be included in the query result. Example " "that returns the networks that do not have the \"red\" tag, or do not have " "the \"blue\" tag::" msgstr "" #: ../ops-resource-tags.rst:76 msgid "" "The ``tags``, ``tags-any``, ``not-tags``, and ``not-tags-any`` arguments can " "be combined to build more complex queries. Example::" msgstr "" #: ../ops-resource-tags.rst:81 msgid "" "The above example returns any networks that have the \"red\" and \"blue\" " "tags, plus at least one of \"green\" and \"orange\"." msgstr "" #: ../ops-resource-tags.rst:84 msgid "Complex queries may have contradictory parameters. Example::" msgstr "" #: ../ops-resource-tags.rst:88 msgid "" "In this case, we should let the Networking service find these networks. " "Obviously, there are no such networks and the service will return an empty " "list." msgstr "" #: ../ops-resource-tags.rst:95 msgid "Add a tag to a resource:" msgstr "" #: ../ops-resource-tags.rst:121 msgid "Remove a tag from a resource:" msgstr "" #: ../ops-resource-tags.rst:147 msgid "Replace all tags on the resource:" msgstr "" #: ../ops-resource-tags.rst:174 msgid "Clear tags from a resource:" msgstr "" #: ../ops-resource-tags.rst:200 msgid "" "Get list of resources with tag filters from networks. The networks are: test-" "net1 with \"red\" tag, test-net2 with \"red\" and \"blue\" tags, test-net3 " "with \"red\", \"blue\", and \"green\" tags, and test-net4 with \"green\" tag." msgstr "" #: ../ops-resource-tags.rst:204 msgid "Get list of resources with ``tags`` filter:" msgstr "" #: ../ops-resource-tags.rst:216 msgid "Get list of resources with ``tags-any`` filter:" msgstr "" #: ../ops-resource-tags.rst:229 msgid "Get list of resources with ``not-tags`` filter:" msgstr "" #: ../ops-resource-tags.rst:241 msgid "Get list of resources with ``not-tags-any`` filter:" msgstr "" #: ../ops-resource-tags.rst:255 msgid "" "Filtering resources with a tag whose name contains a comma is not supported. " "Thus, do not put such a tag name to resources." msgstr "" #: ../ops-resource-tags.rst:259 msgid "Future support" msgstr "" #: ../ops-resource-tags.rst:261 msgid "" "In future release, the Networking service will support setting tags to " "resources other than network." msgstr "" # #-#-#-#-# config-macvtap.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-config-neutron-common.txt:22 msgid "" "See the `Installation Guide `_ for your OpenStack " "release to obtain the appropriate configuration for the ``[database]``, " "``[keystone_authtoken]``, ``[oslo_messaging_rabbit]``, and ``[nova]`` " "sections." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp-initialnetworks.txt:1 msgid "" "Similar to the self-service deployment example, this configuration supports " "multiple VXLAN self-service networks. After enabling high-availability, all " "additional routers use VRRP. The following procedure creates an additional " "self-service network and router. The Networking service also supports adding " "high-availability to existing routers. However, the procedure requires " "administratively disabling and enabling each router which temporarily " "interrupts network connectivity for self-service networks with interfaces on " "that router." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp-verifyfailoveroperation.txt:1 msgid "" "Begin a continuous ``ping`` of both the floating IPv4 address and IPv6 " "address of the instance. While performing the next three steps, you should " "see a minimal, if any, interruption of connectivity to the instance." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp-verifyfailoveroperation.txt:6 msgid "" "On the network node with the master router, administratively disable the " "overlay network interface." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp-verifyfailoveroperation.txt:9 msgid "" "On the other network node, verify promotion of the backup router to master " "router by noting addition of IP addresses to the interfaces in the " "``qrouter`` namespace." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp-verifyfailoveroperation.txt:13 msgid "" "On the original network node in step 2, administratively enable the overlay " "network interface. Note that the master router remains on the network node " "in step 3." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:2 msgid "" "Verify creation of the internal high-availability network that handles VRRP " "*heartbeat* traffic." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:14 msgid "" "On each network node, verify creation of a ``qrouter`` namespace with the " "same ID." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:17 #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:42 msgid "Network node 1:" msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:24 #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:82 msgid "Network node 2:" msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:33 msgid "" "The namespace for router 1 from :ref:`deploy-lb-selfservice` should only " "appear on network node 1 because of creation prior to enabling VRRP." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:37 msgid "" "On each network node, show the IP address of interfaces in the ``qrouter`` " "namespace. With the exception of the VRRP interface, only one namespace " "belonging to the master router instance contains IP addresses on the " "interfaces." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp-verifynetworkoperation.txt:107 msgid "The master router may reside on network node 2." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp.txt:1 msgid "" "This architecture example augments the self-service deployment example with " "a high-availability mechanism using the Virtual Router Redundancy Protocol " "(VRRP) via ``keepalived`` and provides failover of routing for self-service " "networks. It requires a minimum of two network nodes because VRRP creates " "one master (active) instance and at least one backup instance of each router." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp.txt:8 msgid "" "During normal operation, ``keepalived`` on the master router periodically " "transmits *heartbeat* packets over a hidden network that connects all VRRP " "routers for a particular project. Each project with VRRP routers uses a " "separate hidden network. By default this network uses the first value in the " "``tenant_network_types`` option in the ``ml2_conf.ini`` file. For additional " "control, you can specify the self-service network type and physical network " "name for the hidden network using the ``l3_ha_network_type`` and " "``l3_ha_network_name`` options in the ``neutron.conf`` file." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp.txt:17 msgid "" "If ``keepalived`` on the backup router stops receiving *heartbeat* packets, " "it assumes failure of the master router and promotes the backup router to " "master router by configuring IP addresses on the interfaces in the " "``qrouter`` namespace. In environments with more than one backup router, " "``keepalived`` on the backup router with the next highest priority promotes " "that backup router to master router." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp.txt:26 msgid "" "This high-availability mechanism configures VRRP using the same priority for " "all routers. Therefore, VRRP promotes the backup router with the highest IP " "address to the master router." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp.txt:30 msgid "" "Interruption of VRRP *heartbeat* traffic between network nodes, typically " "due to a network interface or physical network infrastructure failure, " "triggers a failover. Restarting the layer-3 agent, or failure of it, does " "not trigger a failover providing ``keepalived`` continues to operate." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp.txt:38 msgid "" "Instance network traffic on self-service networks using a particular router " "only traverses the master instance of that router. Thus, resource " "limitations of a particular network node can impact all master instances of " "routers on that network node without triggering failover to another network " "node. However, you can configure the scheduler to distribute the master " "instance of each router uniformly across a pool of network nodes to reduce " "the chance of resource contention on any particular network node." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp.txt:47 msgid "" "Only supports self-service networks using a router. Provider networks " "operate at layer-2 and rely on physical network infrastructure for " "redundancy." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp.txt:51 msgid "" "For instances with a floating IPv4 address, maintains state of network " "connections during failover as a side effect of 1:1 static NAT. The " "mechanism does not actually implement connection tracking." msgstr "" # #-#-#-#-# deploy-lb-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-vrrp.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-ha-vrrp.txt:55 msgid "" "For production deployments, we recommend at least three network nodes with " "sufficient resources to handle network traffic for the entire environment if " "one network node fails. Also, the remaining two nodes can continue to " "provide redundancy." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-initialnetworks.txt:1 msgid "" "The configuration supports one flat or multiple VLAN provider networks. For " "simplicity, the following procedure creates one flat provider network." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-initialnetworks.txt:5 msgid "Create a flat network." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-initialnetworks.txt:38 msgid "The ``shared`` option allows any project to use this network." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-initialnetworks.txt:42 msgid "" "To create a VLAN network instead of a flat network, change ``--provider:" "network_type flat`` to ``--provider:network_type vlan`` and add ``--provider:" "segmentation_id`` with a value referencing the VLAN ID." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-initialnetworks.txt:47 msgid "Create a IPv4 subnet on the provider network." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-initialnetworks.txt:76 msgid "Create a IPv6 subnet on the provider network." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-initialnetworks.txt:106 msgid "" "By default, IPv6 provider networks rely on physical network infrastructure " "for stateless address autoconfiguration (SLAAC) and router advertisement." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-networktrafficflow.txt:1 #: ../shared/deploy-selfservice-networktrafficflow.txt:1 msgid "" "The following sections describe the flow of network traffic in several " "common scenarios. *North-south* network traffic travels between an instance " "and external network such as the Internet. *East-west* network traffic " "travels between instances on the same or different networks. In all " "scenarios, the physical network infrastructure handles switching and routing " "among provider networks and external networks such as the Internet. Each " "case references one or more of the following components:" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-networktrafficflow.txt:9 msgid "Provider network 1 (VLAN)" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-networktrafficflow.txt:11 #: ../shared/deploy-selfservice-networktrafficflow.txt:11 msgid "VLAN ID 101 (tagged)" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-networktrafficflow.txt:12 msgid "IP address ranges 203.0.113.0/24 and fd00:203:0:113::/64" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-networktrafficflow.txt:13 msgid "Gateway (via physical network infrastructure)" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-networktrafficflow.txt:15 msgid "IP addresses 203.0.113.1 and fd00:203:0:113:0::1" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-networktrafficflow.txt:17 msgid "Provider network 2 (VLAN)" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-networktrafficflow.txt:19 msgid "VLAN ID 102 (tagged)" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-networktrafficflow.txt:20 msgid "IP address range 192.0.2.0/24 and fd00:192:0:2::/64" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-networktrafficflow.txt:21 msgid "Gateway" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-networktrafficflow.txt:23 msgid "IP addresses 192.0.2.1 and fd00:192:0:2::1" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-networktrafficflow.txt:27 msgid "IP addresses 203.0.113.101 and fd00:203:0:113:0::101" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-networktrafficflow.txt:31 msgid "IP addresses 192.0.2.101 and fd00:192:0:2:0::101" msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-verifynetworkoperation.txt:1 msgid "On each compute node, verify creation of the ``qdhcp`` namespace." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-verifynetworkoperation.txt:9 #: ../shared/deploy-selfservice-verifynetworkoperation.txt:17 msgid "" "Create the appropriate security group rules to allow ``ping`` and SSH access " "instances using the network." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-verifynetworkoperation.txt:14 msgid "" "Launch an instance with an interface on the provider network. For example, a " "CirrOS image using flavor ID 1." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-verifynetworkoperation.txt:22 msgid "Replace ``NETWORK_ID`` with the ID of the provider network." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-verifynetworkoperation.txt:37 msgid "" "The IPv4 and IPv6 addresses appear similar only for illustration purposes." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-verifynetworkoperation.txt:40 msgid "" "On the controller node or any host with access to the provider network, " "``ping`` the IPv4 and IPv6 addresses of the instance." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-verifynetworkoperation.txt:67 #: ../shared/deploy-selfservice-verifynetworkoperation.txt:119 msgid "Obtain access to the instance." msgstr "" # #-#-#-#-# deploy-lb-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-provider.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-provider-verifynetworkoperation.txt:68 #: ../shared/deploy-selfservice-verifynetworkoperation.txt:120 msgid "" "Test IPv4 and IPv6 connectivity to the Internet or other external network." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-initialnetworks.txt:1 msgid "" "The configuration supports multiple VXLAN self-service networks. For " "simplicity, the following procedure creates one self-service network and a " "router with a gateway on the flat provider network. The router uses NAT for " "IPv4 network traffic and directly routes IPv6 network traffic." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-initialnetworks.txt:8 msgid "" "IPv6 connectivity with self-service networks often requires addition of " "static routes to nodes and physical network infrastructure." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-initialnetworks.txt:12 msgid "" "Update the provider network to support external connectivity for self-" "service networks." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-initialnetworks.txt:135 msgid "Add the provider network as the gateway on the router." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-networktrafficflow.txt:9 msgid "Provider network (VLAN)" msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-networktrafficflow.txt:13 msgid "Self-service network 1 (VXLAN)" msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-networktrafficflow.txt:15 msgid "VXLAN ID (VNI) 101" msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-networktrafficflow.txt:17 msgid "Self-service network 2 (VXLAN)" msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-networktrafficflow.txt:19 msgid "VXLAN ID (VNI) 102" msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-networktrafficflow.txt:21 msgid "Self-service router" msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-networktrafficflow.txt:23 msgid "Gateway on the provider network" msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-networktrafficflow.txt:24 msgid "Interface on self-service network 1" msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-ha-dvr.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-networktrafficflow.txt:25 msgid "Interface on self-service network 2" msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-verifynetworkoperation.txt:1 msgid "On each compute node, verify creation of a second ``qdhcp`` namespace." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-verifynetworkoperation.txt:9 msgid "On the network node, verify creation of the ``qrouter`` namespace." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-verifynetworkoperation.txt:22 msgid "" "Launch an instance with an interface on the self-service network. For " "example, a CirrOS image using flavor ID 1." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-verifynetworkoperation.txt:29 msgid "Replace ``NETWORK_ID`` with the ID of the self-service network." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-verifynetworkoperation.txt:44 msgid "" "The IPv4 address resides in a private IP address range (RFC1918). Thus, the " "Networking service performs source network address translation (SNAT) for " "the instance to access external networks such as the Internet. Access from " "external networks such as the Internet to the instance requires a floating " "IPv4 address. The Networking service performs destination network address " "translation (DNAT) from the floating IPv4 address to the instance IPv4 " "address on the self-service network. On the other hand, the Networking " "service architecture for IPv6 lacks support for NAT due to the significantly " "larger address space and complexity of NAT. Thus, floating IP addresses do " "not exist for IPv6 and the Networking service only performs routing for IPv6 " "subnets on self-service networks. In other words, you cannot rely on NAT to " "\"hide\" instances with IPv4 and IPv6 addresses or only IPv6 addresses and " "must properly implement security groups to restrict access." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-verifynetworkoperation.txt:59 msgid "" "On the controller node or any host with access to the provider network, " "``ping`` the IPv6 address of the instance." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-verifynetworkoperation.txt:75 msgid "" "Optionally, enable IPv4 access from external networks such as the Internet " "to the instance." msgstr "" # #-#-#-#-# deploy-lb-selfservice.pot (Networking Guide 0.9) #-#-#-#-# # #-#-#-#-# deploy-ovs-selfservice.pot (Networking Guide 0.9) #-#-#-#-# #: ../shared/deploy-selfservice-verifynetworkoperation.txt:103 msgid "" "On the controller node or any host with access to the provider network, " "``ping`` the floating IPv4 address of the instance." msgstr ""