# SOME DESCRIPTIVE TITLE. # Copyright (C) 2016-2017, OpenStack contributors # This file is distributed under the same license as the Operations Guide package. # FIRST AUTHOR <EMAIL@ADDRESS>, YEAR. # #, fuzzy msgid "" msgstr "" "Project-Id-Version: Operations Guide 15.0\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2017-07-19 16:14+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n" "Language-Team: LANGUAGE <LL@li.org>\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" #: ../acknowledgements.rst:3 msgid "Acknowledgements" msgstr "" #: ../acknowledgements.rst:5 msgid "" "The OpenStack Foundation supported the creation of this book with plane " "tickets to Austin, lodging (including one adventurous evening without power " "after a windstorm), and delicious food. For about USD $10,000, we could " "collaborate intensively for a week in the same room at the Rackspace Austin " "office. The authors are all members of the OpenStack Foundation, which you " "can join. Go to the `Foundation web site `_." msgstr "" #: ../acknowledgements.rst:13 msgid "" "We want to acknowledge our excellent host Rackers at Rackspace in Austin:" msgstr "" #: ../acknowledgements.rst:16 msgid "" "Emma Richards of Rackspace Guest Relations took excellent care of our lunch " "orders and even set aside a pile of sticky notes that had fallen off the " "walls." msgstr "" #: ../acknowledgements.rst:20 msgid "" "Betsy Hagemeier, a Fanatical Executive Assistant, took care of a room " "reshuffle and helped us settle in for the week." msgstr "" #: ../acknowledgements.rst:23 msgid "" "The Real Estate team at Rackspace in Austin, also known as \"The Victors,\" " "were super responsive." msgstr "" #: ../acknowledgements.rst:26 msgid "" "Adam Powell in Racker IT supplied us with bandwidth each day and second " "monitors for those of us needing more screens." msgstr "" #: ../acknowledgements.rst:29 msgid "" "On Wednesday night we had a fun happy hour with the Austin OpenStack Meetup " "group, and Racker Katie Schmidt took great care of our group." msgstr "" #: ../acknowledgements.rst:32 msgid "We also had some excellent input from outside of the room:" msgstr "" #: ../acknowledgements.rst:34 msgid "" "Tim Bell from CERN gave us feedback on the outline before we started and " "reviewed it mid-week." msgstr "" #: ../acknowledgements.rst:37 msgid "" "Sébastien Han has written excellent blogs and generously gave his permission " "for re-use." msgstr "" #: ../acknowledgements.rst:40 msgid "" "Oisin Feeley read it, made some edits, and provided emailed feedback right " "when we asked." msgstr "" #: ../acknowledgements.rst:43 msgid "" "Inside the book sprint room with us each day was our book sprint facilitator " "Adam Hyde. Without his tireless support and encouragement, we would have " "thought a book of this scope was impossible in five days. Adam has proven " "the book sprint method effective again and again. He creates both tools " "and faith in collaborative authoring at `www.booksprints.net <http://www.booksprints.net>`_." msgstr "" #: ../acknowledgements.rst:50 msgid "" "We couldn't have pulled it off without so much supportive help and " "encouragement." msgstr "" #: ../app-crypt.rst:3 msgid "Tales From the Cryp^H^H^H^H Cloud" msgstr "" #: ../app-crypt.rst:5 msgid "" "Herein lies a selection of tales from OpenStack cloud operators. Read, and " "learn from their wisdom."
msgstr "" #: ../app-crypt.rst:9 msgid "Double VLAN" msgstr "" #: ../app-crypt.rst:11 msgid "" "I was on-site in Kelowna, British Columbia, Canada setting up a new " "OpenStack cloud. The deployment was fully automated: Cobbler deployed the OS " "on the bare metal, bootstrapped it, and Puppet took over from there. I had " "run the deployment scenario so many times in practice and took for granted " "that everything was working." msgstr "" #: ../app-crypt.rst:17 msgid "" "On my last day in Kelowna, I was in a conference call from my hotel. In the " "background, I was fooling around on the new cloud. I launched an instance " "and logged in. Everything looked fine. Out of boredom, I ran :command:`ps " "aux` and all of the sudden the instance locked up." msgstr "" #: ../app-crypt.rst:22 msgid "" "Thinking it was just a one-off issue, I terminated the instance and launched " "a new one. By then, the conference call ended and I was off to the data " "center." msgstr "" #: ../app-crypt.rst:26 msgid "" "At the data center, I was finishing up some tasks and remembered the lock-" "up. I logged into the new instance and ran :command:`ps aux` again. It " "worked. Phew. I decided to run it one more time. It locked up." msgstr "" #: ../app-crypt.rst:30 msgid "" "After reproducing the problem several times, I came to the unfortunate " "conclusion that this cloud did indeed have a problem. Even worse, my time " "was up in Kelowna and I had to return back to Calgary." msgstr "" #: ../app-crypt.rst:34 msgid "" "Where do you even begin troubleshooting something like this? An instance " "that just randomly locks up when a command is issued. Is it the image? Nope—" "it happens on all images. Is it the compute node? Nope—all nodes. Is the " "instance locked up? No! New SSH connections work just fine!" msgstr "" #: ../app-crypt.rst:39 msgid "" "We reached out for help. A networking engineer suggested it was an MTU " "issue. Great! MTU! Something to go on! What's MTU and why would it cause a " "problem?" msgstr "" #: ../app-crypt.rst:43 msgid "" "MTU is maximum transmission unit. It specifies the maximum number of bytes " "that the interface accepts for each packet. If two interfaces have two " "different MTUs, bytes might get chopped off and weird things happen—such as " "random session lockups." msgstr "" #: ../app-crypt.rst:50 msgid "" "Not all packets have a size of 1500. Running the :command:`ls` command over " "SSH might only create a single packets less than 1500 bytes. However, " "running a command with heavy output, such as :command:`ps aux` requires " "several packets of 1500 bytes." msgstr "" #: ../app-crypt.rst:55 msgid "" "OK, so where is the MTU issue coming from? Why haven't we seen this in any " "other deployment? What's new in this situation? Well, new data center, new " "uplink, new switches, new model of switches, new servers, first time using " "this model of servers… so, basically everything was new. Wonderful. We toyed " "around with raising the MTU at various areas: the switches, the NICs on the " "compute nodes, the virtual NICs in the instances, we even had the data " "center raise the MTU for our uplink interface. Some changes worked, some " "didn't. This line of troubleshooting didn't feel right, though. We shouldn't " "have to be changing the MTU in these areas." msgstr "" #: ../app-crypt.rst:66 msgid "" "As a last resort, our network admin (Alvaro) and myself sat down with four " "terminal windows, a pencil, and a piece of paper. In one window, we ran " "ping. 
#: ../app-crypt.rst:73 msgid "" "One cloud controller acted as a gateway to all compute nodes. VlanManager " "was used for the network config. This means that the cloud controller and " "all compute nodes had a different VLAN for each OpenStack project. We used " "the ``-s`` option of ``ping`` to change the packet size. We watched as " "sometimes packets would fully return, sometimes they'd only make it out and " "never back in, and sometimes the packets would stop at a random point. We " "changed ``tcpdump`` to start displaying the hex dump of the packet. We " "pinged between every combination of outside, controller, compute, and " "instance." msgstr "" #: ../app-crypt.rst:83 msgid "" "Finally, Alvaro noticed something. When a packet from the outside hits the " "cloud controller, it should not be configured with a VLAN. We verified this " "as true. When the packet went from the cloud controller to the compute node, " "it should only have a VLAN if it was destined for an instance. This was " "still true. When the ping reply was sent from the instance, it should be in " "a VLAN. True. When it came back to the cloud controller and on its way out " "to the Internet, it should no longer have a VLAN. False. Uh oh. It looked as " "though the VLAN part of the packet was not being removed." msgstr "" #: ../app-crypt.rst:93 msgid "That made no sense." msgstr "" #: ../app-crypt.rst:95 msgid "" "While bouncing this idea around in our heads, I was randomly typing commands " "on the compute node:" msgstr "" #: ../app-crypt.rst:105 msgid "\"Hey Alvaro, can you run a VLAN on top of a VLAN?\"" msgstr "" #: ../app-crypt.rst:107 msgid "\"If you did, you'd add an extra 4 bytes to the packet…\"" msgstr "" #: ../app-crypt.rst:109 msgid "Then it all made sense…" msgstr "" #: ../app-crypt.rst:116 msgid "" "In ``nova.conf``, ``vlan_interface`` specifies what interface OpenStack " "should attach all VLANs to. The correct setting should have been:" msgstr "" #: ../app-crypt.rst:123 msgid "As this would be the server's bonded NIC." msgstr "" #: ../app-crypt.rst:125 msgid "" "vlan20 is the VLAN that the data center gave us for outgoing Internet " "access. It's a correct VLAN and is also attached to bond0." msgstr "" #: ../app-crypt.rst:128 msgid "" "By mistake, I configured OpenStack to attach all tenant VLANs to vlan20 " "instead of bond0, thereby stacking one VLAN on top of another. This added an " "extra 4 bytes to each packet and caused a packet of 1504 bytes to be sent " "out, which caused problems when it arrived at an interface that only " "accepted packets of 1500 bytes." msgstr "" #: ../app-crypt.rst:134 msgid "As soon as this setting was fixed, everything worked." msgstr "" #: ../app-crypt.rst:137 msgid "\"The Issue\"" msgstr "" #: ../app-crypt.rst:139 msgid "" "At the end of August 2012, a post-secondary school in Alberta, Canada " "migrated its infrastructure to an OpenStack cloud. As luck would have it, " "within the first day or two of it running, one of their servers just " "disappeared from the network. Blip. Gone." msgstr "" #: ../app-crypt.rst:144 msgid "" "After restarting the instance, everything was back up and running. We " "reviewed the logs and saw that at some point, network communication stopped " "and then everything went idle.
We chalked this up to a random occurrence." msgstr "" #: ../app-crypt.rst:149 msgid "A few nights later, it happened again." msgstr "" #: ../app-crypt.rst:151 msgid "" "We reviewed both sets of logs. The one thing that stood out the most was " "DHCP. At the time, OpenStack, by default, set DHCP leases for one minute " "(it's now two minutes). This means that every instance contacts the cloud " "controller (DHCP server) to renew its fixed IP. For some reason, this " "instance could not renew its IP. We correlated the instance's logs with the " "logs on the cloud controller and put together a conversation:" msgstr "" #: ../app-crypt.rst:158 msgid "Instance tries to renew IP." msgstr "" #: ../app-crypt.rst:160 msgid "Cloud controller receives the renewal request and sends a response." msgstr "" #: ../app-crypt.rst:162 msgid "Instance \"ignores\" the response and re-sends the renewal request." msgstr "" #: ../app-crypt.rst:164 msgid "Cloud controller receives the second request and sends a new response." msgstr "" #: ../app-crypt.rst:167 msgid "" "Instance begins sending a renewal request to ``255.255.255.255`` since it " "hasn't heard back from the cloud controller." msgstr "" #: ../app-crypt.rst:170 msgid "" "The cloud controller receives the ``255.255.255.255`` request and sends a " "third response." msgstr "" #: ../app-crypt.rst:173 msgid "The instance finally gives up." msgstr "" #: ../app-crypt.rst:175 msgid "" "With this information in hand, we were sure that the problem had to do with " "DHCP. We thought that for some reason, the instance wasn't getting a new IP " "address, and with no IP, it shut itself off from the network." msgstr "" #: ../app-crypt.rst:179 msgid "" "A quick Google search turned up this: `DHCP lease errors in VLAN mode " "`_ which further " "supported our DHCP theory." msgstr "" #: ../app-crypt.rst:183 msgid "" "An initial idea was to just increase the lease time. If the instance only " "renewed once every week, the chances of this problem happening would be " "tremendously smaller than every minute. This didn't solve the problem, " "though. It was just covering the problem up." msgstr "" #: ../app-crypt.rst:188 msgid "" "We decided to have ``tcpdump`` run on this instance and see if we could " "catch it in action again. Sure enough, we did." msgstr "" #: ../app-crypt.rst:191 msgid "" "The ``tcpdump`` looked very, very weird. In short, it looked as though " "network communication stopped before the instance tried to renew its IP. " "Since there is so much DHCP chatter from a one-minute lease, it's very hard " "to confirm it, but even with only milliseconds difference between packets, " "if one packet arrives first, it arrived first, and if that packet reported " "network issues, then it had to have happened before DHCP." msgstr "" #: ../app-crypt.rst:199 msgid "" "Additionally, the instance in question was responsible for a very, very " "large backup job each night. While \"The Issue\" (as we were now calling it) " "didn't happen exactly when the backup happened, it was close enough (a few " "hours) that we couldn't ignore it." msgstr "" #: ../app-crypt.rst:204 msgid "" "Further days went by and we caught The Issue in action more and more. We found " "that dhclient was not running after The Issue happened. Now we were back to " "thinking it was a DHCP issue. Running ``/etc/init.d/networking restart`` " "brought everything back up and running."
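msgstr ""

# Editor's note: a minimal sketch of the checks described above, assuming an
# Ubuntu-era guest where the lease is managed by dhclient (command names are
# as on systems of that period):
#
#   # is dhclient still alive after The Issue hits?
#   $ pgrep -l dhclient
#   # if not, bounce the interfaces the old sysvinit way
#   $ sudo /etc/init.d/networking restart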
msgstr "" #: ../app-crypt.rst:209 msgid "" "Ever have one of those days where all of the sudden you get the Google " "results you were looking for? Well, that's what happened here. I was looking " "for information on dhclient and why it dies when it can't renew its lease " "and all of the sudden I found a bunch of OpenStack and dnsmasq discussions " "that were identical to the problem we were seeing!" msgstr "" #: ../app-crypt.rst:215 msgid "" "`Problem with Heavy Network IO and Dnsmasq `_." msgstr "" #: ../app-crypt.rst:218 msgid "" "`instances losing IP address while running, due to No DHCPOFFER `_." msgstr "" #: ../app-crypt.rst:221 msgid "Seriously, Google." msgstr "" #: ../app-crypt.rst:223 msgid "" "This bug report was the key to everything: `KVM images lose connectivity " "with bridged network `_." msgstr "" #: ../app-crypt.rst:227 msgid "" "It was funny to read the report. It was full of people who had some strange " "network problem but didn't quite explain it in the same way." msgstr "" #: ../app-crypt.rst:230 msgid "So it was a qemu/kvm bug." msgstr "" #: ../app-crypt.rst:232 msgid "" "At the same time of finding the bug report, a co-worker was able to " "successfully reproduce The Issue! How? He used ``iperf`` to spew a ton of " "bandwidth at an instance. Within 30 minutes, the instance just disappeared " "from the network." msgstr "" #: ../app-crypt.rst:237 msgid "" "Armed with a patched qemu and a way to reproduce, we set out to see if we've " "finally solved The Issue. After 48 hours straight of hammering the instance " "with bandwidth, we were confident. The rest is history. You can search the " "bug report for \"joe\" to find my comments and actual tests." msgstr "" #: ../app-crypt.rst:243 msgid "Disappearing Images" msgstr "" #: ../app-crypt.rst:245 msgid "" "At the end of 2012, Cybera (a nonprofit with a mandate to oversee the " "development of cyberinfrastructure in Alberta, Canada) deployed an updated " "OpenStack cloud for their `DAIR project `_. A " "few days into production, a compute node locks up. Upon rebooting the node, " "I checked to see what instances were hosted on that node so I could boot " "them on behalf of the customer. Luckily, only one instance." msgstr "" #: ../app-crypt.rst:253 msgid "" "The :command:`nova reboot` command wasn't working, so I used :command:" "`virsh`, but it immediately came back with an error saying it was unable to " "find the backing disk. In this case, the backing disk is the Glance image " "that is copied to ``/var/lib/nova/instances/_base`` when the image is used " "for the first time. Why couldn't it find it? I checked the directory and " "sure enough it was gone." msgstr "" #: ../app-crypt.rst:260 msgid "" "I reviewed the ``nova`` database and saw the instance's entry in the ``nova." "instances`` table. The image that the instance was using matched what virsh " "was reporting, so no inconsistency there." msgstr "" #: ../app-crypt.rst:264 msgid "" "I checked Glance and noticed that this image was a snapshot that the user " "created. At least that was good news—this user would have been the only user " "affected." msgstr "" #: ../app-crypt.rst:268 msgid "" "Finally, I checked StackTach and reviewed the user's events. They had " "created and deleted several snapshots—most likely experimenting. Although " "the timestamps didn't match up, my conclusion was that they launched their " "instance and then deleted the snapshot and it was somehow removed from ``/" "var/lib/nova/instances/_base``. 
None of that made sense, but it was the best " "I could come up with." msgstr "" #: ../app-crypt.rst:275 msgid "" "It turns out the reason that this compute node locked up was a hardware " "issue. We removed it from the DAIR cloud and called Dell to have it " "serviced. Dell arrived and began working. Somehow or another (or a fat " "finger), a different compute node was bumped and rebooted. Great." msgstr "" #: ../app-crypt.rst:280 msgid "" "When this node fully booted, I ran through the same scenario of seeing what " "instances were running so I could turn them back on. There were a total of " "four. Three booted and one gave an error. It was the same error as before: " "unable to find the backing disk. Seriously, what?" msgstr "" #: ../app-crypt.rst:285 msgid "" "Again, it turns out that the image was a snapshot. The three other instances " "that successfully started were standard cloud images. Was it a problem with " "snapshots? That didn't make sense." msgstr "" #: ../app-crypt.rst:289 msgid "" "A note about DAIR's architecture: ``/var/lib/nova/instances`` is a shared " "NFS mount. This means that all compute nodes have access to it, which " "includes the ``_base`` directory. Another centralized area is ``/var/log/" "rsyslog`` on the cloud controller. This directory collects all OpenStack " "logs from all compute nodes. I wondered if there were any entries for the " "file that :command:`virsh` was reporting:" msgstr "" #: ../app-crypt.rst:303 msgid "Ah-hah! So OpenStack was deleting it. But why?" msgstr "" #: ../app-crypt.rst:305 msgid "" "A feature was introduced in Essex to periodically check and see if there " "were any ``_base`` files not in use. If there were, OpenStack Compute would " "delete them. This idea sounds innocent enough and has some good qualities to " "it. But how did this feature end up turned on? It was disabled by default in " "Essex. As it should be. It was `decided to turn it on in Folsom `_. I cannot emphasize enough that:" msgstr "" #: ../app-crypt.rst:313 msgid "*Actions which delete things should not be enabled by default.*" msgstr "" #: ../app-crypt.rst:315 msgid "Disk space is cheap these days. Data recovery is not." msgstr "" #: ../app-crypt.rst:317 msgid "" "Secondly, DAIR's shared ``/var/lib/nova/instances`` directory contributed to " "the problem. Since all compute nodes have access to this directory, all " "compute nodes periodically review the \\_base directory. If there is only " "one instance using an image, and the node that the instance is on is down " "for a few minutes, it won't be able to mark the image as still in use. " "Therefore, the image seems like it's not in use and is deleted. When the " "compute node comes back online, the instance hosted on that node is unable " "to start." msgstr "" #: ../app-crypt.rst:327 msgid "The Valentine's Day Compute Node Massacre" msgstr "" #: ../app-crypt.rst:329 msgid "" "Although the title of this story is much more dramatic than the actual " "event, I don't think, or hope, that I'll have the opportunity to use " "\"Valentine's Day Massacre\" again in a title." msgstr "" #: ../app-crypt.rst:333 msgid "" "This past Valentine's Day, I received an alert that a compute node was no " "longer available in the cloud—meaning," msgstr "" #: ../app-crypt.rst:340 msgid "showed this particular node in a down state." msgstr "" #: ../app-crypt.rst:342 msgid "" "I logged into the cloud controller and was able to both ``ping`` and SSH " "into the problematic compute node, which seemed very odd.
Usually if I " "receive this type of alert, the compute node has totally locked up and is " "inaccessible." msgstr "" #: ../app-crypt.rst:347 msgid "After a few minutes of troubleshooting, I saw the following details:" msgstr "" #: ../app-crypt.rst:349 msgid "A user recently tried launching a CentOS instance on that node" msgstr "" #: ../app-crypt.rst:351 msgid "This user was the only user on the node (new node)" msgstr "" #: ../app-crypt.rst:353 msgid "The load shot up to 8 right before I received the alert" msgstr "" #: ../app-crypt.rst:355 msgid "The bonded 10gb network device (bond0) was in a DOWN state" msgstr "" #: ../app-crypt.rst:357 msgid "The 1gb NIC was still alive and active" msgstr "" #: ../app-crypt.rst:359 msgid "" "I looked at the status of both NICs in the bonded pair and saw that neither " "was able to communicate with the switch port. Seeing as how each NIC in the " "bond is connected to a separate switch, I thought that the chance of a " "switch port dying on each switch at the same time was quite improbable. I " "concluded that the 10gb dual port NIC had died and needed to be replaced. I " "created a ticket for the hardware support department at the data center " "where the node was hosted. I felt lucky that this was a new node and no one " "else was hosted on it yet." msgstr "" #: ../app-crypt.rst:368 msgid "" "An hour later I received the same alert, but for another compute node. Crap. " "OK, now there's definitely a problem going on. Just like the original node, " "I was able to log in by SSH. The bond0 NIC was DOWN but the 1gb NIC was " "active." msgstr "" #: ../app-crypt.rst:373 msgid "" "And the best part: the same user had just tried creating a CentOS instance. " "What?" msgstr "" #: ../app-crypt.rst:376 msgid "" "I was totally confused at this point, so I texted our network admin to see " "if he was available to help. He logged in to both switches and immediately " "saw the problem: the switches detected spanning tree packets coming from the " "two compute nodes and immediately shut the ports down to prevent spanning " "tree loops:" msgstr "" #: ../app-crypt.rst:391 msgid "" "He re-enabled the switch ports and the two compute nodes immediately came " "back to life." msgstr "" #: ../app-crypt.rst:394 msgid "" "Unfortunately, this story has an open ending... we're still looking into why " "the CentOS image was sending out spanning tree packets. Further, we're " "researching a proper way to prevent this from happening. It's a " "bigger issue than one might think. While it's extremely important for " "switches to prevent spanning tree loops, it's very problematic to have an " "entire compute node be cut from the network when this happens. If a compute " "node is hosting 100 instances and one of them sends a spanning tree packet, " "that instance has effectively DDoS'd the other 99 instances." msgstr "" #: ../app-crypt.rst:404 msgid "" "This is an ongoing and hot topic in networking circles—especially with the " "rise of virtualization and virtual switches." msgstr "" #: ../app-crypt.rst:408 msgid "Down the Rabbit Hole" msgstr "" #: ../app-crypt.rst:410 msgid "" "Users being able to retrieve console logs from running instances is a boon " "for support—many times they can figure out what's going on inside their " "instance and fix it without bothering you. Unfortunately, " "sometimes overzealous logging of failures can cause problems of its own."
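msgstr ""

# Editor's note: a sketch of the two views of a console log mentioned in this
# story; the instance name and UUID path component are hypothetical:
#
#   # user side: fetch the instance's console output through the Compute API
#   $ nova console-log my-broken-vm
#   # operator side: check how large the log has grown on the compute node
#   $ ls -sh /var/lib/nova/instances/<uuid>/console.log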
msgstr "" #: ../app-crypt.rst:416 msgid "" "A report came in: VMs were launching slowly, or not at all. Cue the standard " "checks—nothing on the Nagios, but there was a spike in network towards the " "current master of our RabbitMQ cluster. Investigation started, but soon the " "other parts of the queue cluster were leaking memory like a sieve. Then the " "alert came in—the master Rabbit server went down and connections failed over " "to the slave." msgstr "" #: ../app-crypt.rst:423 msgid "" "At that time, our control services were hosted by another team and we didn't " "have much debugging information to determine what was going on with the " "master, and we could not reboot it. That team noted that it failed without " "alert, but managed to reboot it. After an hour, the cluster had returned to " "its normal state and we went home for the day." msgstr "" #: ../app-crypt.rst:429 msgid "" "Continuing the diagnosis the next morning was kick started by another " "identical failure. We quickly got the message queue running again, and tried " "to work out why Rabbit was suffering from so much network traffic. Enabling " "debug logging on nova-api quickly brought understanding. A ``tail -f /var/" "log/nova/nova-api.log`` was scrolling by faster than we'd ever seen before. " "CTRL+C on that and we could plainly see the contents of a system log spewing " "failures over and over again - a system log from one of our users' instances." msgstr "" #: ../app-crypt.rst:438 msgid "" "After finding the instance ID we headed over to ``/var/lib/nova/instances`` " "to find the ``console.log``:" msgstr "" #: ../app-crypt.rst:448 msgid "" "Sure enough, the user had been periodically refreshing the console log page " "on the dashboard and the 5G file was traversing the Rabbit cluster to get to " "the dashboard." msgstr "" #: ../app-crypt.rst:452 msgid "" "We called them and asked them to stop for a while, and they were happy to " "abandon the horribly broken VM. After that, we started monitoring the size " "of console logs." msgstr "" #: ../app-crypt.rst:456 msgid "" "To this day, `the issue `__ " "doesn't have a permanent resolution, but we look forward to the discussion " "at the next summit." msgstr "" #: ../app-crypt.rst:461 msgid "Havana Haunted by the Dead" msgstr "" #: ../app-crypt.rst:463 msgid "" "Felix Lee of Academia Sinica Grid Computing Centre in Taiwan contributed " "this story." msgstr "" #: ../app-crypt.rst:466 msgid "" "I just upgraded OpenStack from Grizzly to Havana 2013.2-2 using the RDO " "repository and everything was running pretty well—except the EC2 API." msgstr "" #: ../app-crypt.rst:469 msgid "" "I noticed that the API would suffer from a heavy load and respond slowly to " "particular EC2 requests such as ``RunInstances``." msgstr "" #: ../app-crypt.rst:472 msgid "Output from ``/var/log/nova/nova-api.log`` on :term:`Havana`:" msgstr "" #: ../app-crypt.rst:483 msgid "" "This request took over two minutes to process, but executed quickly on " "another co-existing Grizzly deployment using the same hardware and system " "configuration." msgstr "" #: ../app-crypt.rst:487 msgid "Output from ``/var/log/nova/nova-api.log`` on :term:`Grizzly`:" msgstr "" #: ../app-crypt.rst:498 msgid "" "While monitoring system resources, I noticed a significant increase in " "memory consumption while the EC2 API processed this request. I thought it " "wasn't handling memory properly—possibly not releasing memory. 
If the API " "received several of these requests, memory consumption quickly grew until " "the system ran out of RAM and began using swap. Each node has 48 GB of RAM " "and the \"nova-api\" process would consume all of it within minutes. Once " "this happened, the entire system would become unusably slow until I " "restarted the nova-api service." msgstr "" #: ../app-crypt.rst:507 msgid "" "So, I found myself wondering what changed in the EC2 API on Havana that " "might cause this to happen. Was it a bug or normal behavior that I now " "needed to work around?" msgstr "" #: ../app-crypt.rst:511 msgid "" "After digging into the nova (OpenStack Compute) code, I noticed two areas in " "``api/ec2/cloud.py`` potentially impacting my system:" msgstr "" #: ../app-crypt.rst:524 msgid "" "Since my database contained many records—over 1 million metadata records and " "over 300,000 instance records in \"deleted\" or \"errored\" states—each " "search took a long time. I decided to clean up the database by first " "archiving a copy for backup and then performing some deletions using the " "MySQL client. For example, I ran the following SQL command to remove rows of " "instances deleted for over a year:" msgstr "" #: ../app-crypt.rst:535 msgid "" "Performance increased greatly after deleting the old records, and my new " "deployment continues to behave well." msgstr "" #: ../app-resources.rst:3 ../app-usecases.rst:41 ../app-usecases.rst:111 #: ../app-usecases.rst:152 ../app-usecases.rst:183 msgid "Resources" msgstr "" #: ../app-resources.rst:6 msgid "OpenStack" msgstr "" #: ../app-resources.rst:8 msgid "" "`OpenStack Installation Tutorial for openSUSE and SUSE Linux Enterprise " "Server `_" msgstr "" #: ../app-resources.rst:11 ../preface.rst:144 msgid "" "`OpenStack Installation Tutorial for Red Hat Enterprise Linux and CentOS " "`_" msgstr "" #: ../app-resources.rst:14 msgid "" "`OpenStack Installation Tutorial for Ubuntu Server `_" msgstr "" #: ../app-resources.rst:17 ../preface.rst:160 msgid "" "`OpenStack Administrator Guide `_" msgstr "" #: ../app-resources.rst:19 msgid "" "`OpenStack Cloud Computing Cookbook (Packt Publishing) `_" msgstr "" #: ../app-resources.rst:23 msgid "Cloud (General)" msgstr "" #: ../app-resources.rst:25 msgid "" "`The NIST Definition of Cloud Computing `_" msgstr "" #: ../app-resources.rst:29 msgid "Python" msgstr "" #: ../app-resources.rst:31 msgid "`Dive Into Python (Apress) `_" msgstr "" #: ../app-resources.rst:34 ../ops-backup-recovery.rst:140 msgid "Networking" msgstr "" #: ../app-resources.rst:36 msgid "" "`TCP/IP Illustrated, Volume 1: The Protocols, 2/E (Pearson) `_" msgstr "" #: ../app-resources.rst:39 msgid "" "`The TCP/IP Guide (No Starch Press) `_" msgstr "" #: ../app-resources.rst:42 msgid "" "`A tcpdump Tutorial and Primer `_" msgstr "" #: ../app-resources.rst:46 msgid "Systems Administration" msgstr "" #: ../app-resources.rst:48 msgid "" "`UNIX and Linux Systems Administration Handbook (Prentice Hall) `_" msgstr "" #: ../app-resources.rst:52 msgid "Virtualization" msgstr "" #: ../app-resources.rst:54 msgid "`The Book of Xen (No Starch Press) `_" msgstr "" #: ../app-resources.rst:58 ../ops-maintenance-configuration.rst:3 msgid "Configuration Management" msgstr "" #: ../app-resources.rst:60 msgid "`Puppet Labs Documentation `_" msgstr "" #: ../app-resources.rst:62 msgid "`Pro Puppet (Apress) `_" msgstr "" #: ../app-roadmaps.rst:3 msgid "Working with Roadmaps" msgstr "" #: ../app-roadmaps.rst:5 msgid "" "The good news: OpenStack has unprecedented transparency when
it comes to " "providing information about what's coming up. The bad news: each release " "moves very quickly. The purpose of this appendix is to highlight some of the " "useful pages to track, and take an educated guess at what is coming up in " "the next release and perhaps further afield." msgstr "" #: ../app-roadmaps.rst:11 msgid "" "OpenStack follows a six-month release cycle, typically releasing in April/" "May and October/November each year. At the start of each cycle, the " "community gathers in a single location for a design summit. At the summit, " "the features for the coming releases are discussed, prioritized, and " "planned. The figure below shows an example release cycle, with dates showing " "milestone releases, code freeze, and string freeze dates, along with an " "example of when the summit occurs. Milestones are interim releases within " "the cycle that are available as packages for download and testing. Code " "freeze puts a stop to adding new features to the release. String " "freeze puts a stop to changing any strings within the source code." msgstr "" #: ../app-roadmaps.rst:28 msgid "Information Available to You" msgstr "" #: ../app-roadmaps.rst:30 msgid "" "There are several good sources of information available that you can use to " "track your OpenStack development desires." msgstr "" #: ../app-roadmaps.rst:33 msgid "" "Release notes are maintained on the OpenStack wiki, and also shown here:" msgstr "" #: ../app-roadmaps.rst:39 msgid "Series" msgstr "" #: ../app-roadmaps.rst:40 msgid "Status" msgstr "" #: ../app-roadmaps.rst:41 msgid "Releases" msgstr "" #: ../app-roadmaps.rst:42 msgid "Date" msgstr "" #: ../app-roadmaps.rst:43 msgid "Liberty" msgstr "" #: ../app-roadmaps.rst:44 msgid "" "`Under Development `_" msgstr "" #: ../app-roadmaps.rst:46 msgid "2015.2" msgstr "" #: ../app-roadmaps.rst:47 msgid "Oct, 2015" msgstr "" #: ../app-roadmaps.rst:48 msgid "Kilo" msgstr "" #: ../app-roadmaps.rst:49 msgid "" "`Current stable release, security-supported `_" msgstr "" #: ../app-roadmaps.rst:51 msgid "`2015.1 `_" msgstr "" #: ../app-roadmaps.rst:52 msgid "Apr 30, 2015" msgstr "" #: ../app-roadmaps.rst:53 msgid "Juno" msgstr "" #: ../app-roadmaps.rst:54 msgid "" "`Security-supported `_" msgstr "" #: ../app-roadmaps.rst:56 msgid "`2014.2 `_" msgstr "" #: ../app-roadmaps.rst:57 msgid "Oct 16, 2014" msgstr "" #: ../app-roadmaps.rst:58 msgid "Icehouse" msgstr "" #: ../app-roadmaps.rst:59 msgid "" "`End-of-life `_" msgstr "" #: ../app-roadmaps.rst:61 msgid "`2014.1 `_" msgstr "" #: ../app-roadmaps.rst:62 msgid "Apr 17, 2014" msgstr "" #: ../app-roadmaps.rst:65 msgid "`2014.1.1 `_" msgstr "" #: ../app-roadmaps.rst:66 msgid "Jun 9, 2014" msgstr "" #: ../app-roadmaps.rst:69 msgid "`2014.1.2 `_" msgstr "" #: ../app-roadmaps.rst:70 msgid "Aug 8, 2014" msgstr "" #: ../app-roadmaps.rst:73 msgid "`2014.1.3 `_" msgstr "" #: ../app-roadmaps.rst:74 msgid "Oct 2, 2014" msgstr "" #: ../app-roadmaps.rst:75 msgid "Havana" msgstr "" #: ../app-roadmaps.rst:76 ../app-roadmaps.rst:100 ../app-roadmaps.rst:124 #: ../app-roadmaps.rst:144 msgid "End-of-life" msgstr "" #: ../app-roadmaps.rst:77 msgid "`2013.2 `_" msgstr "" #: ../app-roadmaps.rst:78 ../app-roadmaps.rst:102 msgid "Apr 4, 2013" msgstr "" #: ../app-roadmaps.rst:81 ../app-roadmaps.rst:97 msgid "`2013.2.1 `_" msgstr "" #: ../app-roadmaps.rst:82 ../app-roadmaps.rst:98 msgid "Dec 16, 2013" msgstr "" #: ../app-roadmaps.rst:85 msgid "`2013.2.2 `_" msgstr "" #: ../app-roadmaps.rst:86 msgid "Feb 13, 2014" msgstr "" #:
../app-roadmaps.rst:89 msgid "`2013.2.3 `_" msgstr "" #: ../app-roadmaps.rst:90 msgid "Apr 3, 2014" msgstr "" #: ../app-roadmaps.rst:93 msgid "`2013.2.4 `_" msgstr "" #: ../app-roadmaps.rst:94 msgid "Sep 22, 2014" msgstr "" #: ../app-roadmaps.rst:99 msgid "Grizzly" msgstr "" #: ../app-roadmaps.rst:101 msgid "`2013.1 `_" msgstr "" #: ../app-roadmaps.rst:105 msgid "`2013.1.1 `_" msgstr "" #: ../app-roadmaps.rst:106 msgid "May 9, 2013" msgstr "" #: ../app-roadmaps.rst:109 msgid "`2013.1.2 `_" msgstr "" #: ../app-roadmaps.rst:110 msgid "Jun 6, 2013" msgstr "" #: ../app-roadmaps.rst:113 msgid "`2013.1.3 `_" msgstr "" #: ../app-roadmaps.rst:114 msgid "Aug 8, 2013" msgstr "" #: ../app-roadmaps.rst:117 msgid "`2013.1.4 `_" msgstr "" #: ../app-roadmaps.rst:118 msgid "Oct 17, 2013" msgstr "" #: ../app-roadmaps.rst:121 msgid "`2013.1.5 `_" msgstr "" #: ../app-roadmaps.rst:122 msgid "Mar 20, 2015" msgstr "" #: ../app-roadmaps.rst:123 msgid "Folsom" msgstr "" #: ../app-roadmaps.rst:125 msgid "`2012.2 `_" msgstr "" #: ../app-roadmaps.rst:126 msgid "Sep 27, 2012" msgstr "" #: ../app-roadmaps.rst:129 msgid "`2012.2.1 `_" msgstr "" #: ../app-roadmaps.rst:130 msgid "Nov 29, 2012" msgstr "" #: ../app-roadmaps.rst:133 msgid "`2012.2.2 `_" msgstr "" #: ../app-roadmaps.rst:134 msgid "Dec 13, 2012" msgstr "" #: ../app-roadmaps.rst:137 msgid "`2012.2.3 `_" msgstr "" #: ../app-roadmaps.rst:138 msgid "Jan 31, 2013" msgstr "" #: ../app-roadmaps.rst:141 msgid "`2012.2.4 `_" msgstr "" #: ../app-roadmaps.rst:142 msgid "Apr 11, 2013" msgstr "" #: ../app-roadmaps.rst:143 msgid "Essex" msgstr "" #: ../app-roadmaps.rst:145 msgid "`2012.1 `_" msgstr "" #: ../app-roadmaps.rst:146 msgid "Apr 5, 2012" msgstr "" #: ../app-roadmaps.rst:149 msgid "`2012.1.1 `_" msgstr "" #: ../app-roadmaps.rst:150 msgid "Jun 22, 2012" msgstr "" #: ../app-roadmaps.rst:153 msgid "`2012.1.2 `_" msgstr "" #: ../app-roadmaps.rst:154 msgid "Aug 10, 2012" msgstr "" #: ../app-roadmaps.rst:157 msgid "`2012.1.3 `_" msgstr "" #: ../app-roadmaps.rst:158 msgid "Oct 12, 2012" msgstr "" #: ../app-roadmaps.rst:159 msgid "Diablo" msgstr "" #: ../app-roadmaps.rst:160 ../app-roadmaps.rst:168 ../app-roadmaps.rst:172 #: ../app-roadmaps.rst:176 msgid "Deprecated" msgstr "" #: ../app-roadmaps.rst:161 msgid "`2011.3 `_" msgstr "" #: ../app-roadmaps.rst:162 msgid "Sep 22, 2011" msgstr "" #: ../app-roadmaps.rst:165 msgid "`2011.3.1 `_" msgstr "" #: ../app-roadmaps.rst:166 msgid "Jan 19, 2012" msgstr "" #: ../app-roadmaps.rst:167 msgid "Cactus" msgstr "" #: ../app-roadmaps.rst:169 msgid "`2011.2 `_" msgstr "" #: ../app-roadmaps.rst:170 msgid "Apr 15, 2011" msgstr "" #: ../app-roadmaps.rst:171 msgid "Bexar" msgstr "" #: ../app-roadmaps.rst:173 msgid "`2011.1 `_" msgstr "" #: ../app-roadmaps.rst:174 msgid "Feb 3, 2011" msgstr "" #: ../app-roadmaps.rst:175 msgid "Austin" msgstr "" #: ../app-roadmaps.rst:177 msgid "`2010.1 `_" msgstr "" #: ../app-roadmaps.rst:178 msgid "Oct 21, 2010" msgstr "" #: ../app-roadmaps.rst:180 msgid "Here are some other resources:" msgstr "" #: ../app-roadmaps.rst:182 msgid "" "`A breakdown of current features under development, with their target " "milestone `_" msgstr "" #: ../app-roadmaps.rst:185 msgid "" "`A list of all features, including those not yet under development `_" msgstr "" #: ../app-roadmaps.rst:188 msgid "" "`Rough-draft design discussions (\"etherpads\") from the last design summit " "`_" msgstr "" #: ../app-roadmaps.rst:191 msgid "" "`List of individual code changes under review `_" msgstr "" #: ../app-roadmaps.rst:195 msgid 
"Influencing the Roadmap" msgstr "" #: ../app-roadmaps.rst:197 msgid "" "OpenStack truly welcomes your ideas (and contributions) and highly values " "feedback from real-world users of the software. By learning a little about " "the process that drives feature development, you can participate and perhaps " "get the additions you desire." msgstr "" #: ../app-roadmaps.rst:202 msgid "" "Feature requests typically start their life in Etherpad, a collaborative " "editing tool, which is used to take coordinating notes at a design summit " "session specific to the feature. This then leads to the creation of a " "blueprint on the Launchpad site for the particular project, which is used to " "describe the feature more formally. Blueprints are then approved by project " "team members, and development can begin." msgstr "" #: ../app-roadmaps.rst:209 msgid "" "Therefore, the fastest way to get your feature request up for consideration " "is to create an Etherpad with your ideas and propose a session to the design " "summit. If the design summit has already passed, you may also create a " "blueprint directly. Read this `blog post about how to work with blueprints " "`_ the perspective of Victoria Martínez, a developer intern." msgstr "" #: ../app-roadmaps.rst:217 msgid "" "The roadmap for the next release as it is developed can be seen at `Releases " "`_." msgstr "" #: ../app-roadmaps.rst:220 msgid "" "To determine the potential features going in to future releases, or to look " "at features implemented previously, take a look at the existing blueprints " "such as `OpenStack Compute (nova) Blueprints `_, `OpenStack Identity (keystone) Blueprints `_, and release notes." msgstr "" #: ../app-roadmaps.rst:228 msgid "" "Aside from the direct-to-blueprint pathway, there is another very well-" "regarded mechanism to influence the development roadmap: the user survey. " "Found at `OpenStack User Survey `_, " "it allows you to provide details of your deployments and needs, anonymously " "by default. Each cycle, the user committee analyzes the results and produces " "a report, including providing specific information to the technical " "committee and project team leads." msgstr "" #: ../app-roadmaps.rst:238 msgid "Aspects to Watch" msgstr "" #: ../app-roadmaps.rst:240 msgid "" "You want to keep an eye on the areas improving within OpenStack. The best " "way to \"watch\" roadmaps for each project is to look at the blueprints that " "are being approved for work on milestone releases. You can also learn from " "PTL webinars that follow the OpenStack summits twice a year." msgstr "" #: ../app-roadmaps.rst:247 msgid "Driver Quality Improvements" msgstr "" #: ../app-roadmaps.rst:249 msgid "" "A major quality push has occurred across drivers and plug-ins in Block " "Storage, Compute, and Networking. Particularly, developers of Compute and " "Networking drivers that require proprietary or hardware products are now " "required to provide an automated external testing system for use during the " "development process." msgstr "" #: ../app-roadmaps.rst:256 msgid "Easier Upgrades" msgstr "" #: ../app-roadmaps.rst:258 msgid "" "One of the most requested features since OpenStack began (for components " "other than Object Storage, which tends to \"just work\"): easier upgrades. " "In all recent releases internal messaging communication is versioned, " "meaning services can theoretically drop back to backward-compatible " "behavior. 
This allows you to run later versions of some components, while " "keeping older versions of others." msgstr "" #: ../app-roadmaps.rst:265 msgid "" "In addition, database migrations are now tested with the Turbo Hipster tool. " "This tool tests database migration performance on copies of real-world user " "databases." msgstr "" #: ../app-roadmaps.rst:269 msgid "" "These changes have facilitated the first proper OpenStack upgrade guide, " "found in :doc:`ops-upgrades`, and upgrades will continue to improve in the next " "release." msgstr "" #: ../app-roadmaps.rst:274 msgid "Deprecation of Nova Network" msgstr "" #: ../app-roadmaps.rst:276 msgid "" "With the introduction of the full software-defined networking stack provided " "by OpenStack Networking (neutron) in the Folsom release, development effort " "on the initial networking code that remains part of the Compute component " "has gradually lessened. While many still use ``nova-network`` in production, " "there has been a long-term plan to remove the code in favor of the more " "flexible and full-featured OpenStack Networking." msgstr "" #: ../app-roadmaps.rst:284 msgid "" "An attempt was made to deprecate ``nova-network`` during the Havana release, " "which was aborted due to the lack of equivalent functionality (such as the " "FlatDHCP multi-host high-availability mode mentioned in this guide), lack of " "a migration path between versions, insufficient testing, and the simplicity " "of ``nova-network`` for the more straightforward use cases it traditionally " "supported. Though significant effort has been made to address these " "concerns, ``nova-network`` was not deprecated in the Juno release. In " "addition, to a limited degree, patches to ``nova-network`` have again begun " "to be accepted, such as adding a per-network settings feature and SR-IOV " "support in Juno." msgstr "" #: ../app-roadmaps.rst:295 msgid "" "This leaves you with an important point of decision when designing your " "cloud. OpenStack Networking is robust enough to use with a small number of " "limitations (performance issues in some scenarios, only basic high " "availability of layer 3 systems) and provides many more features than ``nova-" "network``. However, if you do not have the more complex use cases that can " "benefit from fuller software-defined networking capabilities, or are " "uncomfortable with the new concepts introduced, ``nova-network`` may " "continue to be a viable option for the next 12 months." msgstr "" #: ../app-roadmaps.rst:304 msgid "" "Similarly, if you have an existing cloud and are looking to upgrade from " "``nova-network`` to OpenStack Networking, you should have the option to " "delay the upgrade for this period of time. However, each release of " "OpenStack brings significant new innovation, and regardless of your " "networking methodology, it is likely best to begin planning for an upgrade " "within a reasonable timeframe of each release." msgstr "" #: ../app-roadmaps.rst:311 msgid "" "As mentioned, there's currently no way to cleanly migrate from ``nova-" "network`` to neutron. We recommend that you keep a future migration in mind and " "consider what that process might involve for when a proper migration path is released." msgstr "" #: ../app-roadmaps.rst:317 msgid "Distributed Virtual Router" msgstr "" #: ../app-roadmaps.rst:319 msgid "" "One of the long-time complaints surrounding OpenStack Networking was the " "lack of high availability for the layer 3 components.
The Juno release " "introduced Distributed Virtual Router (DVR), which aims to solve this " "problem." msgstr "" #: ../app-roadmaps.rst:324 msgid "" "Early indications are that it does do this well for a base set of scenarios, " "such as using the ML2 plug-in with Open vSwitch, one flat external network " "and VXLAN tenant networks. However, it does appear that there are problems " "with the use of VLANs, IPv6, Floating IPs, high north-south traffic " "scenarios, and large numbers of compute nodes. It is expected these will " "improve significantly with the next release, but bug reports on specific " "issues are highly desirable." msgstr "" #: ../app-roadmaps.rst:333 msgid "Replacement of Open vSwitch Plug-in with Modular Layer 2" msgstr "" #: ../app-roadmaps.rst:335 msgid "" "The Modular Layer 2 plug-in is a framework allowing OpenStack Networking to " "simultaneously utilize the variety of layer-2 networking technologies found " "in complex real-world data centers. It currently works with the existing " "Open vSwitch, Linux Bridge, and Hyper-V L2 agents and is intended to replace " "and deprecate the monolithic plug-ins associated with those L2 agents." msgstr "" #: ../app-roadmaps.rst:343 msgid "New API Versions" msgstr "" #: ../app-roadmaps.rst:345 msgid "" "The third version of the Compute API was broadly discussed and worked on " "during the Havana and Icehouse release cycles. Current discussions indicate " "that the V2 API will remain for many releases, and the next iteration of the " "API will be denoted v2.1 and have similar properties to the existing v2.0, " "rather than an entirely new v3 API. This is a great time to evaluate all APIs " "and provide comments while the next generation APIs are being defined. A new " "working group was formed specifically to `improve OpenStack APIs `_ and create design guidelines, " "which you are welcome to join." msgstr "" #: ../app-roadmaps.rst:356 msgid "OpenStack on OpenStack (TripleO)" msgstr "" #: ../app-roadmaps.rst:358 msgid "" "This project continues to improve and you may consider using it for " "greenfield deployments, though according to the latest user survey results " "it has yet to see widespread uptake." msgstr "" #: ../app-roadmaps.rst:363 msgid "Data processing service for OpenStack (sahara)" msgstr "" #: ../app-roadmaps.rst:365 msgid "" "In a much-requested answer to big data problems, a dedicated team has been " "making solid progress on a Hadoop-as-a-Service project." msgstr "" #: ../app-roadmaps.rst:369 msgid "Bare metal Deployment (ironic)" msgstr "" #: ../app-roadmaps.rst:371 msgid "" "The bare-metal deployment has been widely lauded, and development continues. " "The Juno release brought the OpenStack Bare metal driver into the Compute " "project, and the aim was to deprecate the existing bare-metal driver in " "Kilo. If you are a current user of the bare metal driver, a particular " "blueprint to follow is `Deprecate the bare metal driver `_" msgstr "" #: ../app-roadmaps.rst:380 msgid "Database as a Service (trove)" msgstr "" #: ../app-roadmaps.rst:382 msgid "" "The OpenStack community has had a database-as-a-service tool in development " "for some time, and we saw the first integrated release of it in Icehouse. " "From its release it was able to deploy database servers out of the box in a " "highly available way, initially supporting only MySQL. Juno introduced " "support for Mongo (including clustering), PostgreSQL, and Couchbase, in " "addition to replication functionality for MySQL.
In Kilo, more advanced " "clustering capability was delivered, in addition to better integration with " "other OpenStack components such as Networking." msgstr "" #: ../app-roadmaps.rst:392 msgid "Message Service (zaqar)" msgstr "" #: ../app-roadmaps.rst:394 msgid "A service to provide queues of messages and notifications was released." msgstr "" #: ../app-roadmaps.rst:397 msgid "DNS service (designate)" msgstr "" #: ../app-roadmaps.rst:399 msgid "" "A long-requested service to provide the ability to manipulate DNS entries " "associated with OpenStack resources has gathered a following. The designate " "project was also released." msgstr "" #: ../app-roadmaps.rst:404 msgid "Scheduler Improvements" msgstr "" #: ../app-roadmaps.rst:406 msgid "" "Both Compute and Block Storage rely on schedulers to determine where to " "place virtual machines or volumes. In Havana, the Compute scheduler " "underwent significant improvement, while in Icehouse it was the scheduler in " "Block Storage that received a boost. Further down the track, an effort that " "started this cycle aims to create a holistic scheduler covering both. Some " "of the work that was done in Kilo can be found " "under the `Gantt project `_." msgstr "" #: ../app-roadmaps.rst:416 msgid "Block Storage Improvements" msgstr "" #: ../app-roadmaps.rst:418 msgid "" "Block Storage is considered a stable project, with wide uptake and a long " "track record of quality drivers. The team has discussed many areas of work " "at the summits, including better error reporting, automated discovery, and " "thin provisioning features." msgstr "" #: ../app-roadmaps.rst:424 msgid "Toward a Python SDK" msgstr "" #: ../app-roadmaps.rst:426 msgid "" "Though many successfully use the various python-\\*client code as an " "effective SDK for interacting with OpenStack, consistency between the " "projects and documentation availability wax and wane. To combat this, an " "`effort to improve the experience `_ has started. Cross-project development efforts in " "OpenStack have a checkered history, such as the `unified client project " "`_ having several false " "starts. However, the early signs for the SDK project are promising, and we " "expect to see results during the Juno cycle." msgstr "" #: ../app-usecases.rst:3 msgid "Use Cases" msgstr "" #: ../app-usecases.rst:5 msgid "" "This appendix contains a small selection of use cases from the community, " "with more technical detail than usual. Further examples can be found on the " "`OpenStack website `_." msgstr "" #: ../app-usecases.rst:10 msgid "NeCTAR" msgstr "" #: ../app-usecases.rst:12 msgid "" "Who uses it: researchers from the Australian publicly funded research " "sector. Use is across a wide variety of disciplines, with the purpose of " "instances ranging from running simple web servers to using hundreds of cores " "for high-throughput computing." msgstr "" #: ../app-usecases.rst:18 ../app-usecases.rst:57 ../app-usecases.rst:128 #: ../app-usecases.rst:163 msgid "Deployment" msgstr "" #: ../app-usecases.rst:20 msgid "" "Using OpenStack Compute cells, the NeCTAR Research Cloud spans eight sites " "with approximately 4,000 cores per site." msgstr "" #: ../app-usecases.rst:23 msgid "" "Each site runs a different configuration, as resource cells in an " "OpenStack Compute cells setup. Some sites span multiple data centers, some " "use off compute node storage with a shared file system, and some use on " "compute node storage with a non-shared file system.
Each site deploys the " "Image service with an Object Storage back end. Central Identity, " "dashboard, and Compute API services are used. A login to the dashboard " "triggers a SAML login with Shibboleth, which creates an account in the " "Identity service with an SQL back end. An Object Storage Global Cluster is " "used across several sites." msgstr "" #: ../app-usecases.rst:33 msgid "" "Compute nodes have 24 to 48 cores, with at least 4 GB of RAM per core and " "approximately 40 GB of ephemeral storage per core." msgstr "" #: ../app-usecases.rst:36 msgid "" "All sites are based on Ubuntu 14.04, with KVM as the hypervisor. The " "OpenStack version in use is typically the current stable version, with 5 to " "10 percent back-ported code from trunk and modifications." msgstr "" #: ../app-usecases.rst:43 msgid "" "`OpenStack.org case study `_" msgstr "" #: ../app-usecases.rst:46 msgid "`NeCTAR-RC GitHub `_" msgstr "" #: ../app-usecases.rst:48 msgid "`NeCTAR website `_" msgstr "" #: ../app-usecases.rst:51 msgid "MIT CSAIL" msgstr "" #: ../app-usecases.rst:53 msgid "" "Who uses it: researchers from the MIT Computer Science and Artificial " "Intelligence Lab." msgstr "" #: ../app-usecases.rst:59 msgid "" "The CSAIL cloud is currently 64 physical nodes with a total of 768 physical " "cores and 3,456 GB of RAM. Persistent data storage is largely outside the " "cloud on NFS, with cloud resources focused on compute resources. There are " "more than 130 users in more than 40 projects, typically running 2,000–2,500 " "vCPUs in 300 to 400 instances." msgstr "" #: ../app-usecases.rst:65 msgid "" "We initially deployed on Ubuntu 12.04 with the Essex release of OpenStack " "using FlatDHCP multi-host networking." msgstr "" #: ../app-usecases.rst:68 msgid "" "The software stack is still Ubuntu 12.04 LTS, but now with OpenStack Havana " "from the Ubuntu Cloud Archive. KVM is the hypervisor, deployed using `FAI " "`_ and Puppet for configuration management. The FAI " "and Puppet combination is used lab-wide, not only for OpenStack. There is a " "single cloud controller node, which also acts as network controller, with " "the remainder of the server hardware dedicated to compute nodes." msgstr "" #: ../app-usecases.rst:76 msgid "" "Host aggregates and instance-type extra specs are used to provide two " "different resource allocation ratios. The default resource allocation ratios " "we use are 4:1 CPU and 1.5:1 RAM. Compute-intensive workloads use instance " "types that require non-oversubscribed hosts where ``cpu_ratio`` and " "``ram_ratio`` are both set to 1.0. Since we have hyper-threading enabled on " "our compute nodes, this provides one vCPU per CPU thread, or two vCPUs per " "physical core." msgstr "" #: ../app-usecases.rst:84 msgid "" "With our upgrade to Grizzly in August 2013, we moved to OpenStack " "Networking, neutron (quantum at the time). Compute nodes have two gigabit " "network interfaces and a separate management card for IPMI management. One " "network interface is used for node-to-node communications. The other is used " "as a trunk port for OpenStack managed VLANs. The controller node uses two " "bonded 10g network interfaces for its public IP communications. Big pipes " "are used here because images are served over this port, and it is also used " "to connect to iSCSI storage, back-ending the image storage and database. The " "controller node also has a gigabit interface that is used in trunk mode for " "OpenStack managed VLAN traffic.
This port handles traffic to the dhcp-agent " "and metadata-proxy." msgstr "" #: ../app-usecases.rst:97 msgid "" "We approximate the older ``nova-network`` multi-host HA setup by using " "\"provider VLAN networks\" that connect instances directly to existing " "publicly addressable networks and use existing physical routers as their " "default gateway. This means that if our network controller goes down, " "running instances still have their network available, and no single Linux " "host becomes a traffic bottleneck. We are able to do this because we have a " "sufficient supply of IPv4 addresses to cover all of our instances and thus " "don't need NAT and don't use floating IP addresses. We provide a single " "generic public network to all projects and additional existing VLANs on a " "project-by-project basis as needed. Individual projects are also allowed to " "create their own private GRE-based networks." msgstr "" #: ../app-usecases.rst:113 msgid "`CSAIL homepage `_" msgstr "" #: ../app-usecases.rst:116 msgid "DAIR" msgstr "" #: ../app-usecases.rst:118 msgid "" "Who uses it: DAIR is an integrated virtual environment that leverages the " "CANARIE network to develop and test new information communication technology " "(ICT) and other digital technologies. It combines such digital " "infrastructure as advanced networking and cloud computing and storage to " "create an environment for developing and testing innovative ICT " "applications, protocols, and services; performing at-scale experimentation " "for deployment; and facilitating a faster time to market." msgstr "" #: ../app-usecases.rst:130 msgid "" "DAIR is hosted at two different data centers across Canada: one in Alberta " "and the other in Quebec. It consists of a cloud controller at each location, " "although one is designated the \"master\" controller that is in charge of " "central authentication and quotas. This is done through custom scripts and " "light modifications to OpenStack. DAIR is currently running Havana." msgstr "" #: ../app-usecases.rst:137 msgid "For Object Storage, each region has a swift environment." msgstr "" #: ../app-usecases.rst:139 msgid "" "A NetApp appliance is used in each region for both block storage and " "instance storage. There are future plans to move the instances off the " "NetApp appliance and onto a distributed file system such as :term:`Ceph` or " "GlusterFS." msgstr "" #: ../app-usecases.rst:144 msgid "" "VlanManager is used extensively for network management. All servers have two " "bonded 10GbE NICs that are connected to two redundant switches. DAIR is set " "up to use single-node networking where the cloud controller is the gateway " "for all instances on all compute nodes. Internal OpenStack traffic (for " "example, storage traffic) does not go through the cloud controller." msgstr "" #: ../app-usecases.rst:154 msgid "`DAIR homepage `__" msgstr "" #: ../app-usecases.rst:157 msgid "CERN" msgstr "" #: ../app-usecases.rst:159 msgid "" "Who uses it: researchers at CERN (European Organization for Nuclear " "Research) conducting high-energy physics research." msgstr "" #: ../app-usecases.rst:165 msgid "" "The environment is largely based on Scientific Linux 6, which is Red Hat " "compatible. We use KVM as our primary hypervisor, although tests are ongoing " "with Hyper-V on Windows Server 2008." msgstr "" #: ../app-usecases.rst:169 msgid "" "We use the Puppet Labs OpenStack modules to configure Compute, Image " "service, Identity, and dashboard.
Puppet is used widely for instance " "configuration, and Foreman is used as a GUI for reporting and instance " "provisioning." msgstr "" #: ../app-usecases.rst:174 msgid "" "Users and groups are managed through Active Directory and imported into the " "Identity service using LDAP. Command-line interfaces for nova and Euca2ools " "are available to do this." msgstr "" #: ../app-usecases.rst:178 msgid "" "There are three clouds currently running at CERN, totaling about 4,700 " "compute nodes, with approximately 120,000 cores. The CERN IT cloud aims to " "expand to 300,000 cores by 2015." msgstr "" #: ../app-usecases.rst:185 msgid "" "`OpenStack in Production: A tale of 3 OpenStack Clouds `_" msgstr "" #: ../app-usecases.rst:188 msgid "" "`Review of CERN Data Centre Infrastructure `_" msgstr "" #: ../app-usecases.rst:191 msgid "" "`CERN Cloud Infrastructure User Guide `_" msgstr "" #: ../appendix.rst:2 msgid "Appendix" msgstr "" #: ../index.rst:3 msgid "OpenStack Operations Guide" msgstr "" #: ../index.rst:6 msgid "Abstract" msgstr "" #: ../index.rst:8 msgid "This guide provides information about operating OpenStack clouds." msgstr "" #: ../index.rst:10 msgid "" "We recommend that you turn to the `Installation Tutorials and Guides " "`_, which contain " "a step-by-step guide on how to manually install the OpenStack packages and " "dependencies on your cloud." msgstr "" #: ../index.rst:15 msgid "" "While it is important for an operator to be familiar with the steps involved " "in deploying OpenStack, we also strongly encourage you to evaluate " "`OpenStack deployment tools `_ and configuration-management tools, such as :term:`Puppet` " "or :term:`Chef`, which can help automate this deployment process." msgstr "" #: ../index.rst:22 msgid "" "In this guide, we assume that you have successfully deployed an OpenStack " "cloud and are able to perform basic operations such as adding images, " "booting instances, and attaching volumes." msgstr "" #: ../index.rst:26 msgid "" "As your focus turns to stable operations, we recommend that you skim this " "guide to get a sense of the content. Some of this content is useful to read " "in advance so that you can put best practices into effect to simplify your " "life in the long run. Other content is more useful as a reference that you " "might turn to when an unexpected event occurs (such as a power failure), or " "to troubleshoot a particular problem." msgstr "" #: ../index.rst:34 msgid "Contents" msgstr "" #: ../ops-advanced-configuration.rst:3 msgid "Advanced Configuration" msgstr "" #: ../ops-advanced-configuration.rst:5 msgid "" "OpenStack is intended to work well across a variety of installation flavors, " "from very small private clouds to large public clouds. To achieve this, the " "developers add configuration options to their code that allow the behavior " "of the various components to be tweaked depending on your needs. " "Unfortunately, it is not possible to cover all possible deployments with the " "default configuration values." msgstr "" #: ../ops-advanced-configuration.rst:12 msgid "" "At the time of writing, OpenStack has more than 3,000 configuration options. " "You can see them documented at the `OpenStack Configuration Reference " "`_. " "This chapter cannot hope to document all of these, but we do try to " "introduce the important concepts so that you know where to go digging for " "more information." 
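One practical way to start digging is to generate a sample configuration file that lists every option your installed version actually supports, along with its default value and help text. A minimal sketch, assuming the ``oslo.config`` tooling is installed alongside nova (the available namespaces vary by release)::

   $ oslo-config-generator --namespace nova.conf > nova.conf.sample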
msgstr "" #: ../ops-advanced-configuration.rst:21 msgid "Differences Between Various Drivers" msgstr "" #: ../ops-advanced-configuration.rst:23 msgid "" "Many OpenStack projects implement a driver layer, and each of these drivers " "will implement its own configuration options. For example, in OpenStack " "Compute (nova), there are various hypervisor drivers implemented—libvirt, " "xenserver, hyper-v, and vmware, for example. Not all of these hypervisor " "drivers have the same features, and each has different tuning requirements." msgstr "" #: ../ops-advanced-configuration.rst:32 msgid "" "The currently implemented hypervisors are listed on the `OpenStack " "Configuration Reference `__. You can see a matrix of the various features " "in OpenStack Compute (nova) hypervisor drivers at the `Hypervisor support " "matrix page `_." msgstr "" #: ../ops-advanced-configuration.rst:39 msgid "" "The point we are trying to make here is that just because an option exists " "doesn't mean that option is relevant to your driver choices. Normally, the " "documentation notes which drivers the configuration applies to." msgstr "" #: ../ops-advanced-configuration.rst:45 msgid "Implementing Periodic Tasks" msgstr "" #: ../ops-advanced-configuration.rst:47 msgid "" "Another common concept across various OpenStack projects is that of periodic " "tasks. Periodic tasks are much like cron jobs on traditional Unix systems, " "but they are run inside an OpenStack process. For example, when OpenStack " "Compute (nova) needs to work out what images it can remove from its local " "cache, it runs a periodic task to do this." msgstr "" #: ../ops-advanced-configuration.rst:53 msgid "" "Periodic tasks are important to understand because of limitations in the " "threading model that OpenStack uses. OpenStack uses cooperative threading in " "Python, which means that if something long and complicated is running, it " "will block other tasks inside that process from running unless it " "voluntarily yields execution to another cooperative thread." msgstr "" #: ../ops-advanced-configuration.rst:59 msgid "" "A tangible example of this is the ``nova-compute`` process. In order to " "manage the image cache with libvirt, ``nova-compute`` has a periodic process " "that scans the contents of the image cache. Part of this scan is calculating " "a checksum for each of the images and making sure that checksum matches what " "``nova-compute`` expects it to be. However, images can be very large, and " "these checksums can take a long time to generate. At one point, before it " "was reported as a bug and fixed, ``nova-compute`` would block on this task " "and stop responding to RPC requests. This was visible to users as failure of " "operations such as spawning or deleting instances." msgstr "" #: ../ops-advanced-configuration.rst:70 msgid "" "The take away from this is if you observe an OpenStack process that appears " "to \"stop\" for a while and then continue to process normally, you should " "check that periodic tasks aren't the problem. One way to do this is to " "disable the periodic tasks by setting their interval to zero. Additionally, " "you can configure how often these periodic tasks run—in some cases, it might " "make sense to run them at a different frequency from the default." msgstr "" #: ../ops-advanced-configuration.rst:78 msgid "" "The frequency is defined separately for each periodic task. 
Therefore, to " "disable every periodic task in OpenStack Compute (nova), you would need to " "set a number of configuration options to zero. The current list of " "configuration options you would need to set to zero is:" msgstr "" #: ../ops-advanced-configuration.rst:83 msgid "``bandwidth_poll_interval``" msgstr "" #: ../ops-advanced-configuration.rst:84 msgid "``sync_power_state_interval``" msgstr "" #: ../ops-advanced-configuration.rst:85 msgid "``heal_instance_info_cache_interval``" msgstr "" #: ../ops-advanced-configuration.rst:86 msgid "``host_state_interval``" msgstr "" #: ../ops-advanced-configuration.rst:87 msgid "``image_cache_manager_interval``" msgstr "" #: ../ops-advanced-configuration.rst:88 msgid "``reclaim_instance_interval``" msgstr "" #: ../ops-advanced-configuration.rst:89 msgid "``volume_usage_poll_interval``" msgstr "" #: ../ops-advanced-configuration.rst:90 msgid "``shelved_poll_interval``" msgstr "" #: ../ops-advanced-configuration.rst:91 msgid "``shelved_offload_time``" msgstr "" #: ../ops-advanced-configuration.rst:92 msgid "``instance_delete_interval``" msgstr "" #: ../ops-advanced-configuration.rst:94 msgid "" "To set a configuration option to zero, include a line such as " "``image_cache_manager_interval=0`` in your ``nova.conf`` file." msgstr "" #: ../ops-advanced-configuration.rst:97 msgid "" "This list will change between releases, so please refer to your " "configuration guide for up-to-date information." msgstr "" #: ../ops-advanced-configuration.rst:101 msgid "Specific Configuration Topics" msgstr "" #: ../ops-advanced-configuration.rst:103 msgid "" "This section covers specific examples of configuration options you might " "consider tuning. It is by no means an exhaustive list." msgstr "" #: ../ops-advanced-configuration.rst:107 msgid "Security Configuration for Compute, Networking, and Storage" msgstr "" #: ../ops-advanced-configuration.rst:109 msgid "" "The `OpenStack Security Guide `_ " "provides a deep dive into securing an OpenStack cloud, including SSL/TLS, " "key management, PKI and certificate management, data transport and privacy " "concerns, and compliance." msgstr "" #: ../ops-advanced-configuration.rst:115 msgid "High Availability" msgstr "" #: ../ops-advanced-configuration.rst:117 msgid "" "The `OpenStack High Availability Guide `_ offers suggestions for elimination of a single point of " "failure that could cause system downtime. While it is not a completely " "prescriptive document, it offers methods and techniques for avoiding " "downtime and data loss." msgstr "" #: ../ops-advanced-configuration.rst:125 msgid "Enabling IPv6 Support" msgstr "" #: ../ops-advanced-configuration.rst:127 msgid "" "You can follow the progress being made on IPv6 support by watching the " "`neutron IPv6 Subteam at work `_." msgstr "" #: ../ops-advanced-configuration.rst:131 msgid "" "By modifying your configuration setup, you can set up IPv6 when using ``nova-" "network`` for networking, and a tested setup is documented for FlatDHCP and " "a multi-host configuration. The key is to make ``nova-network`` think a " "``radvd`` command ran successfully. The entire configuration is detailed in " "a Cybera blog post, `“An IPv6 enabled cloud” `_." msgstr "" #: ../ops-advanced-configuration.rst:139 msgid "Geographical Considerations for Object Storage" msgstr "" #: ../ops-advanced-configuration.rst:141 msgid "" "Support for global clustering of object storage servers is available for all " "supported releases. 
You would implement these global clusters to ensure " "replication across geographic areas in case of a natural disaster and also " "to ensure that users can write or access their objects more quickly based on " "the closest data center. You configure a default region with one zone for " "each cluster, but be sure your network (WAN) can handle the additional " "request and response load between zones as you add more zones and build a " "ring that handles more zones. Refer to `Geographically Distributed Clusters " "`_ in the documentation for additional information." msgstr "" #: ../ops-backup-recovery.rst:3 msgid "Backup and Recovery" msgstr "" #: ../ops-backup-recovery.rst:5 msgid "" "Standard backup best practices apply when creating your OpenStack backup " "policy. For example, how often to back up your data is closely related to " "how quickly you need to recover from data loss." msgstr "" #: ../ops-backup-recovery.rst:11 msgid "" "If you cannot have any data loss at all, you should also focus on a highly " "available deployment. The `OpenStack High Availability Guide `_ offers suggestions for elimination of a " "single point of failure that could cause system downtime. While it is not a " "completely prescriptive document, it offers methods and techniques for " "avoiding downtime and data loss." msgstr "" #: ../ops-backup-recovery.rst:19 msgid "Other backup considerations include:" msgstr "" #: ../ops-backup-recovery.rst:21 msgid "How many backups to keep?" msgstr "" #: ../ops-backup-recovery.rst:22 msgid "Should backups be kept off-site?" msgstr "" #: ../ops-backup-recovery.rst:23 msgid "How often should backups be tested?" msgstr "" #: ../ops-backup-recovery.rst:25 msgid "" "Just as important as a backup policy is a recovery policy (or at least " "recovery testing)." msgstr "" #: ../ops-backup-recovery.rst:29 msgid "What to Back Up" msgstr "" #: ../ops-backup-recovery.rst:31 msgid "" "While OpenStack is composed of many components and moving parts, backing up " "the critical data is quite simple." msgstr "" #: ../ops-backup-recovery.rst:34 msgid "" "This chapter describes only how to back up configuration files and databases " "that the various OpenStack components need to run. This chapter does not " "describe how to back up objects inside Object Storage or data contained " "inside Block Storage. Generally these areas are left for users to back up on " "their own." msgstr "" #: ../ops-backup-recovery.rst:41 msgid "Database Backups" msgstr "" #: ../ops-backup-recovery.rst:43 msgid "" "The example OpenStack architecture designates the cloud controller as the " "MySQL server. This MySQL server hosts the databases for nova, glance, " "cinder, and keystone. With all of these databases in one place, it's very " "easy to create a database backup:" msgstr "" #: ../ops-backup-recovery.rst:52 msgid "If you want to back up only a single database, you can instead run:" msgstr "" #: ../ops-backup-recovery.rst:58 msgid "where ``nova`` is the database you want to back up." msgstr "" #: ../ops-backup-recovery.rst:60 msgid "" "You can easily automate this process by creating a cron job that runs the " "following script once per day:" msgstr "" #: ../ops-backup-recovery.rst:73 msgid "" "This script dumps the entire MySQL database and deletes any backups older " "than seven days." 
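For reference, a minimal sketch of such a script, assuming a hypothetical backup directory of ``/var/lib/backups/mysql`` and MySQL credentials supplied via ``/root/.my.cnf``::

   #!/bin/bash
   # Dump all databases to a date-stamped, compressed file
   backup_dir="/var/lib/backups/mysql"
   filename="${backup_dir}/mysql-$(hostname)-$(date +%Y%m%d).sql.gz"
   mysqldump --opt --all-databases | gzip > "${filename}"
   # Remove backups older than seven days
   find "${backup_dir}" -mtime +7 -type f -delete

Adjust the retention window and the dump options to match your own backup policy.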
msgstr "" #: ../ops-backup-recovery.rst:77 msgid "File System Backups" msgstr "" #: ../ops-backup-recovery.rst:79 msgid "" "This section discusses which files and directories should be backed up " "regularly, organized by service." msgstr "" #: ../ops-backup-recovery.rst:83 msgid "Compute" msgstr "" #: ../ops-backup-recovery.rst:85 msgid "" "The ``/etc/nova`` directory on both the cloud controller and compute nodes " "should be regularly backed up." msgstr "" #: ../ops-backup-recovery.rst:88 msgid "" "``/var/log/nova`` does not need to be backed up if you have all logs going " "to a central area. It is highly recommended to use a central logging server " "or back up the log directory." msgstr "" #: ../ops-backup-recovery.rst:92 msgid "" "``/var/lib/nova`` is another important directory to back up. The exception " "to this is the ``/var/lib/nova/instances`` subdirectory on compute nodes. " "This subdirectory contains the KVM images of running instances. You would " "want to back up this directory only if you need to maintain backup copies of " "all instances. Under most circumstances, you do not need to do this, but " "this can vary from cloud to cloud and your service levels. Also be aware " "that making a backup of a live KVM instance can cause that instance to not " "boot properly if it is ever restored from a backup." msgstr "" #: ../ops-backup-recovery.rst:103 msgid "Image Catalog and Delivery" msgstr "" #: ../ops-backup-recovery.rst:105 msgid "" "``/etc/glance`` and ``/var/log/glance`` follow the same rules as their nova " "counterparts." msgstr "" #: ../ops-backup-recovery.rst:108 msgid "" "``/var/lib/glance`` should also be backed up. Take special notice of ``/var/" "lib/glance/images``. If you are using a file-based back end of glance, ``/" "var/lib/glance/images`` is where the images are stored and care should be " "taken." msgstr "" #: ../ops-backup-recovery.rst:113 msgid "" "There are two ways to ensure stability with this directory. The first is to " "make sure this directory is run on a RAID array. If a disk fails, the " "directory is available. The second way is to use a tool such as rsync to " "replicate the images to another server:" msgstr "" #: ../ops-backup-recovery.rst:123 msgid "Identity" msgstr "" #: ../ops-backup-recovery.rst:125 msgid "" "``/etc/keystone`` and ``/var/log/keystone`` follow the same rules as other " "components." msgstr "" #: ../ops-backup-recovery.rst:128 msgid "" "``/var/lib/keystone``, although it should not contain any data being used, " "can also be backed up just in case." msgstr "" #: ../ops-backup-recovery.rst:132 ../ops-user-facing-operations.rst:748 msgid "Block Storage" msgstr "" #: ../ops-backup-recovery.rst:134 msgid "" "``/etc/cinder`` and ``/var/log/cinder`` follow the same rules as other " "components." msgstr "" #: ../ops-backup-recovery.rst:137 msgid "``/var/lib/cinder`` should also be backed up." msgstr "" #: ../ops-backup-recovery.rst:142 msgid "" "``/etc/neutron`` and ``/var/log/neutron`` follow the same rules as other " "components." msgstr "" #: ../ops-backup-recovery.rst:145 msgid "``/var/lib/neutron`` should also be backed up." msgstr "" #: ../ops-backup-recovery.rst:148 msgid "Object Storage" msgstr "" #: ../ops-backup-recovery.rst:150 msgid "" "``/etc/swift`` is very important to have backed up. This directory contains " "the swift configuration files as well as the ring files and ring :term:" "`builder files `, which if lost, render the data on your " "cluster inaccessible. 
A best practice is to copy the builder files to all " "storage nodes along with the ring files. That way, multiple backup copies " "are spread throughout your storage cluster." msgstr "" #: ../ops-backup-recovery.rst:158 msgid "Telemetry" msgstr "" #: ../ops-backup-recovery.rst:160 msgid "" "Back up the ``/etc/ceilometer`` directory containing Telemetry configuration " "files." msgstr "" #: ../ops-backup-recovery.rst:164 msgid "Orchestration" msgstr "" #: ../ops-backup-recovery.rst:166 msgid "" "Back up HOT template ``yaml`` files, and the ``/etc/heat/`` directory " "containing Orchestration configuration files." msgstr "" #: ../ops-backup-recovery.rst:170 msgid "Recovering Backups" msgstr "" #: ../ops-backup-recovery.rst:172 msgid "" "Recovering backups is a fairly simple process. To begin, first ensure that " "the service you are recovering is not running. For example, to do a full " "recovery of ``nova`` on the cloud controller, first stop all ``nova`` " "services:" msgstr "" #: ../ops-backup-recovery.rst:185 msgid "Now you can import a previously backed-up database:" msgstr "" #: ../ops-backup-recovery.rst:191 msgid "You can also restore backed-up nova directories:" msgstr "" #: ../ops-backup-recovery.rst:198 msgid "Once the files are restored, start everything back up:" msgstr "" #: ../ops-backup-recovery.rst:209 msgid "" "Other services follow the same process, with their respective directories " "and databases." msgstr "" #: ../ops-backup-recovery.rst:213 ../ops-lay-of-the-land.rst:596 #: ../ops-logging-monitoring-summary.rst:3 ../ops-projects-users-summary.rst:3 msgid "Summary" msgstr "" #: ../ops-backup-recovery.rst:215 msgid "" "Backup and subsequent recovery is one of the first tasks system " "administrators learn. However, each system has different items that need " "attention. By taking care of your database, image service, and appropriate " "file system locations, you can be assured that you can handle any event " "requiring recovery." msgstr "" #: ../ops-capacity-planning-scaling.rst:5 msgid "Capacity planning and scaling" msgstr "" #: ../ops-capacity-planning-scaling.rst:7 msgid "" "Cloud-based applications typically request more discrete hardware " "(horizontal scaling) as opposed to traditional applications, which require " "larger hardware to scale (vertical scaling)." msgstr "" #: ../ops-capacity-planning-scaling.rst:11 msgid "" "OpenStack is designed to be horizontally scalable. Rather than switching to " "larger servers, you procure more servers and simply install identically " "configured services. Ideally, you scale out and load balance among groups of " "functionally identical services (for example, compute nodes or ``nova-api`` " "nodes) that communicate on a message bus." msgstr "" #: ../ops-capacity-planning-scaling.rst:18 msgid "Determining cloud scalability" msgstr "" #: ../ops-capacity-planning-scaling.rst:20 msgid "" "Determining the scalability of your cloud and how to improve it requires " "balancing many variables. No one solution meets everyone's scalability " "goals. However, it is helpful to track a number of metrics. You can define " "virtual hardware templates called \"flavors\" in OpenStack, which will " "impact your cloud scaling decisions. These templates define sizes for memory " "in RAM, root disk size, amount of ephemeral data disk space available, and " "the number of CPU cores." msgstr "" #: ../ops-capacity-planning-scaling.rst:28 msgid "" "The default OpenStack flavors are shown in :ref:`table_default_flavors`." 
msgstr "" #: ../ops-capacity-planning-scaling.rst:32 msgid "Table. OpenStack default flavors" msgstr "" #: ../ops-capacity-planning-scaling.rst:36 #: ../ops-user-facing-operations.rst:430 #: ../ops-user-facing-operations.rst:2046 msgid "Name" msgstr "" #: ../ops-capacity-planning-scaling.rst:37 msgid "Virtual cores" msgstr "" #: ../ops-capacity-planning-scaling.rst:38 msgid "Memory" msgstr "" #: ../ops-capacity-planning-scaling.rst:39 #: ../ops-user-facing-operations.rst:434 msgid "Disk" msgstr "" #: ../ops-capacity-planning-scaling.rst:40 #: ../ops-user-facing-operations.rst:436 msgid "Ephemeral" msgstr "" #: ../ops-capacity-planning-scaling.rst:41 msgid "m1.tiny" msgstr "" #: ../ops-capacity-planning-scaling.rst:42 #: ../ops-capacity-planning-scaling.rst:47 ../ops-maintenance-complete.rst:17 msgid "1" msgstr "" #: ../ops-capacity-planning-scaling.rst:43 msgid "512 MB" msgstr "" #: ../ops-capacity-planning-scaling.rst:44 msgid "1 GB" msgstr "" #: ../ops-capacity-planning-scaling.rst:45 msgid "0 GB" msgstr "" #: ../ops-capacity-planning-scaling.rst:46 msgid "m1.small" msgstr "" #: ../ops-capacity-planning-scaling.rst:48 msgid "2 GB" msgstr "" #: ../ops-capacity-planning-scaling.rst:49 #: ../ops-capacity-planning-scaling.rst:54 #: ../ops-capacity-planning-scaling.rst:59 #: ../ops-capacity-planning-scaling.rst:64 msgid "10 GB" msgstr "" #: ../ops-capacity-planning-scaling.rst:50 msgid "20 GB" msgstr "" #: ../ops-capacity-planning-scaling.rst:51 msgid "m1.medium" msgstr "" #: ../ops-capacity-planning-scaling.rst:52 ../ops-maintenance-complete.rst:19 msgid "2" msgstr "" #: ../ops-capacity-planning-scaling.rst:53 msgid "4 GB" msgstr "" #: ../ops-capacity-planning-scaling.rst:55 msgid "40 GB" msgstr "" #: ../ops-capacity-planning-scaling.rst:56 msgid "m1.large" msgstr "" #: ../ops-capacity-planning-scaling.rst:57 ../ops-maintenance-complete.rst:23 msgid "4" msgstr "" #: ../ops-capacity-planning-scaling.rst:58 msgid "8 GB" msgstr "" #: ../ops-capacity-planning-scaling.rst:60 msgid "80 GB" msgstr "" #: ../ops-capacity-planning-scaling.rst:61 msgid "m1.xlarge" msgstr "" #: ../ops-capacity-planning-scaling.rst:62 msgid "8" msgstr "" #: ../ops-capacity-planning-scaling.rst:63 msgid "16 GB" msgstr "" #: ../ops-capacity-planning-scaling.rst:65 msgid "160 GB" msgstr "" #: ../ops-capacity-planning-scaling.rst:67 msgid "" "The starting point is the core count of your cloud. By applying some ratios, " "you can gather information about:" msgstr "" #: ../ops-capacity-planning-scaling.rst:70 msgid "" "The number of virtual machines (VMs) you expect to run, ``((overcommit " "fraction × cores) / virtual cores per instance)``" msgstr "" #: ../ops-capacity-planning-scaling.rst:73 msgid "" "How much storage is required ``(flavor disk size × number of instances)``" msgstr "" #: ../ops-capacity-planning-scaling.rst:75 msgid "" "You can use these ratios to determine how much additional infrastructure you " "need to support your cloud." msgstr "" #: ../ops-capacity-planning-scaling.rst:78 msgid "" "Here is an example using the ratios for gathering scalability information " "for the number of VMs expected as well as the storage needed. The following " "numbers support (200 / 2) × 16 = 1600 VM instances and require 80 TB of " "storage for ``/var/lib/nova/instances``:" msgstr "" #: ../ops-capacity-planning-scaling.rst:83 msgid "200 physical cores." msgstr "" #: ../ops-capacity-planning-scaling.rst:85 msgid "" "Most instances are size m1.medium (two virtual cores, 50 GB of storage)." 
msgstr "" #: ../ops-capacity-planning-scaling.rst:88 msgid "" "Default CPU overcommit ratio (``cpu_allocation_ratio`` in the ``nova.conf`` " "file) of 16:1." msgstr "" #: ../ops-capacity-planning-scaling.rst:92 msgid "" "Regardless of the overcommit ratio, an instance can not be placed on any " "physical node with fewer raw (pre-overcommit) resources than instance flavor " "requires." msgstr "" #: ../ops-capacity-planning-scaling.rst:96 msgid "" "However, you need more than the core count alone to estimate the load that " "the API services, database servers, and queue servers are likely to " "encounter. You must also consider the usage patterns of your cloud." msgstr "" #: ../ops-capacity-planning-scaling.rst:100 msgid "" "As a specific example, compare a cloud that supports a managed web-hosting " "platform with one running integration tests for a development project that " "creates one VM per code commit. In the former, the heavy work of creating a " "VM happens only every few months, whereas the latter puts constant heavy " "load on the cloud controller. You must consider your average VM lifetime, as " "a larger number generally means less load on the cloud controller." msgstr "" #: ../ops-capacity-planning-scaling.rst:108 msgid "" "Aside from the creation and termination of VMs, you must consider the impact " "of users accessing the service particularly on ``nova-api`` and its " "associated database. Listing instances garners a great deal of information " "and, given the frequency with which users run this operation, a cloud with a " "large number of users can increase the load significantly. This can occur " "even without their knowledge. For example, leaving the OpenStack dashboard " "instances tab open in the browser refreshes the list of VMs every 30 seconds." msgstr "" #: ../ops-capacity-planning-scaling.rst:117 msgid "" "After you consider these factors, you can determine how many cloud " "controller cores you require. A typical eight core, 8 GB of RAM server is " "sufficient for up to a rack of compute nodes — given the above caveats." msgstr "" #: ../ops-capacity-planning-scaling.rst:122 msgid "" "You must also consider key hardware specifications for the performance of " "user VMs, as well as budget and performance needs, including storage " "performance (spindles/core), memory availability (RAM/core), network " "bandwidth hardware specifications and (Gbps/core), and overall CPU " "performance (CPU/core)." msgstr "" #: ../ops-capacity-planning-scaling.rst:130 msgid "" "For a discussion of metric tracking, including how to extract metrics from " "your cloud, see the `OpenStack Operations Guide `_." msgstr "" #: ../ops-capacity-planning-scaling.rst:135 msgid "Adding cloud controller nodes" msgstr "" #: ../ops-capacity-planning-scaling.rst:137 msgid "" "You can facilitate the horizontal expansion of your cloud by adding nodes. " "Adding compute nodes is straightforward since they are easily picked up by " "the existing installation. However, you must consider some important points " "when you design your cluster to be highly available." msgstr "" #: ../ops-capacity-planning-scaling.rst:142 msgid "" "A cloud controller node runs several different services. You can install " "services that communicate only using the message queue internally— ``nova-" "scheduler`` and ``nova-console`` on a new server for expansion. However, " "other integral parts require more care." 
msgstr "" #: ../ops-capacity-planning-scaling.rst:147 msgid "" "You should load balance user-facing services such as dashboard, ``nova-" "api``, or the Object Storage proxy. Use any standard HTTP load-balancing " "method (DNS round robin, hardware load balancer, or software such as Pound " "or HAProxy). One caveat with dashboard is the VNC proxy, which uses the " "WebSocket protocol— something that an L7 load balancer might struggle with. " "See also `Horizon session storage `_." msgstr "" #: ../ops-capacity-planning-scaling.rst:155 msgid "" "You can configure some services, such as ``nova-api`` and ``glance-api``, to " "use multiple processes by changing a flag in their configuration file " "allowing them to share work between multiple cores on the one machine." msgstr "" #: ../ops-capacity-planning-scaling.rst:162 msgid "" "Several options are available for MySQL load balancing, and the supported " "AMQP brokers have built-in clustering support. Information on how to " "configure these and many of the other services can be found in the " "`Operations Guide `_." msgstr "" #: ../ops-capacity-planning-scaling.rst:169 msgid "Segregating your cloud" msgstr "" #: ../ops-capacity-planning-scaling.rst:171 msgid "" "Segregating your cloud is needed when users require different regions for " "legal considerations for data storage, redundancy across earthquake fault " "lines, or for low-latency API calls. It can be segregated by *cells*, " "*regions*, *availability zones*, or *host aggregates*." msgstr "" #: ../ops-capacity-planning-scaling.rst:176 msgid "" "Each method provides different functionality and can be best divided into " "two groups:" msgstr "" #: ../ops-capacity-planning-scaling.rst:179 msgid "" "Cells and regions, which segregate an entire cloud and result in running " "separate Compute deployments." msgstr "" #: ../ops-capacity-planning-scaling.rst:182 msgid "" ":term:`Availability zones ` and host aggregates, which " "merely divide a single Compute deployment." msgstr "" #: ../ops-capacity-planning-scaling.rst:185 msgid "" ":ref:`table_segregation_methods` provides a comparison view of each " "segregation method currently provided by OpenStack Compute." msgstr "" #: ../ops-capacity-planning-scaling.rst:190 msgid "Table. OpenStack segregation methods" msgstr "" #: ../ops-capacity-planning-scaling.rst:195 msgid "Cells" msgstr "" #: ../ops-capacity-planning-scaling.rst:196 msgid "Regions" msgstr "" #: ../ops-capacity-planning-scaling.rst:197 msgid "Availability zones" msgstr "" #: ../ops-capacity-planning-scaling.rst:198 msgid "Host aggregates" msgstr "" #: ../ops-capacity-planning-scaling.rst:199 msgid "**Use**" msgstr "" #: ../ops-capacity-planning-scaling.rst:200 msgid "" "A single :term:`API endpoint` for compute, or you require a second level of " "scheduling." msgstr "" #: ../ops-capacity-planning-scaling.rst:202 msgid "" "Discrete regions with separate API endpoints and no coordination between " "regions." msgstr "" #: ../ops-capacity-planning-scaling.rst:204 msgid "" "Logical separation within your nova deployment for physical isolation or " "redundancy." msgstr "" #: ../ops-capacity-planning-scaling.rst:206 msgid "To schedule a group of hosts with common features." msgstr "" #: ../ops-capacity-planning-scaling.rst:207 msgid "**Example**" msgstr "" #: ../ops-capacity-planning-scaling.rst:208 msgid "" "A cloud with multiple sites where you can schedule VMs \"anywhere\" or on a " "particular site." 
msgstr "" #: ../ops-capacity-planning-scaling.rst:210 msgid "" "A cloud with multiple sites, where you schedule VMs to a particular site and " "you want a shared infrastructure." msgstr "" #: ../ops-capacity-planning-scaling.rst:212 msgid "A single-site cloud with equipment fed by separate power supplies." msgstr "" #: ../ops-capacity-planning-scaling.rst:213 msgid "Scheduling to hosts with trusted hardware support." msgstr "" #: ../ops-capacity-planning-scaling.rst:214 msgid "**Overhead**" msgstr "" #: ../ops-capacity-planning-scaling.rst:215 msgid "" "Considered experimental. A new service, nova-cells. Each cell has a full " "nova installation except nova-api." msgstr "" #: ../ops-capacity-planning-scaling.rst:217 msgid "" "A different API endpoint for every region. Each region has a full nova " "installation." msgstr "" #: ../ops-capacity-planning-scaling.rst:219 #: ../ops-capacity-planning-scaling.rst:220 msgid "Configuration changes to ``nova.conf``." msgstr "" #: ../ops-capacity-planning-scaling.rst:221 msgid "**Shared services**" msgstr "" #: ../ops-capacity-planning-scaling.rst:222 msgid "Keystone, ``nova-api``" msgstr "" #: ../ops-capacity-planning-scaling.rst:223 msgid "Keystone" msgstr "" #: ../ops-capacity-planning-scaling.rst:224 #: ../ops-capacity-planning-scaling.rst:225 msgid "Keystone, All nova services" msgstr "" #: ../ops-capacity-planning-scaling.rst:228 msgid "Cells and regions" msgstr "" #: ../ops-capacity-planning-scaling.rst:230 msgid "" "OpenStack Compute cells are designed to allow running the cloud in a " "distributed fashion without having to use more complicated technologies, or " "be invasive to existing nova installations. Hosts in a cloud are partitioned " "into groups called *cells*. Cells are configured in a tree. The top-level " "cell (\"API cell\") has a host that runs the ``nova-api`` service, but no " "``nova-compute`` services. Each child cell runs all of the other typical " "``nova-*`` services found in a regular installation, except for the ``nova-" "api`` service. Each cell has its own message queue and database service and " "also runs ``nova-cells``, which manages the communication between the API " "cell and child cells." msgstr "" #: ../ops-capacity-planning-scaling.rst:241 msgid "" "This allows for a single API server being used to control access to multiple " "cloud installations. Introducing a second level of scheduling (the cell " "selection), in addition to the regular ``nova-scheduler`` selection of " "hosts, provides greater flexibility to control where virtual machines are " "run." msgstr "" #: ../ops-capacity-planning-scaling.rst:247 msgid "" "Unlike having a single API endpoint, regions have a separate API endpoint " "per installation, allowing for a more discrete separation. Users wanting to " "run instances across sites have to explicitly select a region. However, the " "additional complexity of a running a new service is not required." msgstr "" #: ../ops-capacity-planning-scaling.rst:253 msgid "" "The OpenStack dashboard (horizon) can be configured to use multiple regions. " "This can be configured through the ``AVAILABLE_REGIONS`` parameter." msgstr "" #: ../ops-capacity-planning-scaling.rst:258 msgid "Availability zones and host aggregates" msgstr "" #: ../ops-capacity-planning-scaling.rst:260 msgid "" "You can use availability zones, host aggregates, or both to partition a nova " "deployment. Both methods are configured and implemented in a similar way." 
msgstr "" #: ../ops-capacity-planning-scaling.rst:265 msgid "Availability zone" msgstr "" #: ../ops-capacity-planning-scaling.rst:267 msgid "" "This enables you to arrange OpenStack compute hosts into logical groups and " "provides a form of physical isolation and redundancy from other availability " "zones, such as by using a separate power supply or network equipment." msgstr "" #: ../ops-capacity-planning-scaling.rst:272 msgid "" "You define the availability zone in which a specified compute host resides " "locally on each server. An availability zone is commonly used to identify a " "set of servers that have a common attribute. For instance, if some of the " "racks in your data center are on a separate power source, you can put " "servers in those racks in their own availability zone. Availability zones " "can also help separate different classes of hardware." msgstr "" #: ../ops-capacity-planning-scaling.rst:279 msgid "" "When users provision resources, they can specify from which availability " "zone they want their instance to be built. This allows cloud consumers to " "ensure that their application resources are spread across disparate machines " "to achieve high availability in the event of hardware failure." msgstr "" #: ../ops-capacity-planning-scaling.rst:285 msgid "Host aggregates zone" msgstr "" #: ../ops-capacity-planning-scaling.rst:287 msgid "" "This enables you to partition OpenStack Compute deployments into logical " "groups for load balancing and instance distribution. You can use host " "aggregates to further partition an availability zone. For example, you might " "use host aggregates to partition an availability zone into groups of hosts " "that either share common resources, such as storage and network, or have a " "special property, such as trusted computing hardware." msgstr "" #: ../ops-capacity-planning-scaling.rst:295 msgid "" "A common use of host aggregates is to provide information for use with the " "``nova-scheduler``. For example, you might use a host aggregate to group a " "set of hosts that share specific flavors or images." msgstr "" #: ../ops-capacity-planning-scaling.rst:299 msgid "" "The general case for this is setting key-value pairs in the aggregate " "metadata and matching key-value pairs in flavor's ``extra_specs`` metadata. " "The ``AggregateInstanceExtraSpecsFilter`` in the filter scheduler will " "enforce that instances be scheduled only on hosts in aggregates that define " "the same key to the same value." msgstr "" #: ../ops-capacity-planning-scaling.rst:305 msgid "" "An advanced use of this general concept allows different flavor types to run " "with different CPU and RAM allocation ratios so that high-intensity " "computing loads and low-intensity development and testing systems can share " "the same cloud without either starving the high-use systems or wasting " "resources on low-utilization systems. This works by setting ``metadata`` in " "your host aggregates and matching ``extra_specs`` in your flavor types." msgstr "" #: ../ops-capacity-planning-scaling.rst:313 msgid "" "The first step is setting the aggregate metadata keys " "``cpu_allocation_ratio`` and ``ram_allocation_ratio`` to a floating-point " "value. The filter schedulers ``AggregateCoreFilter`` and " "``AggregateRamFilter`` will use those values rather than the global defaults " "in ``nova.conf`` when scheduling to hosts in the aggregate. 
Be cautious when " "using this feature, since each host can be in multiple aggregates, but " "should have only one allocation ratio for each resource. It is up to you to " "avoid putting a host in multiple aggregates that define different values for " "the same resource." msgstr "" #: ../ops-capacity-planning-scaling.rst:323 msgid "" "This is the first half of the equation. To get flavor types that are " "guaranteed a particular ratio, you must set the ``extra_specs`` in the " "flavor type to the key-value pair you want to match in the aggregate. For " "example, if you define the ``extra_specs`` key ``cpu_allocation_ratio`` as " "\"1.0\", then instances of that type will run in aggregates only where the " "metadata key ``cpu_allocation_ratio`` is also defined as \"1.0.\" In " "practice, it is better to define an additional key-value pair in the " "aggregate metadata to match on rather than match directly on " "``cpu_allocation_ratio`` or ``ram_allocation_ratio``. This allows better " "abstraction. For example, by defining a key ``overcommit`` and setting a " "value of \"high,\" \"medium,\" or \"low,\" you could then tune the numeric " "allocation ratios in the aggregates without also needing to change all " "flavor types relating to them." msgstr "" #: ../ops-capacity-planning-scaling.rst:339 msgid "" "Previously, all services had an availability zone. Currently, only the " "``nova-compute`` service has its own availability zone. Services such as " "``nova-scheduler``, ``nova-network``, and ``nova-conductor`` have always " "spanned all availability zones." msgstr "" #: ../ops-capacity-planning-scaling.rst:344 msgid "" "When you run any of the following operations, the services appear in their " "own internal availability zone (CONF.internal_service_availability_zone):" msgstr "" #: ../ops-capacity-planning-scaling.rst:348 msgid ":command:`openstack host list` (os-hosts)" msgstr "" #: ../ops-capacity-planning-scaling.rst:350 msgid ":command:`euca-describe-availability-zones verbose`" msgstr "" #: ../ops-capacity-planning-scaling.rst:352 msgid ":command:`openstack compute service list`" msgstr "" #: ../ops-capacity-planning-scaling.rst:354 msgid "" "The internal availability zone is hidden in euca-describe-availability-zones " "(nonverbose)." msgstr "" #: ../ops-capacity-planning-scaling.rst:357 msgid "" "CONF.node_availability_zone has been renamed to CONF." "default_availability_zone and is used only by the ``nova-api`` and ``nova-" "scheduler`` services." msgstr "" #: ../ops-capacity-planning-scaling.rst:361 msgid "CONF.node_availability_zone still works but is deprecated." msgstr "" #: ../ops-capacity-planning-scaling.rst:364 msgid "Scalable Hardware" msgstr "" #: ../ops-capacity-planning-scaling.rst:366 msgid "" "While several resources already exist to help with deploying and installing " "OpenStack, it's very important to make sure that you have your deployment " "planned out ahead of time. This guide presumes that you have set aside a " "rack for the OpenStack cloud but also offers suggestions for when and what " "to scale." msgstr "" #: ../ops-capacity-planning-scaling.rst:373 msgid "Hardware Procurement" msgstr "" #: ../ops-capacity-planning-scaling.rst:375 msgid "" "“The Cloud” has been described as a volatile environment where servers can " "be created and terminated at will. While this may be true, it does not mean " "that your servers must be volatile. Ensuring that your cloud's hardware is " "stable and configured correctly means that your cloud environment remains up " "and running." 
msgstr "" #: ../ops-capacity-planning-scaling.rst:381 msgid "" "OpenStack can be deployed on any hardware supported by an OpenStack " "compatible Linux distribution." msgstr "" #: ../ops-capacity-planning-scaling.rst:384 msgid "" "Hardware does not have to be consistent, but it should at least have the " "same type of CPU to support instance migration." msgstr "" #: ../ops-capacity-planning-scaling.rst:387 msgid "" "The typical hardware recommended for use with OpenStack is the standard " "value-for-money offerings that most hardware vendors stock. It should be " "straightforward to divide your procurement into building blocks such as " "\"compute,\" \"object storage,\" and \"cloud controller,\" and request as " "many of these as you need. Alternatively, any existing servers you have that " "meet performance requirements and virtualization technology are likely to " "support OpenStack." msgstr "" #: ../ops-capacity-planning-scaling.rst:396 msgid "Capacity Planning" msgstr "" #: ../ops-capacity-planning-scaling.rst:398 msgid "" "OpenStack is designed to increase in size in a straightforward manner. " "Taking into account the considerations previous mentioned, particularly on " "the sizing of the cloud controller, it should be possible to procure " "additional compute or object storage nodes as needed. New nodes do not need " "to be the same specification or vendor as existing nodes." msgstr "" #: ../ops-capacity-planning-scaling.rst:404 msgid "" "For compute nodes, ``nova-scheduler`` will manage differences in sizing with " "core count and RAM. However, you should consider that the user experience " "changes with differing CPU speeds. When adding object storage nodes, a :term:" "`weight` should be specified that reflects the :term:`capability` of the " "node." msgstr "" #: ../ops-capacity-planning-scaling.rst:410 msgid "" "Monitoring the resource usage and user growth will enable you to know when " "to procure. The `Logging and Monitoring `_ chapte in the Operations Guide details " "some useful metrics." msgstr "" #: ../ops-capacity-planning-scaling.rst:416 msgid "Burn-in Testing" msgstr "" #: ../ops-capacity-planning-scaling.rst:418 msgid "" "The chances of failure for the server's hardware are high at the start and " "the end of its life. As a result, dealing with hardware failures while in " "production can be avoided by appropriate burn-in testing to attempt to " "trigger the early-stage failures. The general principle is to stress the " "hardware to its limits. Examples of burn-in tests include running a CPU or " "disk benchmark for several days." msgstr "" #: ../ops-customize-compute.rst:3 msgid "Customizing the OpenStack Compute (nova) Scheduler" msgstr "" #: ../ops-customize-compute.rst:5 msgid "" "Many OpenStack projects allow for customization of specific features using a " "driver architecture. You can write a driver that conforms to a particular " "interface and plug it in through configuration. For example, you can easily " "plug in a new scheduler for Compute. The existing schedulers for Compute are " "feature full and well documented at `Scheduling `_. However, depending on " "your user's use cases, the existing schedulers might not meet your " "requirements. You might need to create a new scheduler." msgstr "" #: ../ops-customize-compute.rst:14 msgid "" "To create a scheduler, you must inherit from the class ``nova.scheduler." "driver.Scheduler``. 
Of the five methods that you can override, you *must* " "override the two methods marked with an asterisk (\\*) below:" msgstr "" #: ../ops-customize-compute.rst:19 msgid "``update_service_capabilities``" msgstr "" #: ../ops-customize-compute.rst:21 msgid "``hosts_up``" msgstr "" #: ../ops-customize-compute.rst:23 msgid "``group_hosts``" msgstr "" #: ../ops-customize-compute.rst:25 msgid "\\* ``schedule_run_instance``" msgstr "" #: ../ops-customize-compute.rst:27 msgid "\\* ``select_destinations``" msgstr "" #: ../ops-customize-compute.rst:29 msgid "" "To demonstrate customizing OpenStack, we'll create an example of a Compute " "scheduler that randomly places an instance on a subset of hosts, depending " "on the originating IP address of the request and the prefix of the hostname. " "Such an example could be useful when you have a group of users on a subnet " "and you want all of their instances to start within some subset of your " "hosts." msgstr "" #: ../ops-customize-compute.rst:38 msgid "" "This example is for illustrative purposes only. It should not be used as a " "scheduler for Compute without further development and testing." msgstr "" #: ../ops-customize-compute.rst:42 msgid "" "When you join the screen session that ``stack.sh`` starts with ``screen -r " "stack``, you are greeted with many screen windows:" msgstr "" #: ../ops-customize-compute.rst:51 ../ops-customize-objectstorage.rst:44 msgid "A shell where you can get some work done" msgstr "" #: ../ops-customize-compute.rst:51 ../ops-customize-objectstorage.rst:44 msgid "``shell``" msgstr "" #: ../ops-customize-compute.rst:54 ../ops-customize-objectstorage.rst:47 msgid "The keystone service" msgstr "" #: ../ops-customize-compute.rst:54 msgid "``key``" msgstr "" #: ../ops-customize-compute.rst:57 ../ops-customize-objectstorage.rst:50 msgid "The horizon dashboard web application" msgstr "" #: ../ops-customize-compute.rst:57 ../ops-customize-objectstorage.rst:50 msgid "``horizon``" msgstr "" #: ../ops-customize-compute.rst:60 msgid "The nova services" msgstr "" #: ../ops-customize-compute.rst:60 msgid "``n-{name}``" msgstr "" #: ../ops-customize-compute.rst:63 msgid "The nova scheduler service" msgstr "" #: ../ops-customize-compute.rst:63 msgid "``n-sch``" msgstr "" #: ../ops-customize-compute.rst:65 msgid "**To create the scheduler and plug it in through configuration**" msgstr "" #: ../ops-customize-compute.rst:67 msgid "" "The code for OpenStack lives in ``/opt/stack``, so go to the ``nova`` " "directory and edit your scheduler module. Change to the directory where " "``nova`` is installed:" msgstr "" #: ../ops-customize-compute.rst:75 msgid "Create the ``ip_scheduler.py`` Python source code file:" msgstr "" #: ../ops-customize-compute.rst:81 msgid "" "The code shown below is a driver that will schedule servers to hosts based " "on IP address as explained at the beginning of the section. Copy the code " "into ``ip_scheduler.py``. When you are done, save and close the file." msgstr "" #: ../ops-customize-compute.rst:213 msgid "" "There is a lot of useful information in ``context``, ``request_spec``, and " "``filter_properties`` that you can use to decide where to schedule the " "instance. 
To find out more about what properties are " "available, you can insert the following log statements into the " "``schedule_run_instance`` method of the scheduler above:" msgstr "" #: ../ops-customize-compute.rst:225 msgid "" "To plug this scheduler into nova, edit one configuration file, ``/etc/nova/" "nova.conf``:" msgstr "" #: ../ops-customize-compute.rst:232 msgid "Find the ``scheduler_driver`` configuration option and change it like so:" msgstr "" #: ../ops-customize-compute.rst:238 msgid "" "Restart the nova scheduler service to make nova use your scheduler. Start by " "switching to the ``n-sch`` screen:" msgstr "" #: ../ops-customize-compute.rst:241 msgid "Press **Ctrl+A** followed by **9**." msgstr "" #: ../ops-customize-compute.rst:243 msgid "" "Press **Ctrl+A** followed by **N** until you reach the ``n-sch`` screen." msgstr "" #: ../ops-customize-compute.rst:245 ../ops-customize-objectstorage.rst:231 msgid "Press **Ctrl+C** to kill the service." msgstr "" #: ../ops-customize-compute.rst:247 ../ops-customize-objectstorage.rst:233 msgid "Press **Up Arrow** to bring up the last command." msgstr "" #: ../ops-customize-compute.rst:249 msgid "Press **Enter** to run it." msgstr "" #: ../ops-customize-compute.rst:251 msgid "" "Test your scheduler with the nova CLI. Start by switching to the ``shell`` " "screen and finish by switching back to the ``n-sch`` screen to check the log " "output:" msgstr "" #: ../ops-customize-compute.rst:255 ../ops-customize-objectstorage.rst:241 msgid "Press **Ctrl+A** followed by **0**." msgstr "" #: ../ops-customize-compute.rst:257 msgid "Make sure you are in the ``devstack`` directory:" msgstr "" #: ../ops-customize-compute.rst:263 msgid "Source ``openrc`` to set up your environment variables for the CLI:" msgstr "" #: ../ops-customize-compute.rst:269 msgid "" "Put the image ID for the only installed image into an environment variable:" msgstr "" #: ../ops-customize-compute.rst:276 msgid "Boot a test server:" msgstr "" #: ../ops-customize-compute.rst:282 msgid "" "Switch back to the ``n-sch`` screen. Among the log statements, you'll see " "the line:" msgstr "" #: ../ops-customize-compute.rst:293 ../ops-customize-objectstorage.rst:325 msgid "" "Functional testing like this is not a replacement for proper unit and " "integration testing, but it serves to get you started." msgstr "" #: ../ops-customize-compute.rst:296 msgid "" "A similar pattern can be followed in other projects that use the driver " "architecture. Simply create a module and class that conform to the driver " "interface and plug it in through configuration. Your code runs when that " "feature is used and can call out to other services as necessary. No project " "core code is touched. Look for a \"driver\" value in the project's ``.conf`` " "configuration files in ``/etc/`` to identify projects that use a " "driver architecture." msgstr "" #: ../ops-customize-compute.rst:304 msgid "" "When your scheduler is done, we encourage you to open source it and let the " "community know on the OpenStack mailing list. Perhaps others need the same " "functionality. They can use your code, provide feedback, and possibly " "contribute. If enough support exists for it, perhaps you can propose that it " "be added to the official Compute `schedulers `_." msgstr "" #: ../ops-customize-conclusion.rst:3 msgid "Conclusion" msgstr "" #: ../ops-customize-conclusion.rst:5 msgid "" "When operating an OpenStack cloud, you may discover that your users can be " "quite demanding. 
If OpenStack doesn't do what your users need, it may be up " "to you to fulfill those requirements. This chapter provided you with some " "options for customization and gave you the tools you need to get started." msgstr "" #: ../ops-customize-dashboard.rst:3 msgid "Customizing the Dashboard (Horizon)" msgstr "" #: ../ops-customize-dashboard.rst:5 msgid "" "The dashboard is based on the Python `Django `_ web application framework. To learn how to build your own dashboard, " "see `Building a Dashboard using Horizon `_." msgstr "" #: ../ops-customize-development.rst:3 msgid "Create an OpenStack Development Environment" msgstr "" #: ../ops-customize-development.rst:5 msgid "" "To create a development environment, you can use DevStack. DevStack is " "essentially a collection of shell scripts and configuration files that " "builds an OpenStack development environment for you. You use it to create " "such an environment for developing a new feature." msgstr "" #: ../ops-customize-development.rst:10 msgid "" "For more information on installing DevStack, see the `DevStack `_ website." msgstr "" #: ../ops-customize-objectstorage.rst:3 msgid "Customizing Object Storage (Swift) Middleware" msgstr "" #: ../ops-customize-objectstorage.rst:5 msgid "" "OpenStack Object Storage, known as swift when reading the code, is based on " "the Python `Paste `_ framework. The best " "introduction to its architecture is `A Do-It-Yourself Framework `_. Because of the swift " "project's use of this framework, you are able to add features to a project " "by placing some custom code in a project's pipeline without having to change " "any of the core code." msgstr "" #: ../ops-customize-objectstorage.rst:13 msgid "" "Imagine a scenario where you have public access to one of your containers, " "but what you really want is to restrict that access to a set of whitelisted " "IPs. In this example, we'll create a piece of middleware for " "swift that allows access to a container from only a set of IP addresses, as " "determined by the container's metadata items. Only those IP addresses that " "you explicitly whitelist using the container's metadata will be able to " "access the container." msgstr "" #: ../ops-customize-objectstorage.rst:23 msgid "" "This example is for illustrative purposes only. It should not be used as a " "container IP whitelist solution without further development and extensive " "security testing." msgstr "" #: ../ops-customize-objectstorage.rst:27 msgid "" "When you join the screen session that ``stack.sh`` starts with ``screen -r " "stack``, you see a screen for each service running, which can be a few or " "several, depending on how many services you configured DevStack to run." msgstr "" #: ../ops-customize-objectstorage.rst:32 msgid "" "The asterisk * indicates which screen window you are viewing. This example " "shows we are viewing the key (for keystone) screen window:" msgstr "" #: ../ops-customize-objectstorage.rst:40 msgid "The purpose of the screen windows is as follows:" msgstr "" #: ../ops-customize-objectstorage.rst:47 msgid "``key*``" msgstr "" #: ../ops-customize-objectstorage.rst:53 msgid "The swift services" msgstr "" #: ../ops-customize-objectstorage.rst:53 msgid "``s-{name}``" msgstr "" #: ../ops-customize-objectstorage.rst:55 msgid "" "**To create the middleware and plug it in through Paste configuration:**" msgstr "" #: ../ops-customize-objectstorage.rst:57 msgid "" "All of the code for OpenStack lives in ``/opt/stack``. 
Go to the swift " "directory in the ``shell`` screen and edit your middleware module." msgstr "" #: ../ops-customize-objectstorage.rst:60 msgid "Change to the directory where Object Storage is installed:" msgstr "" #: ../ops-customize-objectstorage.rst:66 msgid "Create the ``ip_whitelist.py`` Python source code file:" msgstr "" #: ../ops-customize-objectstorage.rst:72 msgid "" "Copy the code as shown below into ``ip_whitelist.py``. The following code is " "a middleware example that restricts access to a container based on IP " "address as explained at the beginning of the section. Middleware passes the " "request on to another application. This example uses the swift \"swob\" " "library to wrap Web Server Gateway Interface (WSGI) requests and responses " "into objects for swift to interact with. When you're done, save and close " "the file." msgstr "" #: ../ops-customize-objectstorage.rst:177 msgid "" "There is a lot of useful information in ``env`` and ``conf`` that you can " "use to decide what to do with the request. To find out more about what " "properties are available, you can insert the following log statement into " "the ``__init__`` method:" msgstr "" #: ../ops-customize-objectstorage.rst:187 msgid "and the following log statement into the ``__call__`` method:" msgstr "" #: ../ops-customize-objectstorage.rst:193 msgid "" "To plug this middleware into the swift Paste pipeline, you edit one " "configuration file, ``/etc/swift/proxy-server.conf``:" msgstr "" #: ../ops-customize-objectstorage.rst:200 msgid "" "Find the ``[filter:ratelimit]`` section in ``/etc/swift/proxy-server.conf``, " "and copy in the following configuration section after it:" msgstr "" #: ../ops-customize-objectstorage.rst:216 msgid "" "Find the ``[pipeline:main]`` section in ``/etc/swift/proxy-server.conf``, " "and add ``ip_whitelist`` after ratelimit to the list like so. When you're " "done, save and close the file:" msgstr "" #: ../ops-customize-objectstorage.rst:226 msgid "" "Restart the ``swift-proxy`` service to make swift use your middleware. Start " "by switching to the ``swift-proxy`` screen:" msgstr "" #: ../ops-customize-objectstorage.rst:229 msgid "Press **Ctrl+A** followed by **3**." msgstr "" #: ../ops-customize-objectstorage.rst:235 msgid "Press **Enter** to run it." msgstr "" #: ../ops-customize-objectstorage.rst:237 msgid "" "Test your middleware with the ``swift`` CLI. Start by switching to the " "``shell`` screen and finish by switching back to the ``swift-proxy`` screen " "to check the log output:" msgstr "" #: ../ops-customize-objectstorage.rst:243 msgid "Make sure you're in the ``devstack`` directory:" msgstr "" #: ../ops-customize-objectstorage.rst:249 msgid "Source ``openrc`` to set up your environment variables for the CLI:" msgstr "" #: ../ops-customize-objectstorage.rst:255 msgid "Create a container called ``middleware-test``:" msgstr "" #: ../ops-customize-objectstorage.rst:261 msgid "Press **Ctrl+A** followed by **3** to check the log output." msgstr "" #: ../ops-customize-objectstorage.rst:263 msgid "Among the log statements you'll see the lines:" msgstr "" #: ../ops-customize-objectstorage.rst:270 msgid "" "These two statements are produced by our middleware and show that the " "request was sent from our DevStack instance and was allowed." 
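Condensed, the test sequence on the DevStack host looks roughly like this (a sketch; the credentials come from sourcing ``openrc``, and the log lines to watch for are the ones shown above)::

   $ cd /opt/stack/devstack
   $ source openrc
   $ swift post middleware-test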
msgstr "" #: ../ops-customize-objectstorage.rst:273 msgid "" "Test the middleware from outside DevStack on a remote machine that has " "access to your DevStack instance:" msgstr "" #: ../ops-customize-objectstorage.rst:276 msgid "Install the ``keystone`` and ``swift`` clients on your local machine:" msgstr "" #: ../ops-customize-objectstorage.rst:282 msgid "Attempt to list the objects in the ``middleware-test`` container:" msgstr "" #: ../ops-customize-objectstorage.rst:292 msgid "" "Press **Ctrl+A** followed by **3** to check the log output. Look at the " "swift log statements again, and among the log statements, you'll see the " "lines:" msgstr "" #: ../ops-customize-objectstorage.rst:305 msgid "" "Here we can see that the request was denied because the remote IP address " "wasn't in the set of allowed IPs." msgstr "" #: ../ops-customize-objectstorage.rst:308 msgid "" "Back in your DevStack instance on the shell screen, add some metadata to " "your container to allow the request from the remote machine:" msgstr "" #: ../ops-customize-objectstorage.rst:311 msgid "Press **Ctrl+A** followed by **0**." msgstr "" #: ../ops-customize-objectstorage.rst:313 msgid "Add metadata to the container to allow the IP:" msgstr "" #: ../ops-customize-objectstorage.rst:319 msgid "" "Now try the command from Step 10 again and it succeeds. There are no objects " "in the container, so there is nothing to list; however, there is also no " "error to report." msgstr "" #: ../ops-customize-objectstorage.rst:328 msgid "" "You can follow a similar pattern in other projects that use the Python Paste " "framework. Simply create a middleware module and plug it in through " "configuration. The middleware runs in sequence as part of that project's " "pipeline and can call out to other services as necessary. No project core " "code is touched. Look for a ``pipeline`` value in the project's ``conf`` or " "``ini`` configuration files in ``/etc/`` to identify projects that " "use Paste." msgstr "" #: ../ops-customize-objectstorage.rst:336 msgid "" "When your middleware is done, we encourage you to open source it and let the " "community know on the OpenStack mailing list. Perhaps others need the same " "functionality. They can use your code, provide feedback, and possibly " "contribute. If enough support exists for it, perhaps you can propose that it " "be added to the official swift `middleware `_." msgstr "" #: ../ops-customize-provision-instance.rst:3 msgid "Provision an instance" msgstr "" #: ../ops-customize-provision-instance.rst:5 msgid "" "To help understand how OpenStack works, this section describes the end-to-" "end process and interaction of components when provisioning an instance on " "OpenStack." msgstr "" #: ../ops-customize-provision-instance.rst:9 msgid "**Provision an instance**" msgstr "" #: ../ops-customize.rst:3 msgid "Customization" msgstr "" #: ../ops-customize.rst:15 msgid "" "OpenStack might not do everything you need it to do out of the box. To add a " "new feature, you can follow different paths." msgstr "" #: ../ops-customize.rst:18 msgid "" "To take the first path, you can modify the OpenStack code directly. Learn " "`how to contribute `_, " "follow the `Developer's Guide `_, make your changes, and contribute them back to the " "upstream OpenStack project. This path is recommended if the feature you need " "requires deep integration with an existing project. The community is always " "open to contributions and welcomes new functionality that follows the " "feature-development guidelines. 
This path still requires you to use DevStack " "for testing your feature additions, so this chapter walks you through the " "DevStack environment." msgstr ""
#: ../ops-customize.rst:31 msgid "" "For the second path, you can write new features and plug them in using " "changes to a configuration file. If the project where your feature would " "need to reside uses the Python Paste framework, you can create middleware " "for it and plug it in through configuration. There may also be specific ways " "of customizing a project, such as creating a new scheduler driver for " "Compute or a custom tab for the dashboard." msgstr ""
#: ../ops-customize.rst:38 msgid "" "This chapter focuses on the second path for customizing OpenStack by " "providing two examples for writing new features. The first example shows how " "to modify Object Storage service (swift) middleware to add a new feature, " "and the second example provides a new scheduler feature for Compute service " "(nova). To customize OpenStack this way, you need a development environment. " "The best way to get an environment up and running quickly is to run DevStack " "within your cloud." msgstr ""
#: ../ops-deployment-factors.rst:5 msgid "Factors affecting OpenStack deployment" msgstr ""
#: ../ops-deployment-factors.rst:8 msgid "Security requirements" msgstr ""
#: ../ops-deployment-factors.rst:10 msgid "" "When deploying OpenStack in an enterprise as a private cloud, it is usually " "behind the firewall and within the trusted network alongside existing " "systems. Users are employees who are bound by the company security " "requirements. This tends to drive most of the security domains towards a " "more trusted model. However, when deploying OpenStack in a public-facing " "role, no assumptions can be made and the attack vectors significantly " "increase." msgstr ""
#: ../ops-deployment-factors.rst:18 msgid "Consider the following security implications and requirements:" msgstr ""
#: ../ops-deployment-factors.rst:20 msgid "" "Managing the users for both public and private clouds. The Identity service " "allows for LDAP to be part of the authentication process. This may ease user " "management if integrating into existing systems." msgstr ""
#: ../ops-deployment-factors.rst:24 msgid "" "User authentication requests include sensitive information, including " "usernames, passwords, and authentication tokens. It is strongly recommended " "to place API services behind hardware that performs SSL termination." msgstr ""
#: ../ops-deployment-factors.rst:28 msgid "" "Negative or hostile users who would attack or compromise the security of " "your deployment regardless of firewalls or security agreements." msgstr ""
#: ../ops-deployment-factors.rst:31 msgid "" "Attack vectors increase further in a public-facing OpenStack deployment. For " "example, the API endpoints and the software behind them become vulnerable to " "hostile entities attempting to gain unauthorized access or prevent access to " "services. You should provide appropriate filtering and periodic security " "auditing." msgstr ""
#: ../ops-deployment-factors.rst:39 msgid "" "Be mindful of consistency when utilizing third-party clouds to explore " "authentication options." msgstr ""
#: ../ops-deployment-factors.rst:42 msgid "" "For more information on OpenStack security, see the `OpenStack Security Guide " "`_."
msgstr "" #: ../ops-deployment-factors.rst:46 msgid "Security domains" msgstr "" #: ../ops-deployment-factors.rst:48 msgid "" "A security domain comprises of users, applications, servers or networks that " "share common trust requirements and expectations within a system. Typically " "they have the same authentication and authorization requirements and users." msgstr "" #: ../ops-deployment-factors.rst:53 msgid "Security domains include:" msgstr "" #: ../ops-deployment-factors.rst:56 msgid "" "The public security domain can refer to the internet as a whole or networks " "over which you have no authority. This domain is considered untrusted. For " "example, in a hybrid cloud deployment, any information traversing between " "and beyond the clouds is in the public domain and untrustworthy." msgstr "" #: ../ops-deployment-factors.rst:60 msgid "Public security domains" msgstr "" #: ../ops-deployment-factors.rst:63 msgid "" "The guest security domain handles compute data generated by instances on the " "cloud, but not services that support the operation of the cloud, such as API " "calls. Public cloud providers and private cloud providers who do not have " "stringent controls on instance use or who allow unrestricted internet access " "to instances should consider this domain to be untrusted. Private cloud " "providers may want to consider this network as internal and therefore " "trusted only if they have controls in place to assert that they trust " "instances and all their tenants." msgstr "" #: ../ops-deployment-factors.rst:71 msgid "Guest security domains" msgstr "" #: ../ops-deployment-factors.rst:74 msgid "" "The management security domain is where services interact. Sometimes " "referred to as the control plane, the networks in this domain transport " "confidential data such as configuration parameters, user names, and " "passwords. In most deployments this domain is considered trusted when it is " "behind an organization's firewall." msgstr "" #: ../ops-deployment-factors.rst:78 msgid "Management security domains" msgstr "" #: ../ops-deployment-factors.rst:81 msgid "" "The data security domain is primarily concerned with information pertaining " "to the storage services within OpenStack. The data that crosses this network " "has high integrity and confidentiality requirements and, depending on the " "type of deployment, may also have strong availability requirements. The " "trust level of this network is heavily dependent on other deployment " "decisions." msgstr "" #: ../ops-deployment-factors.rst:86 msgid "Data security domains" msgstr "" #: ../ops-deployment-factors.rst:88 msgid "" "These security domains can be individually or collectively mapped to an " "OpenStack deployment. The cloud operator should be aware of the appropriate " "security concerns. Security domains should be mapped out against your " "specific OpenStack deployment topology. The domains and their trust " "requirements depend upon whether the cloud instance is public, private, or " "hybrid." msgstr "" #: ../ops-deployment-factors.rst:95 msgid "Hypervisor security" msgstr "" #: ../ops-deployment-factors.rst:97 msgid "" "The hypervisor also requires a security assessment. In a public cloud, " "organizations typically do not have control over the choice of hypervisor. " "Properly securing your hypervisor is important. Attacks made upon the " "unsecured hypervisor are called a **hypervisor breakout**. 
Hypervisor " "breakout describes the event of a compromised or malicious instance breaking " "out of the resource controls of the hypervisor and gaining access to the " "bare metal operating system and hardware resources." msgstr "" #: ../ops-deployment-factors.rst:107 msgid "" "Hypervisor security is not an issue if the security of instances is not " "important. However, enterprises can minimize vulnerability by avoiding " "hardware sharing with others in a public cloud." msgstr "" #: ../ops-deployment-factors.rst:112 msgid "Baremetal security" msgstr "" #: ../ops-deployment-factors.rst:114 msgid "" "There are other services worth considering that provide a bare metal " "instance instead of a cloud. In other cases, it is possible to replicate a " "second private cloud by integrating with a private Cloud-as-a-Service " "deployment. The organization does not buy the hardware, but also does not " "share with other tenants. It is also possible to use a provider that hosts a " "bare-metal public cloud instance for which the hardware is dedicated only to " "one customer, or a provider that offers private Cloud-as-a-Service." msgstr "" #: ../ops-deployment-factors.rst:126 msgid "" "Each cloud implements services differently. Understand the security " "requirements of every cloud that handles the organization's data or " "workloads." msgstr "" #: ../ops-deployment-factors.rst:131 msgid "Networking security" msgstr "" #: ../ops-deployment-factors.rst:133 msgid "" "Consider security implications and requirements before designing the " "physical and logical network topologies. Make sure that the networks are " "properly segregated and traffic flows are going to the correct destinations " "without crossing through locations that are undesirable. Consider the " "following factors:" msgstr "" #: ../ops-deployment-factors.rst:139 msgid "Firewalls" msgstr "" #: ../ops-deployment-factors.rst:140 msgid "Overlay interconnects for joining separated tenant networks" msgstr "" #: ../ops-deployment-factors.rst:141 msgid "Routing through or avoiding specific networks" msgstr "" #: ../ops-deployment-factors.rst:143 msgid "" "How networks attach to hypervisors can expose security vulnerabilities. To " "mitigate hypervisor breakouts, separate networks from other systems and " "schedule instances for the network onto dedicated Compute nodes. This " "prevents attackers from having access to the networks from a compromised " "instance." msgstr "" #: ../ops-deployment-factors.rst:150 msgid "Multi-site security" msgstr "" #: ../ops-deployment-factors.rst:152 msgid "" "Securing a multi-site OpenStack installation brings several challenges. " "Tenants may expect a tenant-created network to be secure. In a multi-site " "installation the use of a non-private connection between sites may be " "required. This may mean that traffic would be visible to third parties and, " "in cases where an application requires security, this issue requires " "mitigation. In these instances, install a VPN or encrypted connection " "between sites to conceal sensitive traffic." msgstr "" #: ../ops-deployment-factors.rst:161 msgid "" "Identity is another security consideration. Authentication centralization " "provides a single authentication point for users across the deployment, and " "a single administration point for traditional create, read, update, and " "delete operations. Centralized authentication is also useful for auditing " "purposes because all authentication tokens originate from the same source." 
msgstr "" #: ../ops-deployment-factors.rst:168 msgid "" "Tenants in multi-site installations need isolation from each other. The main " "challenge is ensuring tenant networks function across regions which is not " "currently supported in OpenStack Networking (neutron). Therefore an external " "system may be required to manage mapping. Tenant networks may contain " "sensitive information requiring accurate and consistent mapping to ensure " "that a tenant in one site does not connect to a different tenant in another " "site." msgstr "" #: ../ops-deployment-factors.rst:177 msgid "Legal requirements" msgstr "" #: ../ops-deployment-factors.rst:179 msgid "" "Using remote resources for collection, processing, storage, and retrieval " "provides potential benefits to businesses. With the rapid growth of data " "within organizations, businesses need to be proactive about their data " "storage strategies from a compliance point of view." msgstr "" #: ../ops-deployment-factors.rst:185 msgid "" "Most countries have legislative and regulatory requirements governing the " "storage and management of data in cloud environments. This is particularly " "relevant for public, community and hybrid cloud models, to ensure data " "privacy and protection for organizations using a third party cloud provider." msgstr "" #: ../ops-deployment-factors.rst:191 msgid "Common areas of regulation include:" msgstr "" #: ../ops-deployment-factors.rst:193 msgid "" "Data retention policies ensuring storage of persistent data and records " "management to meet data archival requirements." msgstr "" #: ../ops-deployment-factors.rst:195 msgid "" "Data ownership policies governing the possession and responsibility for data." msgstr "" #: ../ops-deployment-factors.rst:197 msgid "" "Data sovereignty policies governing the storage of data in foreign countries " "or otherwise separate jurisdictions." msgstr "" #: ../ops-deployment-factors.rst:199 msgid "" "Data compliance policies governing certain types of information needing to " "reside in certain locations due to regulatory issues - and more importantly, " "cannot reside in other locations for the same reason." msgstr "" #: ../ops-deployment-factors.rst:203 msgid "" "Data location policies ensuring that the services deployed to the cloud are " "used according to laws and regulations in place for the employees, foreign " "subsidiaries, or third parties." msgstr "" #: ../ops-deployment-factors.rst:206 msgid "" "Disaster recovery policies ensuring regular data backups and relocation of " "cloud applications to another supplier in scenarios where a provider may go " "out of business, or their data center could become inoperable." msgstr "" #: ../ops-deployment-factors.rst:210 msgid "" "Security breach policies governing the ways to notify individuals through " "cloud provider's systems or other means if their personal data gets " "compromised in any way." msgstr "" #: ../ops-deployment-factors.rst:213 msgid "" "Industry standards policy governing additional requirements on what type of " "cardholder data may or may not be stored and how it is to be protected." msgstr "" #: ../ops-deployment-factors.rst:217 msgid "This is an example of such legal frameworks:" msgstr "" #: ../ops-deployment-factors.rst:219 msgid "" "Data storage regulations in Europe are currently driven by provisions of the " "`Data protection framework `_. " "`Financial Industry Regulatory Authority `_ works on this in the United States." 
msgstr "" #: ../ops-deployment-factors.rst:225 msgid "" "Privacy and security are spread over different industry-specific laws and " "regulations:" msgstr "" #: ../ops-deployment-factors.rst:228 msgid "Health Insurance Portability and Accountability Act (HIPAA)" msgstr "" #: ../ops-deployment-factors.rst:229 msgid "Gramm-Leach-Bliley Act (GLBA)" msgstr "" #: ../ops-deployment-factors.rst:230 msgid "Payment Card Industry Data Security Standard (PCI DSS)" msgstr "" #: ../ops-deployment-factors.rst:231 msgid "Family Educational Rights and Privacy Act (FERPA)" msgstr "" #: ../ops-deployment-factors.rst:234 msgid "Cloud security architecture" msgstr "" #: ../ops-deployment-factors.rst:236 msgid "" "Cloud security architecture should recognize the issues that arise with " "security management, which addresses these issues with security controls. " "Cloud security controls are put in place to safeguard any weaknesses in the " "system, and reduce the effect of an attack." msgstr "" #: ../ops-deployment-factors.rst:241 msgid "The following security controls are described below." msgstr "" #: ../ops-deployment-factors.rst:244 msgid "" "Typically reduce the threat level by informing potential attackers that " "there will be adverse consequences for them if they proceed." msgstr "" #: ../ops-deployment-factors.rst:245 msgid "Deterrent controls:" msgstr "" #: ../ops-deployment-factors.rst:248 msgid "" "Strengthen the system against incidents, generally by reducing if not " "actually eliminating vulnerabilities." msgstr "" #: ../ops-deployment-factors.rst:249 msgid "Preventive controls:" msgstr "" #: ../ops-deployment-factors.rst:252 msgid "" "Intended to detect and react appropriately to any incidents that occur. " "System and network security monitoring, including intrusion detection and " "prevention arrangements, are typically employed to detect attacks on cloud " "systems and the supporting communications infrastructure." msgstr "" #: ../ops-deployment-factors.rst:256 msgid "Detective controls:" msgstr "" #: ../ops-deployment-factors.rst:259 msgid "" "Reduce the consequences of an incident, normally by limiting the damage. " "They come into effect during or after an incident. Restoring system backups " "in order to rebuild a compromised system is an example of a corrective " "control." msgstr "" #: ../ops-deployment-factors.rst:262 msgid "Corrective controls:" msgstr "" #: ../ops-deployment-factors.rst:264 msgid "" "For more information, see See also `NIST Special Publication 800-53 `_." msgstr "" #: ../ops-deployment-factors.rst:269 msgid "Software licensing" msgstr "" #: ../ops-deployment-factors.rst:271 msgid "" "The many different forms of license agreements for software are often " "written with the use of dedicated hardware in mind. This model is relevant " "for the cloud platform itself, including the hypervisor operating system, " "supporting software for items such as database, RPC, backup, and so on. " "Consideration must be made when offering Compute service instances and " "applications to end users of the cloud, since the license terms for that " "software may need some adjustment to be able to operate economically in the " "cloud." msgstr "" #: ../ops-deployment-factors.rst:279 msgid "" "Multi-site OpenStack deployments present additional licensing considerations " "over and above regular OpenStack clouds, particularly where site licenses " "are in use to provide cost efficient access to software licenses. 
The " "licensing for host operating systems, guest operating systems, OpenStack " "distributions (if applicable), software-defined infrastructure including " "network controllers and storage systems, and even individual applications " "need to be evaluated." msgstr "" #: ../ops-deployment-factors.rst:287 msgid "Topics to consider include:" msgstr "" #: ../ops-deployment-factors.rst:289 msgid "" "The definition of what constitutes a site in the relevant licenses, as the " "term does not necessarily denote a geographic or otherwise physically " "isolated location." msgstr "" #: ../ops-deployment-factors.rst:293 msgid "" "Differentiations between \"hot\" (active) and \"cold\" (inactive) sites, " "where significant savings may be made in situations where one site is a cold " "standby for disaster recovery purposes only." msgstr "" #: ../ops-deployment-factors.rst:297 msgid "" "Certain locations might require local vendors to provide support and " "services for each site which may vary with the licensing agreement in place." msgstr "" #: ../ops-lay-of-the-land.rst:3 msgid "Lay of the Land" msgstr "" #: ../ops-lay-of-the-land.rst:5 msgid "" "This chapter helps you set up your working environment and use it to take a " "look around your cloud." msgstr "" #: ../ops-lay-of-the-land.rst:9 msgid "Using the OpenStack Dashboard for Administration" msgstr "" #: ../ops-lay-of-the-land.rst:11 msgid "" "As a cloud administrative user, you can use the OpenStack dashboard to " "create and manage projects, users, images, and flavors. Users are allowed to " "create and manage images within specified projects and to share images, " "depending on the Image service configuration. Typically, the policy " "configuration allows admin users only to set quotas and create and manage " "services. The dashboard provides an :guilabel:`Admin` tab with a :guilabel:" "`System Panel` and an :guilabel:`Identity` tab. These interfaces give you " "access to system information and usage as well as to settings for " "configuring what end users can do. Refer to the `OpenStack Administrator " "Guide `__ for " "detailed how-to information about using the dashboard as an admin user." msgstr "" #: ../ops-lay-of-the-land.rst:25 msgid "Command-Line Tools" msgstr "" #: ../ops-lay-of-the-land.rst:27 msgid "" "We recommend using a combination of the OpenStack command-line interface " "(CLI) tools and the OpenStack dashboard for administration. Some users with " "a background in other cloud technologies may be using the EC2 Compatibility " "API, which uses naming conventions somewhat different from the native API." msgstr "" #: ../ops-lay-of-the-land.rst:33 msgid "" "The pip utility is used to manage package installation from the PyPI archive " "and is available in the python-pip package in most Linux distributions. " "While each OpenStack project has its own client, they are being deprecated " "in favour of a common OpenStack client. It is generally recommended to " "install the OpenStack client." msgstr "" #: ../ops-lay-of-the-land.rst:41 msgid "" "To perform testing and orchestration, it is usually easier to install the " "OpenStack CLI tools in a dedicated VM in the cloud. We recommend that you " "keep the VM installation simple. All the tools should be installed from a " "single OpenStack release version. If you need to run tools from multiple " "OpenStack releases, then we recommend that you run with multiple VMs that " "are each running a dedicated version." 
msgstr "" #: ../ops-lay-of-the-land.rst:49 msgid "Install OpenStack command-line clients" msgstr "" #: ../ops-lay-of-the-land.rst:51 msgid "" "For instructions on installing, upgrading, or removing command-line clients, " "see the `Install the OpenStack command-line clients `_ " "section in OpenStack End User Guide." msgstr "" #: ../ops-lay-of-the-land.rst:58 msgid "" "If you support the EC2 API on your cloud, you should also install the " "euca2ools package or some other EC2 API tool so that you can get the same " "view your users have. Using EC2 API-based tools is mostly out of the scope " "of this guide, though we discuss getting credentials for use with it." msgstr "" #: ../ops-lay-of-the-land.rst:65 msgid "Administrative Command-Line Tools" msgstr "" #: ../ops-lay-of-the-land.rst:67 msgid "" "There are also several :command:`*-manage` command-line tools. These are " "installed with the project's services on the cloud controller and do not " "need to be installed separately:" msgstr "" #: ../ops-lay-of-the-land.rst:71 msgid ":command:`nova-manage`" msgstr "" #: ../ops-lay-of-the-land.rst:72 msgid ":command:`glance-manage`" msgstr "" #: ../ops-lay-of-the-land.rst:73 msgid ":command:`keystone-manage`" msgstr "" #: ../ops-lay-of-the-land.rst:74 msgid ":command:`cinder-manage`" msgstr "" #: ../ops-lay-of-the-land.rst:76 msgid "" "Unlike the CLI tools mentioned above, the :command:`*-manage` tools must be " "run from the cloud controller, as root, because they need read access to the " "config files such as ``/etc/nova/nova.conf`` and to make queries directly " "against the database rather than against the OpenStack :term:`API endpoints " "`." msgstr "" #: ../ops-lay-of-the-land.rst:84 msgid "" "The existence of the ``*-manage`` tools is a legacy issue. It is a goal of " "the OpenStack project to eventually migrate all of the remaining " "functionality in the ``*-manage`` tools into the API-based tools. Until that " "day, you need to SSH into the :term:`cloud controller node` to perform some " "maintenance operations that require one of the ``*-manage`` tools." msgstr "" #: ../ops-lay-of-the-land.rst:92 msgid "Getting Credentials" msgstr "" #: ../ops-lay-of-the-land.rst:94 msgid "" "You must have the appropriate credentials if you want to use the command-" "line tools to make queries against your OpenStack cloud. By far, the easiest " "way to obtain :term:`authentication` credentials to use with command-line " "clients is to use the OpenStack dashboard. Select :guilabel:`Project`, click " "the :guilabel:`Project` tab, and click :guilabel:`Access & Security` on the :" "guilabel:`Compute` category. On the :guilabel:`Access & Security` page, " "click the :guilabel:`API Access` tab to display two buttons, :guilabel:" "`Download OpenStack RC File` and :guilabel:`Download EC2 Credentials`, which " "let you generate files that you can source in your shell to populate the " "environment variables the command-line tools require to know where your " "service endpoints and your authentication information are. The user you " "logged in to the dashboard dictates the filename for the openrc file, such " "as ``demo-openrc.sh``. When logged in as admin, the file is named ``admin-" "openrc.sh``." msgstr "" #: ../ops-lay-of-the-land.rst:109 msgid "The generated file looks something like this:" msgstr "" #: ../ops-lay-of-the-land.rst:155 msgid "" "This does not save your password in plain text, which is a good thing. 
But " "when you source or run the script, it prompts you for your password and then " "stores your response in the environment variable ``OS_PASSWORD``. It is " "important to note that this does require interactivity. It is possible to " "store a value directly in the script if you require a noninteractive " "operation, but you then need to be extremely cautious with the security and " "permissions of this file." msgstr "" #: ../ops-lay-of-the-land.rst:164 msgid "" "EC2 compatibility credentials can be downloaded by selecting :guilabel:" "`Project`, then :guilabel:`Compute`, then :guilabel:`Access & Security`, " "then :guilabel:`API Access` to display the :guilabel:`Download EC2 " "Credentials` button. Click the button to generate a ZIP file with server " "x509 certificates and a shell script fragment. Create a new directory in a " "secure location because these are live credentials containing all the " "authentication information required to access your cloud identity, unlike " "the default ``user-openrc``. Extract the ZIP file here. You should have " "``cacert.pem``, ``cert.pem``, ``ec2rc.sh``, and ``pk.pem``. The ``ec2rc.sh`` " "is similar to this:" msgstr "" #: ../ops-lay-of-the-land.rst:197 msgid "" "To put the EC2 credentials into your environment, source the ``ec2rc.sh`` " "file." msgstr "" #: ../ops-lay-of-the-land.rst:201 msgid "Inspecting API Calls" msgstr "" #: ../ops-lay-of-the-land.rst:203 msgid "" "The command-line tools can be made to show the OpenStack API calls they make " "by passing the ``--debug`` flag to them. For example:" msgstr "" #: ../ops-lay-of-the-land.rst:210 msgid "" "This example shows the HTTP requests from the client and the responses from " "the endpoints, which can be helpful in creating custom tools written to the " "OpenStack API." msgstr "" #: ../ops-lay-of-the-land.rst:215 msgid "Using cURL for further inspection" msgstr "" #: ../ops-lay-of-the-land.rst:217 msgid "" "Underlying the use of the command-line tools is the OpenStack API, which is " "a RESTful API that runs over HTTP. There may be cases where you want to " "interact with the API directly or need to use it because of a suspected bug " "in one of the CLI tools. The best way to do this is to use a combination " "of `cURL `_ and another tool, such as `jq `_, to parse the JSON from the responses." msgstr "" #: ../ops-lay-of-the-land.rst:225 msgid "" "The first thing you must do is authenticate with the cloud using your " "credentials to get an :term:`authentication token`." msgstr "" #: ../ops-lay-of-the-land.rst:228 msgid "" "Your credentials are a combination of username, password, and tenant " "(project). You can extract these values from the ``openrc.sh`` discussed " "above. The token allows you to interact with your other service endpoints " "without needing to reauthenticate for every request. Tokens are typically " "good for 24 hours, and when the token expires, you are alerted with a 401 " "(Unauthorized) response and you can request another token." msgstr "" #: ../ops-lay-of-the-land.rst:236 msgid "Look at your OpenStack service :term:`catalog`:" msgstr "" #: ../ops-lay-of-the-land.rst:244 msgid "" "Read through the JSON response to get a feel for how the catalog is laid out." msgstr "" #: ../ops-lay-of-the-land.rst:247 msgid "" "To make working with subsequent requests easier, store the token in an " "environment variable:" msgstr "" #: ../ops-lay-of-the-land.rst:256 msgid "Now you can refer to your token on the command line as ``$TOKEN``." 
msgstr "" #: ../ops-lay-of-the-land.rst:258 msgid "" "Pick a service endpoint from your service catalog, such as compute. Try a " "request, for example, listing instances (servers):" msgstr "" #: ../ops-lay-of-the-land.rst:267 msgid "" "To discover how API requests should be structured, read the `OpenStack API " "Reference `_. To chew through the responses using jq, see the `jq Manual `_." msgstr "" #: ../ops-lay-of-the-land.rst:272 msgid "" "The ``-s flag`` used in the cURL commands above are used to prevent " "the progress meter from being shown. If you are having trouble running cURL " "commands, you'll want to remove it. Likewise, to help you troubleshoot cURL " "commands, you can include the ``-v`` flag to show you the verbose output. " "There are many more extremely useful features in cURL; refer to the man page " "for all the options." msgstr "" #: ../ops-lay-of-the-land.rst:280 msgid "Servers and Services" msgstr "" #: ../ops-lay-of-the-land.rst:282 msgid "" "As an administrator, you have a few ways to discover what your OpenStack " "cloud looks like simply by using the OpenStack tools available. This section " "gives you an idea of how to get an overview of your cloud, its shape, size, " "and current state." msgstr "" #: ../ops-lay-of-the-land.rst:287 msgid "" "First, you can discover what servers belong to your OpenStack cloud by " "running:" msgstr "" #: ../ops-lay-of-the-land.rst:294 msgid "The output looks like the following:" msgstr "" #: ../ops-lay-of-the-land.rst:313 msgid "" "The output shows that there are five compute nodes and one cloud controller. " "You see all the services in the up state, which indicates that the services " "are up and running. If a service is in a down state, it is no longer " "available. This is an indication that you should troubleshoot why the " "service is down." msgstr "" #: ../ops-lay-of-the-land.rst:319 msgid "" "If you are using cinder, run the following command to see a similar listing:" msgstr "" #: ../ops-lay-of-the-land.rst:333 msgid "" "With these two tables, you now have a good overview of what servers and " "services make up your cloud." msgstr "" #: ../ops-lay-of-the-land.rst:336 msgid "" "You can also use the Identity service (keystone) to see what services are " "available in your cloud as well as what endpoints have been configured for " "the services." msgstr "" #: ../ops-lay-of-the-land.rst:340 msgid "" "The following command requires you to have your shell environment configured " "with the proper administrative variables:" msgstr "" #: ../ops-lay-of-the-land.rst:364 msgid "" "The preceding output has been truncated to show only two services. You will " "see one service entry for each service that your cloud provides. Note how " "the endpoint domain can be different depending on the endpoint type. " "Different endpoint domains per type are not required, but this can be done " "for different reasons, such as endpoint privacy or network traffic " "segregation." 
msgstr "" #: ../ops-lay-of-the-land.rst:371 msgid "" "You can find the version of the Compute installation by using the OpenStack " "command-line client:" msgstr "" #: ../ops-lay-of-the-land.rst:379 msgid "Diagnose Your Compute Nodes" msgstr "" #: ../ops-lay-of-the-land.rst:381 msgid "" "You can obtain extra information about virtual machines that are running—" "their CPU usage, the memory, the disk I/O or network I/O—per instance, by " "running the :command:`nova diagnostics` command with a server ID:" msgstr "" #: ../ops-lay-of-the-land.rst:389 msgid "" "The output of this command varies depending on the hypervisor because " "hypervisors support different attributes. The following demonstrates the " "difference between the two most popular hypervisors. Here is example output " "when the hypervisor is Xen:" msgstr "" #: ../ops-lay-of-the-land.rst:410 msgid "" "While the command should work with any hypervisor that is controlled through " "libvirt (KVM, QEMU, or LXC), it has been tested only with KVM. Here is the " "example output when the hypervisor is KVM:" msgstr "" #: ../ops-lay-of-the-land.rst:437 msgid "Network Inspection" msgstr "" #: ../ops-lay-of-the-land.rst:439 msgid "" "To see which fixed IP networks are configured in your cloud, you can use " "the :command:`openstack` command-line client to get the IP ranges:" msgstr "" #: ../ops-lay-of-the-land.rst:452 msgid "The OpenStack command-line client can provide some additional details:" msgstr "" #: ../ops-lay-of-the-land.rst:467 msgid "" "This output shows that two networks are configured, each network containing " "255 IPs (a /24 subnet). The first network has been assigned to a certain " "project, while the second network is still open for assignment. You can " "assign this network manually; otherwise, it is automatically assigned when a " "project launches its first instance." msgstr "" #: ../ops-lay-of-the-land.rst:473 msgid "To find out whether any floating IPs are available in your cloud, run:" msgstr "" #: ../ops-lay-of-the-land.rst:485 msgid "" "Here, two floating IPs are available. The first has been allocated to a " "project, while the other is unallocated." msgstr "" #: ../ops-lay-of-the-land.rst:489 msgid "Users and Projects" msgstr "" #: ../ops-lay-of-the-land.rst:491 msgid "To see a list of projects that have been added to the cloud, run:" msgstr "" #: ../ops-lay-of-the-land.rst:506 msgid "To see a list of users, run:" msgstr "" #: ../ops-lay-of-the-land.rst:524 msgid "" "Sometimes a user and a group have a one-to-one mapping. This happens for " "standard system accounts, such as cinder, glance, nova, and swift, or when " "only one user is part of a group." msgstr "" #: ../ops-lay-of-the-land.rst:529 msgid "Running Instances" msgstr "" #: ../ops-lay-of-the-land.rst:531 msgid "To see a list of running instances, run:" msgstr "" #: ../ops-lay-of-the-land.rst:543 msgid "" "Unfortunately, this command does not tell you various details about the " "running instances, such as what compute node the instance is running on, " "what flavor the instance is, and so on. 
You can use the following command to " "view details about individual instances:" msgstr ""
#: ../ops-lay-of-the-land.rst:552 ../ops-quotas.rst:146 ../ops-quotas.rst:176 #: ../ops-quotas.rst:196 ../ops-quotas.rst:234 ../ops-quotas.rst:388 #: ../ops-quotas.rst:412 ../ops-quotas.rst:439 #: ../ops-user-facing-operations.rst:274 ../ops-user-facing-operations.rst:294 msgid "For example:" msgstr ""
#: ../ops-lay-of-the-land.rst:591 msgid "" "This output shows that an instance named ``devstack`` was created from an " "Ubuntu 12.04 image using a flavor of ``m1.small`` and is hosted on the " "compute node ``c02.example.com``." msgstr ""
#: ../ops-lay-of-the-land.rst:598 msgid "" "We hope you have enjoyed this quick tour of your working environment, " "including how to interact with your cloud and extract useful information. " "From here, you can use the `OpenStack Administrator Guide `_ as your reference for all of the command-line " "functionality in your cloud." msgstr ""
#: ../ops-logging-monitoring-summary.rst:5 msgid "" "For stable operations, you want to detect failure promptly and determine " "causes efficiently. With a distributed system, it's even more important to " "track the right items to meet a service-level target. Learning where these " "logs are located in the file system or API gives you an advantage. This " "chapter also showed how to read, interpret, and manipulate information from " "OpenStack services so that you can monitor effectively." msgstr ""
#: ../ops-logging-monitoring.rst:3 msgid "Logging and Monitoring" msgstr ""
#: ../ops-logging-monitoring.rst:12 msgid "" "As an OpenStack cloud is composed of so many different services, there are a " "large number of log files. This chapter aims to assist you in locating and " "working with them and describes other ways to track the status of your " "deployment." msgstr ""
#: ../ops-logging-rsyslog.rst:3 msgid "rsyslog" msgstr ""
#: ../ops-logging-rsyslog.rst:5 msgid "" "A number of operating systems use rsyslog as the default logging service. " "Since it is natively able to send logs to a remote location, you do not have " "to install anything extra to enable this feature; just modify the " "configuration file. In doing this, consider running your logging over a " "management network or using an encrypted VPN to avoid interception." msgstr ""
#: ../ops-logging-rsyslog.rst:12 msgid "rsyslog client configuration" msgstr ""
#: ../ops-logging-rsyslog.rst:14 msgid "" "To begin, configure all OpenStack components to log to the syslog log file " "in addition to their standard log file location. Also, configure each " "component to log to a different syslog facility. This makes it easier to " "split the logs into individual components on the central server:" msgstr ""
#: ../ops-logging-rsyslog.rst:19 msgid "``nova.conf``:" msgstr ""
#: ../ops-logging-rsyslog.rst:26 msgid "``glance-api.conf`` and ``glance-registry.conf``:" msgstr ""
#: ../ops-logging-rsyslog.rst:33 msgid "``cinder.conf``:" msgstr ""
#: ../ops-logging-rsyslog.rst:40 msgid "``keystone.conf``:" msgstr ""
#: ../ops-logging-rsyslog.rst:47 msgid "By default, Object Storage logs to syslog." msgstr ""
#: ../ops-logging-rsyslog.rst:49 msgid "Next, create ``/etc/rsyslog.d/client.conf`` with the following line:" msgstr ""
#: ../ops-logging-rsyslog.rst:55 msgid "" "This instructs rsyslog to send all logs to the IP listed. In this example, " "the IP points to the cloud controller."
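A hedged example of that single forwarding line; the controller address shown is a placeholder for your environment::

    # /etc/rsyslog.d/client.conf
    # Forward everything over UDP to the central log server;
    # use @@ instead of @ for TCP delivery.
    *.* @192.168.1.10

The per-service facility settings referred to above are typically ``use_syslog = True`` together with a distinct ``syslog_log_facility`` value in each service's configuration file, although the exact option names have varied across releases.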
msgstr "" #: ../ops-logging-rsyslog.rst:59 msgid "rsyslog server configuration" msgstr "" #: ../ops-logging-rsyslog.rst:61 msgid "" "Designate a server as the central logging server. The best practice is to " "choose a server that is solely dedicated to this purpose. Create a file " "called ``/etc/rsyslog.d/server.conf`` with the following contents:" msgstr "" #: ../ops-logging-rsyslog.rst:87 msgid "" "This example configuration handles the nova service only. It first " "configures rsyslog to act as a server that runs on port 514. Next, it " "creates a series of logging templates. Logging templates control where " "received logs are stored. Using the last example, a nova log from c01." "example.com goes to the following locations:" msgstr "" #: ../ops-logging-rsyslog.rst:93 msgid "``/var/log/rsyslog/c01.example.com/nova.log``" msgstr "" #: ../ops-logging-rsyslog.rst:95 ../ops-logging-rsyslog.rst:101 msgid "``/var/log/rsyslog/nova.log``" msgstr "" #: ../ops-logging-rsyslog.rst:97 msgid "This is useful, as logs from c02.example.com go to:" msgstr "" #: ../ops-logging-rsyslog.rst:99 msgid "``/var/log/rsyslog/c02.example.com/nova.log``" msgstr "" #: ../ops-logging-rsyslog.rst:103 msgid "" "This configuration will result in a separate log file for each compute node " "as well as an aggregated log file that contains nova logs from all nodes." msgstr "" #: ../ops-logging.rst:3 msgid "Logging" msgstr "" #: ../ops-logging.rst:6 msgid "Where Are the Logs?" msgstr "" #: ../ops-logging.rst:8 msgid "" "Most services use the convention of writing their log files to " "subdirectories of the ``/var/log directory``, as listed in :ref:" "`table_log_locations`." msgstr "" #: ../ops-logging.rst:14 msgid "Table OpenStack log locations" msgstr "" #: ../ops-logging.rst:18 msgid "Node type" msgstr "" #: ../ops-logging.rst:19 msgid "Service" msgstr "" #: ../ops-logging.rst:20 msgid "Log location" msgstr "" #: ../ops-logging.rst:21 ../ops-logging.rst:24 ../ops-logging.rst:27 #: ../ops-logging.rst:30 ../ops-logging.rst:33 ../ops-logging.rst:36 msgid "Cloud controller" msgstr "" #: ../ops-logging.rst:22 msgid "``nova-*``" msgstr "" #: ../ops-logging.rst:23 msgid "``/var/log/nova``" msgstr "" #: ../ops-logging.rst:25 msgid "``glance-*``" msgstr "" #: ../ops-logging.rst:26 msgid "``/var/log/glance``" msgstr "" #: ../ops-logging.rst:28 msgid "``cinder-*``" msgstr "" #: ../ops-logging.rst:29 msgid "``/var/log/cinder``" msgstr "" #: ../ops-logging.rst:31 msgid "``keystone-*``" msgstr "" #: ../ops-logging.rst:32 msgid "``/var/log/keystone``" msgstr "" #: ../ops-logging.rst:34 msgid "``neutron-*``" msgstr "" #: ../ops-logging.rst:35 msgid "``/var/log/neutron``" msgstr "" #: ../ops-logging.rst:37 msgid "horizon" msgstr "" #: ../ops-logging.rst:38 msgid "``/var/log/apache2/``" msgstr "" #: ../ops-logging.rst:39 msgid "All nodes" msgstr "" #: ../ops-logging.rst:40 msgid "misc (swift, dnsmasq)" msgstr "" #: ../ops-logging.rst:41 msgid "``/var/log/syslog``" msgstr "" #: ../ops-logging.rst:42 ../ops-logging.rst:45 msgid "Compute nodes" msgstr "" #: ../ops-logging.rst:43 msgid "libvirt" msgstr "" #: ../ops-logging.rst:44 msgid "``/var/log/libvirt/libvirtd.log``" msgstr "" #: ../ops-logging.rst:46 msgid "Console (boot up messages) for VM instances:" msgstr "" #: ../ops-logging.rst:47 msgid "``/var/lib/nova/instances/instance-/console.log``" msgstr "" #: ../ops-logging.rst:48 msgid "Block Storage nodes" msgstr "" #: ../ops-logging.rst:49 ../ops-monitoring.rst:47 msgid "cinder-volume" msgstr "" #: ../ops-logging.rst:50 msgid 
"``/var/log/cinder/cinder-volume.log``" msgstr "" #: ../ops-logging.rst:53 msgid "Reading the Logs" msgstr "" #: ../ops-logging.rst:55 msgid "" "OpenStack services use the standard logging levels, at increasing severity: " "TRACE, DEBUG, INFO, AUDIT, WARNING, ERROR, and CRITICAL. That is, messages " "only appear in the logs if they are more \"severe\" than the particular log " "level, with DEBUG allowing all log statements through. For example, TRACE is " "logged only if the software has a stack trace, while INFO is logged for " "every message including those that are only for information." msgstr "" #: ../ops-logging.rst:63 msgid "" "To disable DEBUG-level logging, edit ``/etc/nova/nova.conf`` file as follows:" msgstr "" #: ../ops-logging.rst:69 msgid "" "Keystone is handled a little differently. To modify the logging level, edit " "the ``/etc/keystone/logging.conf`` file and look at the ``logger_root`` and " "``handler_file`` sections." msgstr "" #: ../ops-logging.rst:73 msgid "" "Logging for horizon is configured in ``/etc/openstack_dashboard/" "local_settings.py``. Because horizon is a Django web application, it follows " "the `Django Logging framework conventions `_." msgstr "" #: ../ops-logging.rst:78 msgid "" "The first step in finding the source of an error is typically to search for " "a CRITICAL, or ERROR message in the log starting at the bottom of the log " "file." msgstr "" #: ../ops-logging.rst:82 msgid "" "Here is an example of a log message with the corresponding ERROR (Python " "traceback) immediately following:" msgstr "" #: ../ops-logging.rst:110 msgid "" "In this example, ``cinder-volumes`` failed to start and has provided a stack " "trace, since its volume back end has been unable to set up the storage volume" "—probably because the LVM volume that is expected from the configuration " "does not exist." msgstr "" #: ../ops-logging.rst:115 msgid "Here is an example error log:" msgstr "" #: ../ops-logging.rst:122 msgid "" "In this error, a nova service has failed to connect to the RabbitMQ server " "because it got a connection refused error." msgstr "" #: ../ops-logging.rst:126 msgid "Tracing Instance Requests" msgstr "" #: ../ops-logging.rst:128 msgid "" "When an instance fails to behave properly, you will often have to trace " "activity associated with that instance across the log files of various " "``nova-*`` services and across both the cloud controller and compute nodes." msgstr "" #: ../ops-logging.rst:133 msgid "" "The typical way is to trace the UUID associated with an instance across the " "service logs." msgstr "" #: ../ops-logging.rst:136 msgid "Consider the following example:" msgstr "" #: ../ops-logging.rst:147 msgid "" "Here, the ID associated with the instance is ``faf7ded8-4a46-413b-b113-" "f19590746ffe``. If you search for this string on the cloud controller in the " "``/var/log/nova-*.log`` files, it appears in ``nova-api.log`` and ``nova-" "scheduler.log``. If you search for this on the compute nodes in ``/var/log/" "nova-*.log``, it appears in ``nova-compute.log``. If no ERROR or CRITICAL " "messages appear, the most recent log entry that reports this may provide a " "hint about what has gone wrong." msgstr "" #: ../ops-logging.rst:157 msgid "Adding Custom Logging Statements" msgstr "" #: ../ops-logging.rst:159 msgid "" "If there is not enough information in the existing logs, you may need to add " "your own custom logging statements to the ``nova-*`` services." 
msgstr "" #: ../ops-logging.rst:163 msgid "" "The source files are located in ``/usr/lib/python2.7/dist-packages/nova``." msgstr "" #: ../ops-logging.rst:166 msgid "" "To add logging statements, the following line should be near the top of the " "file. For most files, these should already be there:" msgstr "" #: ../ops-logging.rst:174 msgid "To add a DEBUG logging statement, you would do:" msgstr "" #: ../ops-logging.rst:180 msgid "" "You may notice that all the existing logging messages are preceded by an " "underscore and surrounded by parentheses, for example:" msgstr "" #: ../ops-logging.rst:187 msgid "" "This formatting is used to support translation of logging messages into " "different languages using the `gettext `_ internationalization library. You don't need to do this for " "your own custom log messages. However, if you want to contribute the code " "back to the OpenStack project that includes logging statements, you must " "surround your log messages with underscores and parentheses." msgstr "" #: ../ops-logging.rst:196 msgid "RabbitMQ Web Management Interface or rabbitmqctl" msgstr "" #: ../ops-logging.rst:198 msgid "" "Aside from connection failures, RabbitMQ log files are generally not useful " "for debugging OpenStack related issues. Instead, we recommend you use the " "RabbitMQ web management interface. Enable it on your cloud controller:" msgstr "" #: ../ops-logging.rst:211 msgid "" "The RabbitMQ web management interface is accessible on your cloud controller " "at *http://localhost:55672*." msgstr "" #: ../ops-logging.rst:216 msgid "" "Ubuntu 12.04 installs RabbitMQ version 2.7.1, which uses port 55672. " "RabbitMQ versions 3.0 and above use port 15672 instead. You can check which " "version of RabbitMQ you have running on your local Ubuntu machine by doing:" msgstr "" #: ../ops-logging.rst:226 msgid "" "An alternative to enabling the RabbitMQ web management interface is to use " "the ``rabbitmqctl`` commands. For example, :command:`rabbitmqctl " "list_queues| grep cinder` displays any messages left in the queue. If there " "are messages, it's a possible sign that cinder services didn't connect " "properly to rabbitmq and might have to be restarted." msgstr "" #: ../ops-logging.rst:233 msgid "" "Items to monitor for RabbitMQ include the number of items in each of the " "queues and the processing time statistics for the server." msgstr "" #: ../ops-logging.rst:237 msgid "Centrally Managing Logs" msgstr "" #: ../ops-logging.rst:239 msgid "" "Because your cloud is most likely composed of many servers, you must check " "logs on each of those servers to properly piece an event together. A better " "solution is to send the logs of all servers to a central location so that " "they can all be accessed from the same area." msgstr "" #: ../ops-logging.rst:245 msgid "" "The choice of central logging engine will be dependent on the operating " "system in use as well as any organizational requirements for logging tools." msgstr "" #: ../ops-logging.rst:249 msgid "Syslog choices" msgstr "" #: ../ops-logging.rst:251 msgid "" "There are a large number of syslogs engines available, each have differing " "capabilities and configuration requirements." msgstr "" #: ../ops-maintenance-complete.rst:3 msgid "Handling a Complete Failure" msgstr "" #: ../ops-maintenance-complete.rst:5 msgid "" "A common way of dealing with the recovery from a full system failure, such " "as a power outage of a data center, is to assign each service a priority, " "and restore in order. 
:ref:`table_example_priority` shows an example." msgstr "" #: ../ops-maintenance-complete.rst:12 msgid "Table. Example service restoration priority list" msgstr "" #: ../ops-maintenance-complete.rst:15 msgid "Priority" msgstr "" #: ../ops-maintenance-complete.rst:16 msgid "Services" msgstr "" #: ../ops-maintenance-complete.rst:18 msgid "Internal network connectivity" msgstr "" #: ../ops-maintenance-complete.rst:20 msgid "Backing storage services" msgstr "" #: ../ops-maintenance-complete.rst:21 msgid "3" msgstr "" #: ../ops-maintenance-complete.rst:22 msgid "Public network connectivity for user virtual machines" msgstr "" #: ../ops-maintenance-complete.rst:24 msgid "``nova-compute``, cinder hosts" msgstr "" #: ../ops-maintenance-complete.rst:25 msgid "5" msgstr "" #: ../ops-maintenance-complete.rst:26 msgid "User virtual machines" msgstr "" #: ../ops-maintenance-complete.rst:27 msgid "10" msgstr "" #: ../ops-maintenance-complete.rst:28 msgid "Message queue and database services" msgstr "" #: ../ops-maintenance-complete.rst:29 msgid "15" msgstr "" #: ../ops-maintenance-complete.rst:30 msgid "Keystone services" msgstr "" #: ../ops-maintenance-complete.rst:31 msgid "20" msgstr "" #: ../ops-maintenance-complete.rst:32 msgid "``cinder-scheduler``" msgstr "" #: ../ops-maintenance-complete.rst:33 msgid "21" msgstr "" #: ../ops-maintenance-complete.rst:34 msgid "Image Catalog and Delivery services" msgstr "" #: ../ops-maintenance-complete.rst:35 msgid "22" msgstr "" #: ../ops-maintenance-complete.rst:36 msgid "``nova-scheduler`` services" msgstr "" #: ../ops-maintenance-complete.rst:37 msgid "98" msgstr "" #: ../ops-maintenance-complete.rst:38 msgid "``cinder-api``" msgstr "" #: ../ops-maintenance-complete.rst:39 msgid "99" msgstr "" #: ../ops-maintenance-complete.rst:40 msgid "``nova-api`` services" msgstr "" #: ../ops-maintenance-complete.rst:41 msgid "100" msgstr "" #: ../ops-maintenance-complete.rst:42 msgid "Dashboard node" msgstr "" #: ../ops-maintenance-complete.rst:44 msgid "" "Use this example priority list to ensure that user-affected services are " "restored as soon as possible, but not before a stable environment is in " "place. Of course, despite being listed as a single-line item, each step " "requires significant work. For example, just after starting the database, " "you should check its integrity, or, after starting the nova services, you " "should verify that the hypervisor matches the database and fix any " "mismatches." msgstr "" #: ../ops-maintenance-compute.rst:3 msgid "Compute Node Failures and Maintenance" msgstr "" #: ../ops-maintenance-compute.rst:5 msgid "" "Sometimes a compute node either crashes unexpectedly or requires a reboot " "for maintenance reasons." 
msgstr "" #: ../ops-maintenance-compute.rst:9 ../ops-maintenance-controller.rst:17 msgid "Planned Maintenance" msgstr "" #: ../ops-maintenance-compute.rst:11 msgid "" "If you need to reboot a compute node due to planned maintenance, such as a " "software or hardware upgrade, perform the following steps:" msgstr "" #: ../ops-maintenance-compute.rst:14 msgid "" "Disable scheduling of new VMs to the node, optionally providing a reason " "comment:" msgstr "" #: ../ops-maintenance-compute.rst:22 msgid "Verify that all hosted instances have been moved off the node:" msgstr "" #: ../ops-maintenance-compute.rst:24 msgid "If your cloud is using a shared storage:" msgstr "" #: ../ops-maintenance-compute.rst:26 msgid "Get a list of instances that need to be moved:" msgstr "" #: ../ops-maintenance-compute.rst:32 msgid "Migrate all instances one by one:" msgstr "" #: ../ops-maintenance-compute.rst:38 msgid "If your cloud is not using a shared storage, run:" msgstr "" #: ../ops-maintenance-compute.rst:44 msgid "Stop the ``nova-compute`` service:" msgstr "" #: ../ops-maintenance-compute.rst:50 msgid "" "If you use a configuration-management system, such as Puppet, that ensures " "the ``nova-compute`` service is always running, you can temporarily move the " "``init`` files:" msgstr "" #: ../ops-maintenance-compute.rst:60 msgid "" "Shut down your compute node, perform the maintenance, and turn the node back " "on." msgstr "" #: ../ops-maintenance-compute.rst:63 msgid "Start the ``nova-compute`` service:" msgstr "" #: ../ops-maintenance-compute.rst:69 msgid "You can re-enable the ``nova-compute`` service by undoing the commands:" msgstr "" #: ../ops-maintenance-compute.rst:76 msgid "Enable scheduling of VMs to the node:" msgstr "" #: ../ops-maintenance-compute.rst:82 msgid "Optionally, migrate the instances back to their original compute node." msgstr "" #: ../ops-maintenance-compute.rst:85 msgid "After a Compute Node Reboots" msgstr "" #: ../ops-maintenance-compute.rst:87 msgid "" "When you reboot a compute node, first verify that it booted successfully. " "This includes ensuring that the ``nova-compute`` service is running:" msgstr "" #: ../ops-maintenance-compute.rst:96 msgid "Also ensure that it has successfully connected to the AMQP server:" msgstr "" #: ../ops-maintenance-compute.rst:103 msgid "" "After the compute node is successfully running, you must deal with the " "instances that are hosted on that compute node because none of them are " "running. Depending on your SLA with your users or customers, you might have " "to start each instance and ensure that they start correctly." msgstr "" #: ../ops-maintenance-compute.rst:109 ../ops-quotas.rst:103 #: ../ops-user-facing-operations.rst:1629 msgid "Instances" msgstr "" #: ../ops-maintenance-compute.rst:111 msgid "" "You can create a list of instances that are hosted on the compute node by " "performing the following command:" msgstr "" #: ../ops-maintenance-compute.rst:118 msgid "" "After you have the list, you can use the :command:`openstack` command to " "start each instance:" msgstr "" #: ../ops-maintenance-compute.rst:127 msgid "" "Any time an instance shuts down unexpectedly, it might have problems on " "boot. For example, the instance might require an ``fsck`` on the root " "partition. If this happens, the user can use the dashboard VNC console to " "fix this." 
msgstr "" #: ../ops-maintenance-compute.rst:132 msgid "" "If an instance does not boot, meaning ``virsh list`` never shows the " "instance as even attempting to boot, do the following on the compute node:" msgstr "" #: ../ops-maintenance-compute.rst:140 msgid "" "Try executing the :command:`openstack server reboot` command again. You " "should see an error message about why the instance was not able to boot." msgstr "" #: ../ops-maintenance-compute.rst:143 msgid "" "In most cases, the error is the result of something in libvirt's XML file " "(``/etc/libvirt/qemu/instance-xxxxxxxx.xml``) that no longer exists. You can " "enforce re-creation of the XML file as well as rebooting the instance by " "running the following command:" msgstr "" #: ../ops-maintenance-compute.rst:153 msgid "Inspecting and Recovering Data from Failed Instances" msgstr "" #: ../ops-maintenance-compute.rst:155 msgid "" "In some scenarios, instances are running but are inaccessible through SSH " "and do not respond to any command. The VNC console could be displaying a " "boot failure or kernel panic error messages. This could be an indication of " "file system corruption on the VM itself. If you need to recover files or " "inspect the content of the instance, qemu-nbd can be used to mount the disk." msgstr "" #: ../ops-maintenance-compute.rst:164 msgid "If you access or view the user's content and data, get approval first!" msgstr "" #: ../ops-maintenance-compute.rst:166 msgid "" "To access the instance's disk (``/var/lib/nova/instances/instance-xxxxxx/" "disk``), use the following steps:" msgstr "" #: ../ops-maintenance-compute.rst:170 msgid "Suspend the instance using the ``virsh`` command." msgstr "" #: ../ops-maintenance-compute.rst:172 msgid "Connect the qemu-nbd device to the disk." msgstr "" #: ../ops-maintenance-compute.rst:174 ../ops-maintenance-compute.rst:236 msgid "Mount the qemu-nbd device." msgstr "" #: ../ops-maintenance-compute.rst:176 msgid "Unmount the device after inspecting." msgstr "" #: ../ops-maintenance-compute.rst:178 msgid "Disconnect the qemu-nbd device." msgstr "" #: ../ops-maintenance-compute.rst:180 msgid "Resume the instance." msgstr "" #: ../ops-maintenance-compute.rst:182 msgid "" "If you do not follow last three steps, OpenStack Compute cannot manage the " "instance any longer. It fails to respond to any command issued by OpenStack " "Compute, and it is marked as shut down." msgstr "" #: ../ops-maintenance-compute.rst:186 msgid "" "Once you mount the disk file, you should be able to access it and treat it " "as a collection of normal directories with files and a directory structure. " "However, we do not recommend that you edit or touch any files because this " "could change the :term:`access control lists (ACLs) ` that are used to determine which accounts can perform what " "operations on files and directories. Changing ACLs can make the instance " "unbootable if it is not already." msgstr "" #: ../ops-maintenance-compute.rst:195 msgid "" "Suspend the instance using the :command:`virsh` command, taking note of the " "internal ID:" msgstr "" #: ../ops-maintenance-compute.rst:210 msgid "" "Find the ID for each instance by listing the server IDs using the following " "command:" msgstr "" #: ../ops-maintenance-compute.rst:223 msgid "Connect the qemu-nbd device to the disk:" msgstr "" #: ../ops-maintenance-compute.rst:238 msgid "" "The qemu-nbd device tries to export the instance disk's different partitions " "as separate devices. 
For example, if vda is the disk and vda1 is the root " "partition, qemu-nbd exports the device as ``/dev/nbd0`` and ``/dev/nbd0p1``, " "respectively:" msgstr "" #: ../ops-maintenance-compute.rst:247 msgid "" "You can now access the contents of ``/mnt``, which correspond to the first " "partition of the instance's disk." msgstr "" #: ../ops-maintenance-compute.rst:250 msgid "" "To examine the secondary or ephemeral disk, use an alternate mount point if " "you want both primary and secondary drives mounted at the same time:" msgstr "" #: ../ops-maintenance-compute.rst:282 msgid "" "Once you have completed the inspection, unmount the mount point and release " "the qemu-nbd device:" msgstr "" #: ../ops-maintenance-compute.rst:291 msgid "Resume the instance using :command:`virsh`:" msgstr "" #: ../ops-maintenance-compute.rst:306 msgid "Managing floating IP addresses between instances" msgstr "" #: ../ops-maintenance-compute.rst:308 msgid "" "In an elastic cloud environment using the ``Public_AGILE`` network, each " "instance has publicly accessible IPv4 and IPv6 addresses. This network does " "not support the concept of OpenStack floating IP addresses that can easily " "be attached, removed, and transferred between instances. However, there is " "a workaround using neutron ports, which contain the IPv4 and IPv6 addresses." msgstr "" #: ../ops-maintenance-compute.rst:314 msgid "**Create a port that can be reused**" msgstr "" #: ../ops-maintenance-compute.rst:316 msgid "Create a port on the ``Public_AGILE`` network:" msgstr "" #: ../ops-maintenance-compute.rst:361 msgid "" "If you know the fully qualified domain name (FQDN) that will be assigned to " "the IP address, create the port with the same name:" msgstr "" #: ../ops-maintenance-compute.rst:407 msgid "Use the port when creating an instance:" msgstr "" #: ../ops-maintenance-compute.rst:415 msgid "Verify the instance has the correct IP address:" msgstr "" #: ../ops-maintenance-compute.rst:453 msgid "Check the port connection using the netcat utility:" msgstr "" #: ../ops-maintenance-compute.rst:462 msgid "**Detach a port from an instance**" msgstr "" #: ../ops-maintenance-compute.rst:464 msgid "Find the port corresponding to the instance. For example:" msgstr "" #: ../ops-maintenance-compute.rst:472 msgid "" "Run the :command:`openstack port set` command to remove the port from the " "instance:" msgstr "" #: ../ops-maintenance-compute.rst:480 msgid "" "Delete the instance and create a new instance using the ``--nic port-id`` " "option." msgstr "" #: ../ops-maintenance-compute.rst:483 msgid "" "**Retrieve an IP address when an instance is deleted before detaching a " "port**" msgstr "" #: ../ops-maintenance-compute.rst:486 msgid "" "The following procedure is a possible workaround to retrieve an IP address " "when an instance has been deleted with the port still attached:" msgstr "" #: ../ops-maintenance-compute.rst:489 msgid "Create several neutron ports:" msgstr "" #: ../ops-maintenance-compute.rst:496 msgid "Check the ports for the lost IP address and update the name:" msgstr "" #: ../ops-maintenance-compute.rst:503 msgid "Delete the ports that are not needed:" msgstr "" #: ../ops-maintenance-compute.rst:510 msgid "If you still cannot find the lost IP address, repeat these steps."
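A minimal sketch of the port workaround described above, assuming the ``Public_AGILE`` network and illustrative port and server names:

.. code-block:: console

   # Create a reusable port that holds the public IPv4 and IPv6 addresses
   $ openstack port create --network Public_AGILE myserver-port

   # Boot an instance attached to that port (flavor and image are illustrative)
   $ openstack server create --flavor m1.small --image cirros \
     --nic port-id=<port-uuid> myserver

Because the addresses live on the port rather than on the instance, deleting the instance and booting a replacement with the same ``--nic port-id`` preserves them.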
msgstr "" #: ../ops-maintenance-compute.rst:516 msgid "Volumes" msgstr "" #: ../ops-maintenance-compute.rst:518 msgid "" "If the affected instances also had attached volumes, first generate a list " "of instance and volume UUIDs:" msgstr "" #: ../ops-maintenance-compute.rst:530 msgid "You should see a result similar to the following:" msgstr "" #: ../ops-maintenance-compute.rst:541 msgid "" "Next, manually detach and reattach the volumes, where X is the proper mount " "point:" msgstr "" #: ../ops-maintenance-compute.rst:549 msgid "" "Be sure that the instance has successfully booted and is at a login screen " "before doing the above." msgstr "" #: ../ops-maintenance-compute.rst:553 msgid "Total Compute Node Failure" msgstr "" #: ../ops-maintenance-compute.rst:555 msgid "" "Compute nodes can fail the same way a cloud controller can fail. A " "motherboard failure or some other type of hardware failure can cause an " "entire compute node to go offline. When this happens, all instances running " "on that compute node will not be available. Just like with a cloud " "controller failure, if your infrastructure monitoring does not detect a " "failed compute node, your users will notify you because of their lost " "instances." msgstr "" #: ../ops-maintenance-compute.rst:563 msgid "" "If a compute node fails and won't be fixed for a few hours (or at all), you " "can relaunch all instances that are hosted on the failed node if you use " "shared storage for ``/var/lib/nova/instances``." msgstr "" #: ../ops-maintenance-compute.rst:567 msgid "" "To do this, generate a list of instance UUIDs that are hosted on the failed " "node by running the following query on the nova database:" msgstr "" #: ../ops-maintenance-compute.rst:575 msgid "" "Next, update the nova database to indicate that all instances that used to " "be hosted on c01.example.com are now hosted on c02.example.com:" msgstr "" #: ../ops-maintenance-compute.rst:583 msgid "" "If you're using the Networking service ML2 plug-in, update the Networking " "service database to indicate that all ports that used to be hosted on c01." "example.com are now hosted on c02.example.com:" msgstr "" #: ../ops-maintenance-compute.rst:594 msgid "" "After that, use the :command:`openstack` command to reboot all instances " "that were on c01.example.com while regenerating their XML files at the same " "time:" msgstr "" #: ../ops-maintenance-compute.rst:602 msgid "" "Finally, reattach volumes using the same method described in the section :" "ref:`volumes`." msgstr "" #: ../ops-maintenance-compute.rst:606 msgid "/var/lib/nova/instances" msgstr "" #: ../ops-maintenance-compute.rst:608 msgid "" "It's worth mentioning this directory in the context of failed compute nodes. " "This directory contains the libvirt KVM file-based disk images for the " "instances that are hosted on that compute node. If you are not running your " "cloud in a shared storage environment, this directory is unique across all " "compute nodes." msgstr "" #: ../ops-maintenance-compute.rst:614 msgid "``/var/lib/nova/instances`` contains two types of directories." msgstr "" #: ../ops-maintenance-compute.rst:616 msgid "" "The first is the ``_base`` directory. This contains all the cached base " "images from glance for each unique image that has been launched on that " "compute node. Files ending in ``_20`` (or a different number) are the " "ephemeral base images." msgstr "" #: ../ops-maintenance-compute.rst:621 msgid "" "The other directories are titled ``instance-xxxxxxxx``. 
These directories " "correspond to instances running on that compute node. The files inside are " "related to one of the files in the ``_base`` directory. They're essentially " "differential-based files containing only the changes made from the original " "``_base`` directory." msgstr "" #: ../ops-maintenance-compute.rst:627 msgid "" "All files and directories in ``/var/lib/nova/instances`` are uniquely named. " "The files in \_base are uniquely titled for the glance image that they are " "based on, and the directory names ``instance-xxxxxxxx`` are uniquely titled " "for that particular instance. For example, if you copy all data from ``/var/" "lib/nova/instances`` on one compute node to another, you do not overwrite " "any files or cause any damage to images that have the same unique name, " "because they are essentially the same file." msgstr "" #: ../ops-maintenance-compute.rst:636 msgid "" "Although this method is not documented or supported, you can use it when " "your compute node is permanently offline but you have instances locally " "stored on it." msgstr "" #: ../ops-maintenance-configuration.rst:5 msgid "" "Maintaining an OpenStack cloud requires that you manage multiple physical " "servers, and this number might grow over time. Because managing nodes " "manually is error prone, we strongly recommend that you use a configuration-" "management tool. These tools automate the process of ensuring that all your " "nodes are configured properly and encourage you to maintain your " "configuration information (such as packages and configuration options) in a " "version-controlled repository." msgstr "" #: ../ops-maintenance-configuration.rst:15 msgid "" "Several configuration-management tools are available, and this guide does " "not recommend a specific one. The most popular ones in the OpenStack " "community are:" msgstr "" #: ../ops-maintenance-configuration.rst:19 msgid "" "`Puppet `_, with available `OpenStack Puppet " "modules `_" msgstr "" #: ../ops-maintenance-configuration.rst:21 msgid "" "`Ansible `_, with `OpenStack Ansible `_" msgstr "" #: ../ops-maintenance-configuration.rst:23 msgid "" "`Chef `_, with available `OpenStack Chef " "recipes `_" msgstr "" #: ../ops-maintenance-configuration.rst:26 msgid "" "Other newer configuration tools include `Juju `_ " "and `Salt `_, and more mature configuration-" "management tools include `CFEngine `_ and `Bcfg2 " "`_." msgstr "" #: ../ops-maintenance-controller.rst:3 msgid "Cloud Controller and Storage Proxy Failures and Maintenance" msgstr "" #: ../ops-maintenance-controller.rst:5 msgid "" "The cloud controller and storage proxy are very similar to each other when " "it comes to expected and unexpected downtime. One of each server type " "typically runs in the cloud, which makes them very noticeable when they are " "not running." msgstr "" #: ../ops-maintenance-controller.rst:10 msgid "" "For the cloud controller, the good news is that if your cloud is using the " "FlatDHCP multi-host HA network mode, existing instances and volumes continue " "to operate while the cloud controller is offline. For the storage proxy, " "however, no storage traffic is possible until it is back up and running." msgstr "" #: ../ops-maintenance-controller.rst:19 msgid "" "One way to plan for cloud controller or storage proxy maintenance is to " "simply do it off-hours, such as at 1 a.m. or 2 a.m. This strategy affects " "fewer users.
If your cloud controller or storage proxy is too important to " "have unavailable at any point in time, you must look into high-availability " "options." msgstr "" #: ../ops-maintenance-controller.rst:26 msgid "Rebooting a Cloud Controller or Storage Proxy" msgstr "" #: ../ops-maintenance-controller.rst:28 msgid "" "All in all, just issue the :command:`reboot` command. The operating system " "cleanly shuts down services and then automatically reboots. If you want to " "be very thorough, run your backup jobs just before you reboot." msgstr "" #: ../ops-maintenance-controller.rst:33 msgid "" "After a cloud controller reboots, ensure that all required services were " "successfully started. The following commands use :command:`ps` and :command:" "`grep` to determine if nova, glance, and keystone are currently running:" msgstr "" #: ../ops-maintenance-controller.rst:45 msgid "" "Also check that all services are functioning. The following set of commands " "sources the ``openrc`` file, then runs some basic glance, nova, and " "openstack commands. If the commands work as expected, you can be confident " "that those services are in working condition:" msgstr "" #: ../ops-maintenance-controller.rst:57 msgid "" "For the storage proxy, ensure that the :term:`Object Storage service ` has resumed:" msgstr "" #: ../ops-maintenance-controller.rst:64 msgid "Also check that it is functioning:" msgstr "" #: ../ops-maintenance-controller.rst:71 msgid "Total Cloud Controller Failure" msgstr "" #: ../ops-maintenance-controller.rst:73 msgid "" "The cloud controller could completely fail if, for example, its motherboard " "goes bad. Users will immediately notice the loss of a cloud controller since " "it provides core functionality to your cloud environment. If your " "infrastructure monitoring does not alert you that your cloud controller has " "failed, your users definitely will. Unfortunately, this is a rough " "situation. The cloud controller is an integral part of your cloud. If you " "have only one controller, you will have many missing services if it goes " "down." msgstr "" #: ../ops-maintenance-controller.rst:82 msgid "" "To avoid this situation, create a highly available cloud controller cluster. " "This is outside the scope of this document, but you can read more in the " "`OpenStack High Availability Guide `_." msgstr "" #: ../ops-maintenance-controller.rst:87 msgid "" "The next best approach is to use a configuration-management tool, such as " "Puppet, to automatically build a cloud controller. This should not take more " "than 15 minutes if you have a spare server available. After the controller " "rebuilds, restore any backups taken (see :doc:`ops-backup-recovery`)." msgstr "" #: ../ops-maintenance-controller.rst:93 msgid "" "Also, in practice, the ``nova-compute`` services on the compute nodes do not " "always reconnect cleanly to rabbitmq hosted on the controller when it comes " "back up after a long reboot; a restart of the nova services on the compute " "nodes is required." msgstr "" #: ../ops-maintenance-database.rst:3 msgid "Databases" msgstr "" #: ../ops-maintenance-database.rst:5 msgid "" "Almost all OpenStack components have an underlying database to store " "persistent information. Usually this database is MySQL. Normal MySQL " "administration is applicable to these databases. OpenStack does not " "configure the databases in any unusual way. Basic administration includes " "performance tweaking, high availability, backup, recovery, and repairing.
" "For more information, see a standard MySQL administration guide." msgstr "" #: ../ops-maintenance-database.rst:12 msgid "" "You can perform a couple of tricks with the database to either more quickly " "retrieve information or fix a data inconsistency error—for example, an " "instance was terminated, but the status was not updated in the database. " "These tricks are discussed throughout this book." msgstr "" #: ../ops-maintenance-database.rst:18 msgid "Database Connectivity" msgstr "" #: ../ops-maintenance-database.rst:20 msgid "" "Review the component's configuration file to see how each OpenStack " "component accesses its corresponding database. Look for a ``connection`` " "option. The following command uses ``grep`` to display the SQL connection " "string for nova, glance, cinder, and keystone:" msgstr "" #: ../ops-maintenance-database.rst:38 msgid "The connection strings take this format:" msgstr "" #: ../ops-maintenance-database.rst:45 msgid "Performance and Optimizing" msgstr "" #: ../ops-maintenance-database.rst:47 msgid "" "As your cloud grows, MySQL is utilized more and more. If you suspect that " "MySQL might be becoming a bottleneck, you should start researching MySQL " "optimization. The MySQL manual has an entire section dedicated to this " "topic: `Optimization Overview `_." msgstr "" #: ../ops-maintenance-determine.rst:3 msgid "Determining Which Component Is Broken" msgstr "" #: ../ops-maintenance-determine.rst:5 msgid "" "OpenStack's collection of different components interact with each other " "strongly. For example, uploading an image requires interaction from ``nova-" "api``, ``glance-api``, ``glance-registry``, keystone, and potentially " "``swift-proxy``. As a result, it is sometimes difficult to determine exactly " "where problems lie. Assisting in this is the purpose of this section." msgstr "" #: ../ops-maintenance-determine.rst:13 msgid "Tailing Logs" msgstr "" #: ../ops-maintenance-determine.rst:15 msgid "" "The first place to look is the log file related to the command you are " "trying to run. For example, if ``openstack server list`` is failing, try " "tailing a nova log file and running the command again:" msgstr "" #: ../ops-maintenance-determine.rst:19 ../ops-maintenance-determine.rst:38 msgid "Terminal 1:" msgstr "" #: ../ops-maintenance-determine.rst:25 ../ops-maintenance-determine.rst:44 msgid "Terminal 2:" msgstr "" #: ../ops-maintenance-determine.rst:31 msgid "" "Look for any errors or traces in the log file. For more information, see :" "doc:`ops-logging-monitoring`." msgstr "" #: ../ops-maintenance-determine.rst:34 msgid "" "If the error indicates that the problem is with another component, switch to " "tailing that component's log file. For example, if nova cannot access " "glance, look at the ``glance-api`` log:" msgstr "" #: ../ops-maintenance-determine.rst:50 msgid "Wash, rinse, and repeat until you find the core cause of the problem." msgstr "" #: ../ops-maintenance-determine.rst:53 msgid "Running Daemons on the CLI" msgstr "" #: ../ops-maintenance-determine.rst:55 msgid "" "Unfortunately, sometimes the error is not apparent from the log files. In " "this case, switch tactics and use a different command; maybe run the service " "directly on the command line. For example, if the ``glance-api`` service " "refuses to start and stay running, try launching the daemon from the command " "line:" msgstr "" #: ../ops-maintenance-determine.rst:65 msgid "This might print the error and cause of the problem." 
msgstr "" #: ../ops-maintenance-determine.rst:69 msgid "" "The ``-H`` flag is required when running the daemons with sudo because some " "daemons will write files relative to the user's home directory, and this " "write may fail if ``-H`` is left off." msgstr "" #: ../ops-maintenance-determine.rst:75 msgid "**Example of Complexity**" msgstr "" #: ../ops-maintenance-determine.rst:77 msgid "" "One morning, a compute node failed to run any instances. The log files were " "a bit vague, claiming that a certain instance was unable to be started. This " "ended up being a red herring because the instance was simply the first " "instance in alphabetical order, so it was the first instance that ``nova-" "compute`` would touch." msgstr "" #: ../ops-maintenance-determine.rst:83 msgid "" "Further troubleshooting showed that libvirt was not running at all. This " "made more sense. If libvirt wasn't running, then no instance could be " "virtualized through KVM. Upon trying to start libvirt, it would silently die " "immediately. The libvirt logs did not explain why." msgstr "" #: ../ops-maintenance-determine.rst:88 msgid "" "Next, the ``libvirtd`` daemon was run on the command line. Finally a helpful " "error message: it could not connect to d-bus. As ridiculous as it sounds, " "libvirt, and thus ``nova-compute``, relies on d-bus and somehow d-bus " "crashed. Simply starting d-bus set the entire chain back on track, and soon " "everything was back up and running." msgstr "" #: ../ops-maintenance-hardware.rst:3 msgid "Working with Hardware" msgstr "" #: ../ops-maintenance-hardware.rst:5 msgid "" "As for your initial deployment, you should ensure that all hardware is " "appropriately burned in before adding it to production. Run software that " "uses the hardware to its limits—maxing out RAM, CPU, disk, and network. Many " "options are available, and normally double as benchmark software, so you " "also get a good idea of the performance of your system." msgstr "" #: ../ops-maintenance-hardware.rst:13 msgid "Adding a Compute Node" msgstr "" #: ../ops-maintenance-hardware.rst:15 msgid "" "If you find that you have reached or are reaching the capacity limit of your " "computing resources, you should plan to add additional compute nodes. Adding " "more nodes is quite easy. The process for adding compute nodes is the same " "as when the initial compute nodes were deployed to your cloud: use an " "automated deployment system to bootstrap the bare-metal server with the " "operating system and then have a configuration-management system install and " "configure OpenStack Compute. Once the Compute service has been installed and " "configured in the same way as the other compute nodes, it automatically " "attaches itself to the cloud. The cloud controller notices the new node(s) " "and begins scheduling instances to launch there." msgstr "" #: ../ops-maintenance-hardware.rst:27 msgid "" "If your OpenStack Block Storage nodes are separate from your compute nodes, " "the same procedure still applies because the same queuing and polling system " "is used in both services." msgstr "" #: ../ops-maintenance-hardware.rst:31 msgid "" "We recommend that you use the same hardware for new compute and block " "storage nodes. At the very least, ensure that the CPUs are similar in the " "compute nodes to not break live migration." 
msgstr "" #: ../ops-maintenance-hardware.rst:36 msgid "Adding an Object Storage Node" msgstr "" #: ../ops-maintenance-hardware.rst:38 msgid "" "Adding a new object storage node is different from adding compute or block " "storage nodes. You still want to initially configure the server by using " "your automated deployment and configuration-management systems. After that " "is done, you need to add the local disks of the object storage node into the " "object storage ring. The exact command to do this is the same command that " "was used to add the initial disks to the ring. Simply rerun this command on " "the object storage proxy server for all disks on the new object storage " "node. Once this has been done, rebalance the ring and copy the resulting " "ring files to the other storage nodes." msgstr "" #: ../ops-maintenance-hardware.rst:50 msgid "" "If your new object storage node has a different number of disks than the " "original nodes have, the command to add the new node is different from the " "original commands. These parameters vary from environment to environment." msgstr "" #: ../ops-maintenance-hardware.rst:56 msgid "Replacing Components" msgstr "" #: ../ops-maintenance-hardware.rst:58 msgid "" "Failures of hardware are common in large-scale deployments such as an " "infrastructure cloud. Consider your processes and balance time saving " "against availability. For example, an Object Storage cluster can easily live " "with dead disks in it for some period of time if it has sufficient capacity. " "Or, if your compute installation is not full, you could consider live " "migrating instances off a host with a RAM failure until you have time to " "deal with the problem." msgstr "" #: ../ops-maintenance-hdmwy.rst:3 msgid "HDWMY" msgstr "" #: ../ops-maintenance-hdmwy.rst:5 msgid "" "Here's a quick list of various to-do items for each hour, day, week, month, " "and year. Please note that these tasks are neither required nor definitive " "but helpful ideas:" msgstr "" #: ../ops-maintenance-hdmwy.rst:10 msgid "Hourly" msgstr "" #: ../ops-maintenance-hdmwy.rst:12 msgid "Check your monitoring system for alerts and act on them." msgstr "" #: ../ops-maintenance-hdmwy.rst:13 msgid "Check your ticket queue for new tickets." msgstr "" #: ../ops-maintenance-hdmwy.rst:16 msgid "Daily" msgstr "" #: ../ops-maintenance-hdmwy.rst:18 msgid "Check for instances in a failed or weird state and investigate why." msgstr "" #: ../ops-maintenance-hdmwy.rst:19 msgid "Check for security patches and apply them as needed." msgstr "" #: ../ops-maintenance-hdmwy.rst:22 msgid "Weekly" msgstr "" #: ../ops-maintenance-hdmwy.rst:24 msgid "Check cloud usage:" msgstr "" #: ../ops-maintenance-hdmwy.rst:26 msgid "User quotas" msgstr "" #: ../ops-maintenance-hdmwy.rst:27 msgid "Disk space" msgstr "" #: ../ops-maintenance-hdmwy.rst:28 msgid "Image usage" msgstr "" #: ../ops-maintenance-hdmwy.rst:29 msgid "Large instances" msgstr "" #: ../ops-maintenance-hdmwy.rst:30 msgid "Network usage (bandwidth and IP usage)" msgstr "" #: ../ops-maintenance-hdmwy.rst:32 msgid "Verify your alert mechanisms are still working." msgstr "" #: ../ops-maintenance-hdmwy.rst:35 msgid "Monthly" msgstr "" #: ../ops-maintenance-hdmwy.rst:37 msgid "Check usage and trends over the past month." msgstr "" #: ../ops-maintenance-hdmwy.rst:38 msgid "Check for user accounts that should be removed." msgstr "" #: ../ops-maintenance-hdmwy.rst:39 msgid "Check for operator accounts that should be removed." 
msgstr "" #: ../ops-maintenance-hdmwy.rst:42 msgid "Quarterly" msgstr "" #: ../ops-maintenance-hdmwy.rst:44 msgid "Review usage and trends over the past quarter." msgstr "" #: ../ops-maintenance-hdmwy.rst:45 msgid "Prepare any quarterly reports on usage and statistics." msgstr "" #: ../ops-maintenance-hdmwy.rst:46 msgid "Review and plan any necessary cloud additions." msgstr "" #: ../ops-maintenance-hdmwy.rst:47 msgid "Review and plan any major OpenStack upgrades." msgstr "" #: ../ops-maintenance-hdmwy.rst:50 msgid "Semiannually" msgstr "" #: ../ops-maintenance-hdmwy.rst:52 msgid "Upgrade OpenStack." msgstr "" #: ../ops-maintenance-hdmwy.rst:53 msgid "" "Clean up after an OpenStack upgrade (any unused or new services to be aware " "of?)." msgstr "" #: ../ops-maintenance-rabbitmq.rst:3 msgid "RabbitMQ troubleshooting" msgstr "" #: ../ops-maintenance-rabbitmq.rst:5 msgid "This section provides tips on resolving common RabbitMQ issues." msgstr "" #: ../ops-maintenance-rabbitmq.rst:8 msgid "RabbitMQ service hangs" msgstr "" #: ../ops-maintenance-rabbitmq.rst:10 msgid "" "It is quite common for the RabbitMQ service to hang when it is restarted or " "stopped. Therefore, it is highly recommended that you manually restart " "RabbitMQ on each controller node." msgstr "" #: ../ops-maintenance-rabbitmq.rst:16 msgid "" "The RabbitMQ service name may vary depending on your operating system or " "vendor who supplies your RabbitMQ service." msgstr "" #: ../ops-maintenance-rabbitmq.rst:19 msgid "" "Restart the RabbitMQ service on the first controller node. The :command:" "`service rabbitmq-server restart` command may not work in certain " "situations, so it is best to use:" msgstr "" #: ../ops-maintenance-rabbitmq.rst:29 msgid "" "If the service refuses to stop, then run the :command:`pkill` command to " "stop the service, then restart the service:" msgstr "" #: ../ops-maintenance-rabbitmq.rst:37 msgid "Verify RabbitMQ processes are running:" msgstr "" #: ../ops-maintenance-rabbitmq.rst:45 msgid "" "If there are errors, run the :command:`cluster_status` command to make sure " "there are no partitions:" msgstr "" #: ../ops-maintenance-rabbitmq.rst:52 msgid "" "For more information, see `RabbitMQ documentation `_." msgstr "" #: ../ops-maintenance-rabbitmq.rst:55 msgid "" "Go back to the first step and try restarting the RabbitMQ service again. If " "you still have errors, remove the contents in the ``/var/lib/rabbitmq/mnesia/" "`` directory between stopping and starting the RabbitMQ service." msgstr "" #: ../ops-maintenance-rabbitmq.rst:60 msgid "" "If there are no errors, restart the RabbitMQ service on the next controller " "node." msgstr "" #: ../ops-maintenance-rabbitmq.rst:63 msgid "" "Since the Liberty release, OpenStack services will automatically recover " "from a RabbitMQ outage. You should only consider restarting OpenStack " "services after checking if RabbitMQ heartbeat functionality is enabled, and " "if OpenStack services are not picking up messages from RabbitMQ queues." msgstr "" #: ../ops-maintenance-rabbitmq.rst:69 msgid "RabbitMQ alerts" msgstr "" #: ../ops-maintenance-rabbitmq.rst:71 msgid "" "If you receive alerts for RabbitMQ, take the following steps to troubleshoot " "and resolve the issue:" msgstr "" #: ../ops-maintenance-rabbitmq.rst:74 msgid "Determine which servers the RabbitMQ alarms are coming from." msgstr "" #: ../ops-maintenance-rabbitmq.rst:75 msgid "Attempt to boot a nova instance in the affected environment." 
msgstr "" #: ../ops-maintenance-rabbitmq.rst:76 msgid "If you cannot launch an instance, continue to troubleshoot the issue." msgstr "" #: ../ops-maintenance-rabbitmq.rst:77 msgid "" "Log in to each of the controller nodes for the affected environment, and " "check the ``/var/log/rabbitmq`` log files for any reported issues." msgstr "" #: ../ops-maintenance-rabbitmq.rst:79 msgid "Look for connection issues identified in the log files." msgstr "" #: ../ops-maintenance-rabbitmq.rst:80 msgid "" "For each controller node in your environment, view the ``/etc/init.d`` " "directory to check it contains nova*, cinder*, neutron*, or glance*. Also " "check RabbitMQ message queues that are growing without being consumed which " "will indicate which OpenStack service is affected. Restart the affected " "OpenStack service." msgstr "" #: ../ops-maintenance-rabbitmq.rst:85 msgid "" "For each compute node your environment, view the ``/etc/init.d`` directory " "and check if it contains nova*, cinder*, neutron*, or glance*, Also check " "RabbitMQ message queues that are growing without being consumed which will " "indicate which OpenStack services are affected. Restart the affected " "OpenStack services." msgstr "" #: ../ops-maintenance-rabbitmq.rst:90 msgid "" "Open OpenStack Dashboard and launch an instance. If the instance launches, " "the issue is resolved." msgstr "" #: ../ops-maintenance-rabbitmq.rst:92 msgid "" "If you cannot launch an instance, check the ``/var/log/rabbitmq`` log files " "for reported connection issues." msgstr "" #: ../ops-maintenance-rabbitmq.rst:94 msgid "Restart the RabbitMQ service on all of the controller nodes:" msgstr "" #: ../ops-maintenance-rabbitmq.rst:103 msgid "" "This step applies if you have already restarted only the OpenStack " "components, and cannot connect to the RabbitMQ service." msgstr "" #: ../ops-maintenance-rabbitmq.rst:106 msgid "Repeat steps 7-8." msgstr "" #: ../ops-maintenance-rabbitmq.rst:109 msgid "Excessive database management memory consumption" msgstr "" #: ../ops-maintenance-rabbitmq.rst:111 msgid "" "Since the Liberty release, OpenStack with RabbitMQ 3.4.x or 3.6.x has an " "issue with the management database consuming the memory allocated to " "RabbitMQ. This is caused by statistics collection and processing. When a " "single node with RabbitMQ reaches its memory threshold, all exchange and " "queue processing is halted until the memory alarm recovers." msgstr "" #: ../ops-maintenance-rabbitmq.rst:117 msgid "To address this issue:" msgstr "" #: ../ops-maintenance-rabbitmq.rst:119 msgid "Check memory consumption:" msgstr "" #: ../ops-maintenance-rabbitmq.rst:125 msgid "" "Edit the ``/etc/rabbitmq/rabbitmq.config`` configuration file, and change " "the ``collect_statistics_interval`` parameter between 30000-60000 " "milliseconds. Alternatively you can turn off statistics collection by " "setting ``collect_statistics`` parameter to \"none\"." msgstr "" #: ../ops-maintenance-rabbitmq.rst:131 msgid "File descriptor limits when scaling a cloud environment" msgstr "" #: ../ops-maintenance-rabbitmq.rst:133 msgid "" "A cloud environment that is scaled to a certain size will require the file " "descriptor limits to be adjusted." msgstr "" #: ../ops-maintenance-rabbitmq.rst:136 msgid "" "Run the :command:`rabbitmqctl status` to view the current file descriptor " "limits:" msgstr "" #: ../ops-maintenance-rabbitmq.rst:147 msgid "" "Adjust the appropriate limits in the ``/etc/security/limits.conf`` " "configuration file." 
msgstr "" #: ../ops-maintenance-slow.rst:3 msgid "What to do when things are running slowly" msgstr "" #: ../ops-maintenance-slow.rst:5 msgid "" "When you are getting slow responses from various services, it can be hard to " "know where to start looking. The first thing to check is the extent of the " "slowness: is it specific to a single service, or varied among different " "services? If your problem is isolated to a specific service, it can " "temporarily be fixed by restarting the service, but that is often only a fix " "for the symptom and not the actual problem." msgstr "" #: ../ops-maintenance-slow.rst:12 msgid "" "This is a collection of ideas from experienced operators on common things to " "look at that may be the cause of slowness. It is not, however, designed to " "be an exhaustive list." msgstr "" #: ../ops-maintenance-slow.rst:17 msgid "OpenStack Identity service" msgstr "" #: ../ops-maintenance-slow.rst:19 msgid "" "If OpenStack :term:`Identity service ` is " "responding slowly, it could be due to the token table getting large. This " "can be fixed by running the :command:`keystone-manage token_flush` command." msgstr "" #: ../ops-maintenance-slow.rst:24 msgid "" "Additionally, for Identity-related issues, try the tips in :ref:" "`sql_backend`." msgstr "" #: ../ops-maintenance-slow.rst:28 msgid "OpenStack Image service" msgstr "" #: ../ops-maintenance-slow.rst:30 msgid "" "OpenStack :term:`Image service ` can be slowed down " "by things related to the Identity service, but the Image service itself can " "be slowed down if connectivity to the back-end storage in use is slow or " "otherwise problematic. For example, your back-end NFS server might have gone " "down." msgstr "" #: ../ops-maintenance-slow.rst:36 msgid "OpenStack Block Storage service" msgstr "" #: ../ops-maintenance-slow.rst:38 msgid "" "OpenStack :term:`Block Storage service ` is " "similar to the Image service, so start by checking Identity-related " "services, and the back-end storage. Additionally, both the Block Storage and " "Image services rely on AMQP and SQL functionality, so consider these when " "debugging." msgstr "" #: ../ops-maintenance-slow.rst:45 msgid "OpenStack Compute service" msgstr "" #: ../ops-maintenance-slow.rst:47 msgid "" "Services related to OpenStack Compute are normally fairly fast and rely on a " "couple of backend services: Identity for authentication and authorization), " "and AMQP for interoperability. Any slowness related to services is normally " "related to one of these. Also, as with all other services, SQL is used " "extensively." msgstr "" #: ../ops-maintenance-slow.rst:54 msgid "OpenStack Networking service" msgstr "" #: ../ops-maintenance-slow.rst:56 msgid "" "Slowness in the OpenStack :term:`Networking service ` can be caused by services that it relies upon, but it can also " "be related to either physical or virtual networking. For example: network " "namespaces that do not exist or are not tied to interfaces correctly; DHCP " "daemons that have hung or are not running; a cable being physically " "disconnected; a switch not being configured correctly. When debugging " "Networking service problems, begin by verifying all physical networking " "functionality (switch configuration, physical cabling, etc.). After the " "physical networking is verified, check to be sure all of the Networking " "services are running (neutron-server, neutron-dhcp-agent, etc.), then check " "on AMQP and SQL back ends." 
msgstr "" #: ../ops-maintenance-slow.rst:69 msgid "AMQP broker" msgstr "" #: ../ops-maintenance-slow.rst:71 msgid "" "Regardless of which AMQP broker you use, such as RabbitMQ, there are common " "issues which not only slow down operations, but can also cause real " "problems. Sometimes messages queued for services stay on the queues and are " "not consumed. This can be due to dead or stagnant services and can be " "commonly cleared up by either restarting the AMQP-related services or the " "OpenStack service in question." msgstr "" #: ../ops-maintenance-slow.rst:81 msgid "SQL back end" msgstr "" #: ../ops-maintenance-slow.rst:83 msgid "" "Whether you use SQLite or an RDBMS (such as MySQL), SQL interoperability is " "essential to a functioning OpenStack environment. A large or fragmented " "SQLite file can cause slowness when using files as a back end. A locked or " "long-running query can cause delays for most RDBMS services. In this case, " "do not kill the query immediately, but look into it to see if it is a " "problem with something that is hung, or something that is just taking a long " "time to run and needs to finish on its own. The administration of an RDBMS " "is outside the scope of this document, but it should be noted that a " "properly functioning RDBMS is essential to most OpenStack services." msgstr "" #: ../ops-maintenance-storage.rst:3 msgid "Storage Node Failures and Maintenance" msgstr "" #: ../ops-maintenance-storage.rst:5 msgid "" "Because of the high redundancy of Object Storage, dealing with object " "storage node issues is a lot easier than dealing with compute node issues." msgstr "" #: ../ops-maintenance-storage.rst:10 msgid "Rebooting a Storage Node" msgstr "" #: ../ops-maintenance-storage.rst:12 msgid "" "If a storage node requires a reboot, simply reboot it. Requests for data " "hosted on that node are redirected to other copies while the server is " "rebooting." msgstr "" #: ../ops-maintenance-storage.rst:17 msgid "Shutting Down a Storage Node" msgstr "" #: ../ops-maintenance-storage.rst:19 msgid "" "If you need to shut down a storage node for an extended period of time (one " "or more days), consider removing the node from the storage ring. For example:" msgstr "" #: ../ops-maintenance-storage.rst:32 msgid "Next, redistribute the ring files to the other nodes:" msgstr "" #: ../ops-maintenance-storage.rst:41 msgid "" "These actions effectively take the storage node out of the storage cluster." msgstr "" #: ../ops-maintenance-storage.rst:44 msgid "" "When the node is able to rejoin the cluster, just add it back to the ring. " "The exact syntax you use to add a node to your swift cluster with ``swift-" "ring-builder`` heavily depends on the original options used when you " "originally created your cluster. Please refer back to those commands." msgstr "" #: ../ops-maintenance-storage.rst:51 msgid "Replacing a Swift Disk" msgstr "" #: ../ops-maintenance-storage.rst:53 msgid "" "If a hard drive fails in an Object Storage node, replacing it is relatively " "easy. This assumes that your Object Storage environment is configured " "correctly, where the data that is stored on the failed drive is also " "replicated to other drives in the Object Storage environment." msgstr "" #: ../ops-maintenance-storage.rst:58 msgid "This example assumes that ``/dev/sdb`` has failed." 
msgstr "" #: ../ops-maintenance-storage.rst:60 msgid "First, unmount the disk:" msgstr "" #: ../ops-maintenance-storage.rst:66 msgid "" "Next, physically remove the disk from the server and replace it with a " "working disk." msgstr "" #: ../ops-maintenance-storage.rst:69 msgid "Ensure that the operating system has recognized the new disk:" msgstr "" #: ../ops-maintenance-storage.rst:75 msgid "You should see a message about ``/dev/sdb``." msgstr "" #: ../ops-maintenance-storage.rst:77 msgid "" "Because it is recommended to not use partitions on a swift disk, simply " "format the disk as a whole:" msgstr "" #: ../ops-maintenance-storage.rst:84 msgid "Finally, mount the disk:" msgstr "" #: ../ops-maintenance-storage.rst:90 msgid "" "Swift should notice the new disk and that no data exists. It then begins " "replicating the data to the disk from the other existing replicas." msgstr "" #: ../ops-maintenance.rst:3 msgid "Maintenance, Failures, and Debugging" msgstr "" #: ../ops-maintenance.rst:21 msgid "" "Downtime, whether planned or unscheduled, is a certainty when running a " "cloud. This chapter aims to provide useful information for dealing " "proactively, or reactively, with these occurrences." msgstr "" #: ../ops-monitoring.rst:3 msgid "Monitoring" msgstr "" #: ../ops-monitoring.rst:5 msgid "" "There are two types of monitoring: watching for problems and watching usage " "trends. The former ensures that all services are up and running, creating a " "functional cloud. The latter involves monitoring resource usage over time in " "order to make informed decisions about potential bottlenecks and upgrades." msgstr "" #: ../ops-monitoring.rst:12 msgid "Process Monitoring" msgstr "" #: ../ops-monitoring.rst:14 msgid "" "A basic type of alert monitoring is to simply check and see whether a " "required process is running. For example, ensure that the ``nova-api`` " "service is running on the cloud controller:" msgstr "" #: ../ops-monitoring.rst:34 msgid "" "The OpenStack processes that should be monitored depend on the specific " "configuration of the environment, but can include:" msgstr "" #: ../ops-monitoring.rst:37 msgid "**Compute service (nova)**" msgstr "" #: ../ops-monitoring.rst:39 msgid "nova-api" msgstr "" #: ../ops-monitoring.rst:40 msgid "nova-scheduler" msgstr "" #: ../ops-monitoring.rst:41 msgid "nova-conductor" msgstr "" #: ../ops-monitoring.rst:42 msgid "nova-novncproxy" msgstr "" #: ../ops-monitoring.rst:43 msgid "nova-compute" msgstr "" #: ../ops-monitoring.rst:45 msgid "**Block Storage service (cinder)**" msgstr "" #: ../ops-monitoring.rst:48 msgid "cinder-api" msgstr "" #: ../ops-monitoring.rst:49 msgid "cinder-scheduler" msgstr "" #: ../ops-monitoring.rst:51 msgid "**Networking service (neutron)**" msgstr "" #: ../ops-monitoring.rst:53 msgid "neutron-api" msgstr "" #: ../ops-monitoring.rst:54 msgid "neutron-server" msgstr "" #: ../ops-monitoring.rst:55 msgid "neutron-openvswitch-agent" msgstr "" #: ../ops-monitoring.rst:56 msgid "neutron-dhcp-agent" msgstr "" #: ../ops-monitoring.rst:57 msgid "neutron-l3-agent" msgstr "" #: ../ops-monitoring.rst:58 msgid "neutron-metadata-agent" msgstr "" #: ../ops-monitoring.rst:60 msgid "**Image service (glance)**" msgstr "" #: ../ops-monitoring.rst:62 msgid "glance-api" msgstr "" #: ../ops-monitoring.rst:63 msgid "glance-registry" msgstr "" #: ../ops-monitoring.rst:65 msgid "**Identity service (keystone)**" msgstr "" #: ../ops-monitoring.rst:67 msgid "The keystone processes are run within Apache as WSGI applications." 
msgstr "" #: ../ops-monitoring.rst:70 msgid "Resource Alerting" msgstr "" #: ../ops-monitoring.rst:72 msgid "" "Resource alerting provides notifications when one or more resources are " "critically low. While the monitoring thresholds should be tuned to your " "specific OpenStack environment, monitoring resource usage is not specific to " "OpenStack at all—any generic type of alert will work fine." msgstr "" #: ../ops-monitoring.rst:78 msgid "Some of the resources that you want to monitor include:" msgstr "" #: ../ops-monitoring.rst:80 msgid "Disk usage" msgstr "" #: ../ops-monitoring.rst:81 msgid "Server load" msgstr "" #: ../ops-monitoring.rst:82 msgid "Memory usage" msgstr "" #: ../ops-monitoring.rst:83 msgid "Network I/O" msgstr "" #: ../ops-monitoring.rst:84 msgid "Available vCPUs" msgstr "" #: ../ops-monitoring.rst:87 msgid "Telemetry Service" msgstr "" #: ../ops-monitoring.rst:89 msgid "" "The Telemetry service (:term:`ceilometer`) collects metering and event data " "relating to OpenStack services. Data collected by the Telemetry service " "could be used for billing. Depending on deployment configuration, collected " "data may be accessible to users based on the deployment configuration. The " "Telemetry service provides a REST API documented at `ceilometer V2 Web API " "`_. You can " "read more about the module in the `OpenStack Administrator Guide `_ or in the `developer " "documentation `_." msgstr "" #: ../ops-monitoring.rst:102 msgid "OpenStack Specific Resources" msgstr "" #: ../ops-monitoring.rst:104 msgid "" "Resources such as memory, disk, and CPU are generic resources that all " "servers (even non-OpenStack servers) have and are important to the overall " "health of the server. When dealing with OpenStack specifically, these " "resources are important for a second reason: ensuring that enough are " "available to launch instances. There are a few ways you can see OpenStack " "resource usage. The first is through the :command:`nova` command:" msgstr "" #: ../ops-monitoring.rst:116 msgid "" "This command displays a list of how many instances a tenant has running and " "some light usage statistics about the combined instances. This command is " "useful for a quick overview of your cloud, but it doesn't really get into a " "lot of details." msgstr "" #: ../ops-monitoring.rst:121 msgid "" "Next, the ``nova`` database contains three tables that store usage " "information." msgstr "" #: ../ops-monitoring.rst:124 msgid "" "The ``nova.quotas`` and ``nova.quota_usages`` tables store quota " "information. If a tenant's quota is different from the default quota " "settings, its quota is stored in the ``nova.quotas`` table. For example:" msgstr "" #: ../ops-monitoring.rst:145 msgid "" "The ``nova.quota_usages`` table keeps track of how many resources the tenant " "currently has in use:" msgstr "" #: ../ops-monitoring.rst:163 msgid "" "By comparing a tenant's hard limit with their current resource usage, you " "can see their usage percentage. For example, if this tenant is using 1 " "floating IP out of 10, then they are using 10 percent of their floating IP " "quota. Rather than doing the calculation manually, you can use SQL or the " "scripting language of your choice and create a formatted report:" msgstr "" #: ../ops-monitoring.rst:194 msgid "" "The preceding information was generated by using a custom script that can be " "found on `GitHub `_." 
msgstr "" #: ../ops-monitoring.rst:200 msgid "" "This script is specific to a certain OpenStack installation and must be " "modified to fit your environment. However, the logic should easily be " "transferable." msgstr "" #: ../ops-monitoring.rst:205 msgid "Intelligent Alerting" msgstr "" #: ../ops-monitoring.rst:207 msgid "" "Intelligent alerting can be thought of as a form of continuous integration " "for operations. For example, you can easily check to see whether the Image " "service is up and running by ensuring that the ``glance-api`` and ``glance-" "registry`` processes are running or by seeing whether ``glance-api`` is " "responding on port 9292." msgstr "" #: ../ops-monitoring.rst:213 msgid "" "But how can you tell whether images are being successfully uploaded to the " "Image service? Maybe the disk that Image service is storing the images on is " "full or the S3 back end is down. You could naturally check this by doing a " "quick image upload:" msgstr "" #: ../ops-monitoring.rst:232 msgid "" "By taking this script and rolling it into an alert for your monitoring " "system (such as Nagios), you now have an automated way of ensuring that " "image uploads to the Image Catalog are working." msgstr "" #: ../ops-monitoring.rst:238 msgid "" "You must remove the image after each test. Even better, test whether you can " "successfully delete an image from the Image service." msgstr "" #: ../ops-monitoring.rst:241 msgid "" "Intelligent alerting takes considerably more time to plan and implement than " "the other alerts described in this chapter. A good outline to implement " "intelligent alerting is:" msgstr "" #: ../ops-monitoring.rst:245 msgid "Review common actions in your cloud." msgstr "" #: ../ops-monitoring.rst:247 msgid "Create ways to automatically test these actions." msgstr "" #: ../ops-monitoring.rst:249 msgid "Roll these tests into an alerting system." msgstr "" #: ../ops-monitoring.rst:251 msgid "Some other examples for Intelligent Alerting include:" msgstr "" #: ../ops-monitoring.rst:253 msgid "Can instances launch and be destroyed?" msgstr "" #: ../ops-monitoring.rst:255 msgid "Can users be created?" msgstr "" #: ../ops-monitoring.rst:257 msgid "Can objects be stored and deleted?" msgstr "" #: ../ops-monitoring.rst:259 msgid "Can volumes be created and destroyed?" msgstr "" #: ../ops-monitoring.rst:262 msgid "Trending" msgstr "" #: ../ops-monitoring.rst:264 msgid "" "Trending can give you great insight into how your cloud is performing day to " "day. You can learn, for example, if a busy day was simply a rare occurrence " "or if you should start adding new compute nodes." msgstr "" #: ../ops-monitoring.rst:268 msgid "" "Trending takes a slightly different approach than alerting. While alerting " "is interested in a binary result (whether a check succeeds or fails), " "trending records the current state of something at a certain point in time. " "Once enough points in time have been recorded, you can see how the value has " "changed over time." msgstr "" #: ../ops-monitoring.rst:274 msgid "" "All of the alert types mentioned earlier can also be used for trend " "reporting. 
Some other trend examples include:" msgstr "" #: ../ops-monitoring.rst:277 msgid "The number of instances on each compute node" msgstr "" #: ../ops-monitoring.rst:278 msgid "The types of flavors in use" msgstr "" #: ../ops-monitoring.rst:279 msgid "The number of volumes in use" msgstr "" #: ../ops-monitoring.rst:280 msgid "The number of Object Storage requests each hour" msgstr "" #: ../ops-monitoring.rst:281 msgid "The number of ``nova-api`` requests each hour" msgstr "" #: ../ops-monitoring.rst:282 msgid "The I/O statistics of your storage services" msgstr "" #: ../ops-monitoring.rst:284 msgid "" "As an example, recording ``nova-api`` usage can allow you to track the need " "to scale your cloud controller. By keeping an eye on ``nova-api`` requests, " "you can determine whether you need to spawn more ``nova-api`` processes or " "go as far as introducing an entirely new server to run ``nova-api``. To get " "an approximate count of the requests, look for standard INFO messages in ``/" "var/log/nova/nova-api.log``:" msgstr "" #: ../ops-monitoring.rst:295 msgid "" "You can obtain further statistics by looking for the number of successful " "requests:" msgstr "" #: ../ops-monitoring.rst:302 msgid "" "By running this command periodically and keeping a record of the result, you " "can create a trending report over time that shows whether your ``nova-api`` " "usage is increasing, decreasing, or keeping steady." msgstr "" #: ../ops-monitoring.rst:306 msgid "" "A tool such as **collectd** can be used to store this information. While " "collectd is outside the scope of this book, a good starting point would be " "to use collectd to store the result as a COUNTER data type. More information " "can be found in `collectd's documentation `_." msgstr "" #: ../ops-monitoring.rst:314 msgid "Monitoring Tools" msgstr "" #: ../ops-monitoring.rst:317 msgid "Nagios" msgstr "" #: ../ops-monitoring.rst:320 msgid "" "Nagios is an open source monitoring service. It is capable of executing " "arbitrary commands to check the status of server and network services, " "remotely executing arbitrary commands directly on servers, and allowing " "servers to push notifications back in the form of passive monitoring. Nagios " "has been around since 1999. Although newer monitoring services are " "available, Nagios is a tried-and-true systems administration staple." msgstr "" #: ../ops-monitoring.rst:328 msgid "" "You can create automated alerts for critical processes by using Nagios and " "NRPE. For example, to ensure that the ``nova-compute`` process is running on " "the compute nodes, create an alert on your Nagios server:" msgstr "" #: ../ops-monitoring.rst:343 msgid "On the Compute node, create the following NRPE configuration:" msgstr "" #: ../ops-monitoring.rst:350 msgid "" "Nagios checks that at least one ``nova-compute`` service is running at all " "times." msgstr "" #: ../ops-monitoring.rst:353 msgid "" "For resource alerting, for example to monitor disk capacity on a compute " "node with Nagios, add the following to your Nagios configuration:" msgstr "" #: ../ops-monitoring.rst:366 msgid "On the compute node, add the following to your NRPE configuration:" msgstr "" #: ../ops-monitoring.rst:372 msgid "" "Nagios alerts you with a `WARNING` when any disk on the compute node is 80 " "percent full and `CRITICAL` when 90 percent is full." msgstr "" #: ../ops-monitoring.rst:376 msgid "StackTach" msgstr "" #: ../ops-monitoring.rst:378 msgid "" "StackTach is a tool that collects and reports the notifications sent by " "nova.
Notifications are essentially the same as logs but can be much more " "detailed. Nearly all OpenStack components are capable of generating " "notifications when significant events occur. Notifications are messages " "placed on the OpenStack queue (generally RabbitMQ) for consumption by " "downstream systems. An overview of notifications can be found at `System " "Usage Data `_." msgstr "" #: ../ops-monitoring.rst:387 msgid "" "To enable nova to send notifications, add the following to the ``nova.conf`` " "configuration file:" msgstr "" #: ../ops-monitoring.rst:395 msgid "" "Once nova is sending notifications, install and configure StackTach. " "StackTach workers for queue consumption and pipeline processing are " "configured to read these notifications from RabbitMQ servers and store them " "in a database. Users can inquire on instances, requests, and servers by " "using the browser interface or the command-line tool, `Stacky `_. Since StackTach is relatively new and constantly " "changing, installation instructions quickly become outdated. Refer to the " "`StackTach Git repository `_ for instructions as well as a demonstration video. Additional " "details on the latest developments can be discovered at the `official page " "`_." msgstr "" #: ../ops-monitoring.rst:409 msgid "Logstash" msgstr "" #: ../ops-monitoring.rst:411 msgid "" "Logstash is a high performance indexing and search engine for logs. Logs " "from Jenkins test runs are sent to logstash where they are indexed and " "stored. Logstash facilitates reviewing logs from multiple sources in a " "single test run, searching for errors or particular events within a test " "run, and searching for log event trends across test runs." msgstr "" #: ../ops-monitoring.rst:417 msgid "There are four major layers in a Logstash setup:" msgstr "" #: ../ops-monitoring.rst:419 msgid "Log Pusher" msgstr "" #: ../ops-monitoring.rst:420 msgid "Log Indexer" msgstr "" #: ../ops-monitoring.rst:421 msgid "ElasticSearch" msgstr "" #: ../ops-monitoring.rst:422 msgid "Kibana" msgstr "" #: ../ops-monitoring.rst:424 msgid "" "Each layer scales horizontally. As the number of logs grows, you can add " "more log pushers, more Logstash indexers, and more ElasticSearch nodes." msgstr "" #: ../ops-monitoring.rst:427 msgid "" "Logpusher is a pair of Python scripts that first listens to Jenkins build " "events, then converts them into Gearman jobs. Gearman provides a generic " "application framework to farm out work to other machines or processes that " "are better suited to do the work. It allows you to do work in parallel, to " "load balance processing, and to call functions between languages. Later, " "Logpusher performs Gearman jobs to push log files into logstash. The " "Logstash indexer reads these log events, filters them to remove unwanted " "lines, collapses multiple events together, and parses useful information " "before shipping them to ElasticSearch for storage and indexing. Kibana is a " "logstash-oriented web client for ElasticSearch." msgstr "" #: ../ops-network-troubleshooting.rst:3 msgid "Network Troubleshooting" msgstr "" #: ../ops-network-troubleshooting.rst:5 msgid "" "Network troubleshooting can be challenging. A network issue may cause " "problems at any point in the cloud. Using a logical troubleshooting " "procedure can help mitigate the issue and isolate where the network issue " "is.
This chapter aims to give you the information you need to identify any issues for ``nova-network`` or OpenStack Networking (neutron) with Linux Bridge or Open vSwitch." msgstr "" #: ../ops-network-troubleshooting.rst:13 msgid "Using ip a to Check Interface States" msgstr "" #: ../ops-network-troubleshooting.rst:15 msgid "" "On compute nodes and nodes running ``nova-network``, use the following command to see information about interfaces, including information about IPs, VLANs, and whether your interfaces are up:" msgstr "" #: ../ops-network-troubleshooting.rst:23 msgid "" "If you are encountering any sort of networking difficulty, one good initial troubleshooting step is to make sure that your interfaces are up. For example:" msgstr "" #: ../ops-network-troubleshooting.rst:37 msgid "" "You can safely ignore the state of ``virbr0``, which is a default bridge created by libvirt and not used by OpenStack." msgstr "" #: ../ops-network-troubleshooting.rst:41 msgid "Visualizing nova-network Traffic in the Cloud" msgstr "" #: ../ops-network-troubleshooting.rst:43 msgid "" "If you are logged in to an instance and ping an external host, for example, Google, the ping packet takes the route shown in :ref:`figure_traffic_route`." msgstr "" #: ../ops-network-troubleshooting.rst:53 msgid "Figure. Traffic route for ping packet" msgstr "" #: ../ops-network-troubleshooting.rst:55 msgid "" "The instance generates a packet and places it on the virtual Network Interface Card (NIC) inside the instance, such as ``eth0``." msgstr "" #: ../ops-network-troubleshooting.rst:58 msgid "" "The packet transfers to the virtual NIC of the compute host, such as ``vnet1``. You can find out which vnet NIC is being used by looking at the ``/etc/libvirt/qemu/instance-xxxxxxxx.xml`` file." msgstr "" #: ../ops-network-troubleshooting.rst:62 msgid "" "From the vnet NIC, the packet transfers to a bridge on the compute node, such as ``br100``." msgstr "" #: ../ops-network-troubleshooting.rst:65 msgid "" "If you run FlatDHCPManager, one bridge is on the compute node. If you run VlanManager, one bridge exists for each VLAN." msgstr "" #: ../ops-network-troubleshooting.rst:68 msgid "To see which bridge the packet will use, run the command:" msgstr "" #: ../ops-network-troubleshooting.rst:74 msgid "" "Look for the vnet NIC. You can also reference ``nova.conf`` and look for the ``flat_interface_bridge`` option." msgstr "" #: ../ops-network-troubleshooting.rst:77 msgid "" "The packet transfers to the main NIC of the compute node. You can also see this NIC in the :command:`brctl` output, or you can find it by referencing the ``flat_interface`` option in ``nova.conf``." msgstr "" #: ../ops-network-troubleshooting.rst:81 msgid "" "After the packet is on this NIC, it transfers to the compute node's default gateway. At this point, the packet is most likely out of your control. The diagram depicts an external gateway. However, in the default configuration with multi-host, the compute host is the gateway." msgstr "" #: ../ops-network-troubleshooting.rst:87 msgid "" "Reverse the direction to see the path of a ping reply. From this path, you can see that a single packet travels across four different NICs. If a problem occurs with any of these NICs, a network issue occurs."
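msgstr "" msgid "" "As a quick way to check this path (a sketch, assuming a FlatDHCP setup with the bridge ``br100``), confirm the bridge membership with :command:`brctl show br100` and watch traffic cross the bridge with :command:`tcpdump -i br100 icmp` while pinging an external host from the instance."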
msgstr "" #: ../ops-network-troubleshooting.rst:92 msgid "Visualizing OpenStack Networking Service Traffic in the Cloud" msgstr "" #: ../ops-network-troubleshooting.rst:94 msgid "" "OpenStack Networking has many more degrees of freedom than ``nova-network`` " "does because of its pluggable back end. It can be configured with open " "source or vendor proprietary plug-ins that control software defined " "networking (SDN) hardware or plug-ins that use Linux native facilities on " "your hosts, such as Open vSwitch or Linux Bridge." msgstr "" #: ../ops-network-troubleshooting.rst:100 msgid "" "The networking chapter of the `OpenStack Administrator Guide `_ shows a variety of networking " "scenarios and their connection paths. The purpose of this section is to give " "you the tools to troubleshoot the various components involved however they " "are plumbed together in your environment." msgstr "" #: ../ops-network-troubleshooting.rst:107 msgid "" "For this example, we will use the Open vSwitch (OVS) back end. Other back-" "end plug-ins will have very different flow paths. OVS is the most popularly " "deployed network driver, according to the April 2016 OpenStack User Survey. " "We'll describe each step in turn, with :ref:`network_paths` for reference." msgstr "" #: ../ops-network-troubleshooting.rst:113 msgid "" "The instance generates a packet and places it on the virtual NIC inside the " "instance, such as eth0." msgstr "" #: ../ops-network-troubleshooting.rst:116 msgid "" "The packet transfers to a Test Access Point (TAP) device on the compute " "host, such as tap690466bc-92. You can find out what TAP is being used by " "looking at the ``/etc/libvirt/qemu/instance-xxxxxxxx.xml`` file." msgstr "" #: ../ops-network-troubleshooting.rst:121 msgid "" "The TAP device name is constructed using the first 11 characters of the port " "ID (10 hex digits plus an included '-'), so another means of finding the " "device name is to use the :command:`neutron` command. This returns a pipe-" "delimited list, the first item of which is the port ID. For example, to get " "the port ID associated with IP address 10.0.0.10, do this:" msgstr "" #: ../ops-network-troubleshooting.rst:133 msgid "" "Taking the first 11 characters, we can construct a device name of " "tapff387e54-9e from this output." msgstr "" #: ../ops-network-troubleshooting.rst:142 msgid "Figure. Neutron network paths" msgstr "" #: ../ops-network-troubleshooting.rst:144 msgid "" "The TAP device is connected to the integration bridge, ``br-int``. This " "bridge connects all the instance TAP devices and any other bridges on the " "system. In this example, we have ``int-br-eth1`` and ``patch-tun``. ``int-br-" "eth1`` is one half of a veth pair connecting to the bridge ``br-eth1``, " "which handles VLAN networks trunked over the physical Ethernet device " "``eth1``. ``patch-tun`` is an Open vSwitch internal port that connects to " "the ``br-tun`` bridge for GRE networks." msgstr "" #: ../ops-network-troubleshooting.rst:153 msgid "" "The TAP devices and veth devices are normal Linux network devices and may be " "inspected with the usual tools, such as :command:`ip` and :command:" "`tcpdump`. Open vSwitch internal devices, such as ``patch-tun``, are only " "visible within the Open vSwitch environment. If you try to run :command:" "`tcpdump -i patch-tun`, it will raise an error, saying that the device does " "not exist." 
msgstr "" #: ../ops-network-troubleshooting.rst:160 msgid "" "It is possible to watch packets on internal interfaces, but it does take a " "little bit of networking gymnastics. First you need to create a dummy " "network device that normal Linux tools can see. Then you need to add it to " "the bridge containing the internal interface you want to snoop on. Finally, " "you need to tell Open vSwitch to mirror all traffic to or from the internal " "port onto this dummy port. After all this, you can then run :command:" "`tcpdump` on the dummy interface and see the traffic on the internal port." msgstr "" #: ../ops-network-troubleshooting.rst:169 msgid "" "**To capture packets from the patch-tun internal interface on integration " "bridge, br-int:**" msgstr "" #: ../ops-network-troubleshooting.rst:172 msgid "Create and bring up a dummy interface, ``snooper0``:" msgstr "" #: ../ops-network-troubleshooting.rst:179 msgid "Add device ``snooper0`` to bridge ``br-int``:" msgstr "" #: ../ops-network-troubleshooting.rst:185 msgid "" "Create mirror of ``patch-tun`` to ``snooper0`` (returns UUID of mirror port):" msgstr "" #: ../ops-network-troubleshooting.rst:195 msgid "" "Profit. You can now see traffic on ``patch-tun`` by running :command:" "`tcpdump -i snooper0`." msgstr "" #: ../ops-network-troubleshooting.rst:198 msgid "" "Clean up by clearing all mirrors on ``br-int`` and deleting the dummy " "interface:" msgstr "" #: ../ops-network-troubleshooting.rst:207 msgid "" "On the integration bridge, networks are distinguished using internal VLANs " "regardless of how the networking service defines them. This allows instances " "on the same host to communicate directly without transiting the rest of the " "virtual, or physical, network. These internal VLAN IDs are based on the " "order they are created on the node and may vary between nodes. These IDs are " "in no way related to the segmentation IDs used in the network definition and " "on the physical wire." msgstr "" #: ../ops-network-troubleshooting.rst:216 msgid "" "VLAN tags are translated between the external tag defined in the network " "settings, and internal tags in several places. On the ``br-int``, incoming " "packets from the ``int-br-eth1`` are translated from external tags to " "internal tags. Other translations also happen on the other bridges and will " "be discussed in those sections." msgstr "" #: ../ops-network-troubleshooting.rst:222 msgid "" "**To discover which internal VLAN tag is in use for a given external VLAN by " "using the ovs-ofctl command**" msgstr "" #: ../ops-network-troubleshooting.rst:225 msgid "" "Find the external VLAN tag of the network you're interested in. This is the " "``provider:segmentation_id`` as returned by the networking service:" msgstr "" #: ../ops-network-troubleshooting.rst:239 msgid "" "Grep for the ``provider:segmentation_id``, 2113 in this case, in the output " "of :command:`ovs-ofctl dump-flows br-int`:" msgstr "" #: ../ops-network-troubleshooting.rst:249 msgid "" "Here you can see packets received on port ID 1 with the VLAN tag 2113 are " "modified to have the internal VLAN tag 7. 
Digging a little deeper, you can confirm that port 1 is in fact ``int-br-eth1``:" msgstr "" #: ../ops-network-troubleshooting.rst:283 msgid "" "The next step depends on whether the virtual network is configured to use 802.1q VLAN tags or GRE:" msgstr "" #: ../ops-network-troubleshooting.rst:286 msgid "" "VLAN-based networks exit the integration bridge via veth interface ``int-br-eth1`` and arrive on the bridge ``br-eth1`` on the other member of the veth pair ``phy-br-eth1``. Packets on this interface arrive with internal VLAN tags and are translated to external tags in the reverse of the process described above:" msgstr "" #: ../ops-network-troubleshooting.rst:299 msgid "" "Packets, now tagged with the external VLAN tag, then exit onto the physical network via ``eth1``. The layer-2 switch this interface is connected to must be configured to accept traffic with the VLAN ID used. The next hop for this packet must also be on the same layer-2 network." msgstr "" #: ../ops-network-troubleshooting.rst:305 msgid "" "GRE-based networks are passed through ``patch-tun`` to the tunnel bridge ``br-tun`` on interface ``patch-int``. This bridge also contains one port for each GRE tunnel peer, so one for each compute node and network node in your network. The ports are named sequentially from ``gre-1`` onward." msgstr "" #: ../ops-network-troubleshooting.rst:311 msgid "" "Matching ``gre-`` interfaces to tunnel endpoints is possible by looking at the Open vSwitch state:" msgstr "" #: ../ops-network-troubleshooting.rst:324 msgid "" "In this case, ``gre-1`` is a tunnel from IP 10.10.128.21, which should match a local interface on this node, to IP 10.10.128.16 on the remote side." msgstr "" #: ../ops-network-troubleshooting.rst:328 msgid "" "These tunnels use the regular routing tables on the host to route the resulting GRE packet, so there is no requirement that GRE endpoints are all on the same layer-2 network, unlike VLAN encapsulation." msgstr "" #: ../ops-network-troubleshooting.rst:333 msgid "" "All interfaces on the ``br-tun`` are internal to Open vSwitch. To monitor traffic on them, you need to set up a mirror port as described above for ``patch-tun`` in the ``br-int`` bridge." msgstr "" #: ../ops-network-troubleshooting.rst:337 msgid "" "All translation of GRE tunnels to and from internal VLANs happens on this bridge." msgstr "" #: ../ops-network-troubleshooting.rst:340 msgid "" "**To discover which internal VLAN tag is in use for a GRE tunnel by using the ovs-ofctl command**" msgstr "" #: ../ops-network-troubleshooting.rst:343 msgid "" "Find the ``provider:segmentation_id`` of the network you're interested in. This is the same field used for the VLAN ID in VLAN-based networks:" msgstr "" #: ../ops-network-troubleshooting.rst:357 msgid "" "Grep for 0x<``provider:segmentation_id``>, 0x3 in this case, in the output of ``ovs-ofctl dump-flows br-tun``:" msgstr "" #: ../ops-network-troubleshooting.rst:384 msgid "" "Here, you see three flows related to this GRE tunnel. The first is the translation from inbound packets with this tunnel ID to internal VLAN ID 1. The second shows a unicast flow to output port 53 for packets destined for MAC address fa:16:3e:a6:48:24. The third shows the translation from the internal VLAN representation to the GRE tunnel ID flooded to all output ports. For further details of the flow descriptions, see the man page for ``ovs-ofctl``. 
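As an illustration, the first of these resembles a flow like ``tun_id=0x3 actions=mod_vlan_vid:1,NORMAL`` (a sketch; real entries carry additional match fields and counters). 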
As in the previous VLAN example, numeric port IDs can be matched with their named representations by examining the output of ``ovs-ofctl show br-tun``." msgstr "" #: ../ops-network-troubleshooting.rst:394 msgid "" "The packet is then received on the network node. Note that any traffic to the l3-agent or dhcp-agent will be visible only within their network namespace. Watching any interfaces outside those namespaces, even those that carry the network traffic, will only show broadcast packets such as Address Resolution Protocol (ARP) requests, but unicast traffic to the router or DHCP address will not be seen. See :ref:`dealing_with_network_namespaces` for details on how to run commands within these namespaces." msgstr "" #: ../ops-network-troubleshooting.rst:403 msgid "" "Alternatively, it is possible to configure VLAN-based networks to use external routers rather than the l3-agent shown here, so long as the external router is on the same VLAN:" msgstr "" #: ../ops-network-troubleshooting.rst:407 msgid "" "VLAN-based networks are received as tagged packets on a physical network interface, ``eth1`` in this example. Just as on the compute node, this interface is a member of the ``br-eth1`` bridge." msgstr "" #: ../ops-network-troubleshooting.rst:412 msgid "" "GRE-based networks will be passed to the tunnel bridge ``br-tun``, which behaves just like the GRE interfaces on the compute node." msgstr "" #: ../ops-network-troubleshooting.rst:415 msgid "" "Next, the packets from either input go through the integration bridge, again just as on the compute node." msgstr "" #: ../ops-network-troubleshooting.rst:418 msgid "" "The packet then makes it to the l3-agent. This is actually another TAP device within the router's network namespace. Router namespaces are named in the form ``qrouter-``. Running :command:`ip a` within the namespace will show the TAP device name, qr-e6256f7d-31 in this example:" msgstr "" #: ../ops-network-troubleshooting.rst:433 msgid "" "The ``qg-`` interface in the l3-agent router namespace sends the packet on to its next hop through device ``eth2`` on the external bridge ``br-ex``. This bridge is constructed similarly to ``br-eth1`` and may be inspected in the same way." msgstr "" #: ../ops-network-troubleshooting.rst:438 msgid "" "This external bridge also includes a physical network interface, ``eth2`` in this example, which finally lands the packet on the external network destined for an external router or destination." msgstr "" #: ../ops-network-troubleshooting.rst:442 msgid "" "DHCP agents running on OpenStack networks run in namespaces similar to the l3-agents. DHCP namespaces are named ``qdhcp-`` and have a TAP device on the integration bridge. Debugging of DHCP issues usually involves working inside this network namespace." msgstr "" #: ../ops-network-troubleshooting.rst:448 msgid "Finding a Failure in the Path" msgstr "" #: ../ops-network-troubleshooting.rst:450 msgid "" "Use ping to quickly find where a failure exists in the network path. In an instance, first see whether you can ping an external host, such as google.com. If you can, then there shouldn't be a network problem at all." msgstr "" #: ../ops-network-troubleshooting.rst:455 msgid "" "If you can't, try pinging the IP address of the compute node where the instance is hosted. If you can ping this IP, then the problem is somewhere between the compute node and that compute node's gateway."
msgstr "" #: ../ops-network-troubleshooting.rst:459 msgid "" "If you can't ping the IP address of the compute node, the problem is between " "the instance and the compute node. This includes the bridge connecting the " "compute node's main NIC with the vnet NIC of the instance." msgstr "" #: ../ops-network-troubleshooting.rst:464 msgid "" "One last test is to launch a second instance and see whether the two " "instances can ping each other. If they can, the issue might be related to " "the firewall on the compute node." msgstr "" #: ../ops-network-troubleshooting.rst:469 msgid "tcpdump" msgstr "" #: ../ops-network-troubleshooting.rst:471 msgid "" "One great, although very in-depth, way of troubleshooting network issues is " "to use ``tcpdump``. We recommended using ``tcpdump`` at several points along " "the network path to correlate where a problem might be. If you prefer " "working with a GUI, either live or by using a ``tcpdump`` capture, check out " "`Wireshark `_." msgstr "" #: ../ops-network-troubleshooting.rst:478 msgid "For example, run the following command:" msgstr "" #: ../ops-network-troubleshooting.rst:484 msgid "Run this on the command line of the following areas:" msgstr "" #: ../ops-network-troubleshooting.rst:486 msgid "An external server outside of the cloud" msgstr "" #: ../ops-network-troubleshooting.rst:488 msgid "A compute node" msgstr "" #: ../ops-network-troubleshooting.rst:490 msgid "An instance running on that compute node" msgstr "" #: ../ops-network-troubleshooting.rst:492 msgid "In this example, these locations have the following IP addresses:" msgstr "" #: ../ops-network-troubleshooting.rst:505 msgid "" "Next, open a new shell to the instance and then ping the external host where " "``tcpdump`` is running. If the network path to the external server and back " "is fully functional, you see something like the following:" msgstr "" #: ../ops-network-troubleshooting.rst:509 msgid "On the external server:" msgstr "" #: ../ops-network-troubleshooting.rst:521 msgid "On the compute node:" msgstr "" #: ../ops-network-troubleshooting.rst:544 msgid "On the instance:" msgstr "" #: ../ops-network-troubleshooting.rst:552 msgid "" "Here, the external server received the ping request and sent a ping reply. " "On the compute node, you can see that both the ping and ping reply " "successfully passed through. You might also see duplicate packets on the " "compute node, as seen above, because ``tcpdump`` captured the packet on both " "the bridge and outgoing interface." msgstr "" #: ../ops-network-troubleshooting.rst:559 msgid "iptables" msgstr "" #: ../ops-network-troubleshooting.rst:561 msgid "" "Through ``nova-network`` or ``neutron``, OpenStack Compute automatically " "manages iptables, including forwarding packets to and from instances on a " "compute node, forwarding floating IP traffic, and managing security group " "rules. In addition to managing the rules, comments (if supported) will be " "inserted in the rules to help indicate the purpose of the rule." msgstr "" #: ../ops-network-troubleshooting.rst:567 msgid "The following comments are added to the rule set as appropriate:" msgstr "" #: ../ops-network-troubleshooting.rst:569 msgid "Perform source NAT on outgoing traffic." msgstr "" #: ../ops-network-troubleshooting.rst:570 msgid "Default drop rule for unmatched traffic." msgstr "" #: ../ops-network-troubleshooting.rst:571 msgid "Direct traffic from the VM interface to the security group chain." 
msgstr "" #: ../ops-network-troubleshooting.rst:572 msgid "Jump to the VM specific chain." msgstr "" #: ../ops-network-troubleshooting.rst:573 msgid "Direct incoming traffic from VM to the security group chain." msgstr "" #: ../ops-network-troubleshooting.rst:574 msgid "Allow traffic from defined IP/MAC pairs." msgstr "" #: ../ops-network-troubleshooting.rst:575 msgid "Drop traffic without an IP/MAC allow rule." msgstr "" #: ../ops-network-troubleshooting.rst:576 msgid "Allow DHCP client traffic." msgstr "" #: ../ops-network-troubleshooting.rst:577 msgid "Prevent DHCP Spoofing by VM." msgstr "" #: ../ops-network-troubleshooting.rst:578 msgid "Send unmatched traffic to the fallback chain." msgstr "" #: ../ops-network-troubleshooting.rst:579 msgid "Drop packets that are not associated with a state." msgstr "" #: ../ops-network-troubleshooting.rst:580 msgid "Direct packets associated with a known session to the RETURN chain." msgstr "" #: ../ops-network-troubleshooting.rst:581 msgid "Allow IPv6 ICMP traffic to allow RA packets." msgstr "" #: ../ops-network-troubleshooting.rst:583 msgid "Run the following command to view the current iptables configuration:" msgstr "" #: ../ops-network-troubleshooting.rst:591 msgid "" "If you modify the configuration, it reverts the next time you restart ``nova-" "network`` or ``neutron-server``. You must use OpenStack to manage iptables." msgstr "" #: ../ops-network-troubleshooting.rst:596 msgid "Network Configuration in the Database for nova-network" msgstr "" #: ../ops-network-troubleshooting.rst:598 msgid "" "With ``nova-network``, the nova database table contains a few tables with " "networking information:" msgstr "" #: ../ops-network-troubleshooting.rst:602 msgid "" "Contains each possible IP address for the subnet(s) added to Compute. This " "table is related to the ``instances`` table by way of the ``fixed_ips." "instance_uuid`` column." msgstr "" #: ../ops-network-troubleshooting.rst:604 msgid "``fixed_ips``" msgstr "" #: ../ops-network-troubleshooting.rst:607 msgid "" "Contains each floating IP address that was added to Compute. This table is " "related to the ``fixed_ips`` table by way of the ``floating_ips." "fixed_ip_id`` column." msgstr "" #: ../ops-network-troubleshooting.rst:609 msgid "``floating_ips``" msgstr "" #: ../ops-network-troubleshooting.rst:612 msgid "" "Not entirely network specific, but it contains information about the " "instance that is utilizing the ``fixed_ip`` and optional ``floating_ip``." msgstr "" #: ../ops-network-troubleshooting.rst:614 ../ops-quotas.rst:105 msgid "``instances``" msgstr "" #: ../ops-network-troubleshooting.rst:616 msgid "" "From these tables, you can see that a floating IP is technically never " "directly related to an instance; it must always go through a fixed IP." msgstr "" #: ../ops-network-troubleshooting.rst:620 msgid "Manually Disassociating a Floating IP" msgstr "" #: ../ops-network-troubleshooting.rst:622 msgid "" "Sometimes an instance is terminated but the floating IP was not correctly " "disassociated from that instance. Because the database is in an inconsistent " "state, the usual tools to disassociate the IP no longer work. To fix this, " "you must manually update the database." 
msgstr "" #: ../ops-network-troubleshooting.rst:627 msgid "First, find the UUID of the instance in question:" msgstr "" #: ../ops-network-troubleshooting.rst:633 msgid "Next, find the fixed IP entry for that UUID:" msgstr "" #: ../ops-network-troubleshooting.rst:639 msgid "You can now get the related floating IP entry:" msgstr "" #: ../ops-network-troubleshooting.rst:645 msgid "And finally, you can disassociate the floating IP:" msgstr "" #: ../ops-network-troubleshooting.rst:652 msgid "You can optionally also deallocate the IP from the user's pool:" msgstr "" #: ../ops-network-troubleshooting.rst:660 msgid "Debugging DHCP Issues with nova-network" msgstr "" #: ../ops-network-troubleshooting.rst:662 msgid "" "One common networking problem is that an instance boots successfully but is " "not reachable because it failed to obtain an IP address from dnsmasq, which " "is the DHCP server that is launched by the ``nova-network`` service." msgstr "" #: ../ops-network-troubleshooting.rst:667 msgid "" "The simplest way to identify that this is the problem with your instance is " "to look at the console output of your instance. If DHCP failed, you can " "retrieve the console log by doing:" msgstr "" #: ../ops-network-troubleshooting.rst:675 msgid "" "If your instance failed to obtain an IP through DHCP, some messages should " "appear in the console. For example, for the Cirros image, you see output " "that looks like the following:" msgstr "" #: ../ops-network-troubleshooting.rst:691 msgid "" "After you establish that the instance booted properly, the task is to figure " "out where the failure is." msgstr "" #: ../ops-network-troubleshooting.rst:694 msgid "" "A DHCP problem might be caused by a misbehaving dnsmasq process. First, " "debug by checking logs and then restart the dnsmasq processes only for that " "project (tenant). In VLAN mode, there is a dnsmasq process for each tenant. " "Once you have restarted targeted dnsmasq processes, the simplest way to rule " "out dnsmasq causes is to kill all of the dnsmasq processes on the machine " "and restart ``nova-network``. As a last resort, do this as root:" msgstr "" #: ../ops-network-troubleshooting.rst:709 msgid "" "Use ``openstack-nova-network`` on RHEL/CentOS/Fedora but ``nova-network`` on " "Ubuntu/Debian." msgstr "" #: ../ops-network-troubleshooting.rst:712 msgid "" "Several minutes after ``nova-network`` is restarted, you should see new " "dnsmasq processes running:" msgstr "" #: ../ops-network-troubleshooting.rst:735 msgid "" "If your instances are still not able to obtain IP addresses, the next thing " "to check is whether dnsmasq is seeing the DHCP requests from the instance. " "On the machine that is running the dnsmasq process, which is the compute " "host if running in multi-host mode, look at ``/var/log/syslog`` to see the " "dnsmasq output. If dnsmasq is seeing the request properly and handing out an " "IP, the output looks like this:" msgstr "" #: ../ops-network-troubleshooting.rst:752 msgid "" "If you do not see the ``DHCPDISCOVER``, a problem exists with the packet " "getting from the instance to the machine running dnsmasq. If you see all of " "the preceding output and your instances are still not able to obtain IP " "addresses, then the packet is able to get from the instance to the host " "running dnsmasq, but it is not able to make the return trip." 
msgstr "" #: ../ops-network-troubleshooting.rst:758 msgid "You might also see a message such as this:" msgstr "" #: ../ops-network-troubleshooting.rst:765 msgid "" "This may be a dnsmasq and/or ``nova-network`` related issue. (For the " "preceding example, the problem happened to be that dnsmasq did not have any " "more IP addresses to give away because there were no more fixed IPs " "available in the OpenStack Compute database.)" msgstr "" #: ../ops-network-troubleshooting.rst:770 msgid "" "If there's a suspicious-looking dnsmasq log message, take a look at the " "command-line arguments to the dnsmasq processes to see if they look correct:" msgstr "" #: ../ops-network-troubleshooting.rst:778 msgid "The output looks something like the following:" msgstr "" #: ../ops-network-troubleshooting.rst:809 msgid "" "The output shows three different dnsmasq processes. The dnsmasq process that " "has the DHCP subnet range of 192.168.122.0 belongs to libvirt and can be " "ignored. The other two dnsmasq processes belong to ``nova-network``. The two " "processes are actually related—one is simply the parent process of the " "other. The arguments of the dnsmasq processes should correspond to the " "details you configured ``nova-network`` with." msgstr "" #: ../ops-network-troubleshooting.rst:816 msgid "" "If the problem does not seem to be related to dnsmasq itself, at this point " "use ``tcpdump`` on the interfaces to determine where the packets are getting " "lost." msgstr "" #: ../ops-network-troubleshooting.rst:820 msgid "" "DHCP traffic uses UDP. The client sends from port 68 to port 67 on the " "server. Try to boot a new instance and then systematically listen on the " "NICs until you identify the one that isn't seeing the traffic. To use " "``tcpdump`` to listen to ports 67 and 68 on br100, you would do:" msgstr "" #: ../ops-network-troubleshooting.rst:829 msgid "" "You should be doing sanity checks on the interfaces using command such as :" "command:`ip a` and :command:`brctl show` to ensure that the interfaces are " "actually up and configured the way that you think that they are." msgstr "" #: ../ops-network-troubleshooting.rst:834 msgid "Debugging DNS Issues" msgstr "" #: ../ops-network-troubleshooting.rst:836 msgid "" "If you are able to use :term:`SSH ` to log into an " "instance, but it takes a very long time (on the order of a minute) to get a " "prompt, then you might have a DNS issue. The reason a DNS issue can cause " "this problem is that the SSH server does a reverse DNS lookup on the IP " "address that you are connecting from. If DNS lookup isn't working on your " "instances, then you must wait for the DNS reverse lookup timeout to occur " "for the SSH login process to complete." msgstr "" #: ../ops-network-troubleshooting.rst:844 msgid "" "When debugging DNS issues, start by making sure that the host where the " "dnsmasq process for that instance runs is able to correctly resolve. If the " "host cannot resolve, then the instances won't be able to either." msgstr "" #: ../ops-network-troubleshooting.rst:848 msgid "" "A quick way to check whether DNS is working is to resolve a hostname inside " "your instance by using the :command:`host` command. If DNS is working, you " "should see:" msgstr "" #: ../ops-network-troubleshooting.rst:859 msgid "" "If you're running the Cirros image, it doesn't have the \"host\" program " "installed, in which case you can use ping to try to access a machine by " "hostname to see whether it resolves. 
If DNS is working, the first line of ping would be:" msgstr "" #: ../ops-network-troubleshooting.rst:869 msgid "" "If the instance fails to resolve the hostname, you have a DNS problem. For example:" msgstr "" #: ../ops-network-troubleshooting.rst:877 msgid "" "In an OpenStack cloud, the dnsmasq process acts as the DNS server for the instances in addition to acting as the DHCP server. A misbehaving dnsmasq process may be the source of DNS-related issues inside the instance. As mentioned in the previous section, the simplest way to rule out a misbehaving dnsmasq process is to kill all the dnsmasq processes on the machine and restart ``nova-network``. However, be aware that this command affects everyone running instances on this node, including tenants that have not seen the issue. As a last resort, as root:" msgstr "" #: ../ops-network-troubleshooting.rst:891 msgid "After the dnsmasq processes start again, check whether DNS is working." msgstr "" #: ../ops-network-troubleshooting.rst:893 msgid "" "If restarting the dnsmasq process doesn't fix the issue, you might need to use ``tcpdump`` to look at the packets to trace where the failure is. The DNS server listens on UDP port 53. You should see the DNS request on the bridge (such as br100) of your compute node. Let's say you start listening with ``tcpdump`` on the compute node:" msgstr "" #: ../ops-network-troubleshooting.rst:904 msgid "" "Then, if you use SSH to log into your instance and try ``ping openstack.org``, you should see something like:" msgstr "" #: ../ops-network-troubleshooting.rst:918 msgid "Troubleshooting Open vSwitch" msgstr "" #: ../ops-network-troubleshooting.rst:920 msgid "" "Open vSwitch, as used in the previous OpenStack Networking examples, is a full-featured multilayer virtual switch licensed under the open source Apache 2.0 license. Full documentation can be found at `the project's website `_. In practice, given the preceding configuration, the most common issue is making sure that the required bridges (``br-int``, ``br-tun``, and ``br-ex``) exist and have the proper ports connected to them." msgstr "" #: ../ops-network-troubleshooting.rst:928 msgid "" "The Open vSwitch driver should and usually does manage this automatically, but it is useful to know how to do this by hand with the :command:`ovs-vsctl` command. This command has many more subcommands than we will use here; see the man page or use :command:`ovs-vsctl --help` for the full listing." msgstr "" #: ../ops-network-troubleshooting.rst:934 msgid "" "To list the bridges on a system, use :command:`ovs-vsctl list-br`. This example shows a compute node that has an internal bridge and a tunnel bridge. VLAN networks are trunked through the ``eth1`` network interface:" msgstr "" #: ../ops-network-troubleshooting.rst:946 msgid "" "Working from the physical interface inwards, we can see the chain of ports and bridges. 
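You can list the members of each bridge along the way with, for example, :command:`ovs-vsctl list-ports eth1-br`. 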
First, the bridge ``eth1-br``, which contains the physical network interface ``eth1`` and the virtual interface ``phy-eth1-br``:" msgstr "" #: ../ops-network-troubleshooting.rst:957 msgid "" "Next, the internal bridge, ``br-int``, contains ``int-eth1-br``, which pairs with ``phy-eth1-br`` to connect to the physical network shown in the previous bridge; ``patch-tun``, which is used to connect to the GRE tunnel bridge; and the TAP devices that connect to the instances currently running on the system:" msgstr "" #: ../ops-network-troubleshooting.rst:972 msgid "" "The tunnel bridge, ``br-tun``, contains the ``patch-int`` interface and ``gre-`` interfaces for each peer it connects to via GRE, one for each compute and network node in your cluster:" msgstr "" #: ../ops-network-troubleshooting.rst:986 msgid "" "If any of these links are missing or incorrect, it suggests a configuration error. Bridges can be added with ``ovs-vsctl add-br``, and ports can be added to bridges with ``ovs-vsctl add-port``. While running these by hand can be useful for debugging, it is imperative that manual changes that you intend to keep be reflected back into your configuration files." msgstr "" #: ../ops-network-troubleshooting.rst:996 msgid "Dealing with Network Namespaces" msgstr "" #: ../ops-network-troubleshooting.rst:998 msgid "" "Linux network namespaces are a kernel feature the networking service uses to support multiple isolated layer-2 networks with overlapping IP address ranges. The support may be disabled, but it is on by default. If it is enabled in your environment, your network nodes will run their dhcp-agents and l3-agents in isolated namespaces. Network interfaces and traffic on those interfaces will not be visible in the default namespace." msgstr "" #: ../ops-network-troubleshooting.rst:1006 msgid "To see whether you are using namespaces, run :command:`ip netns`:" msgstr "" #: ../ops-network-troubleshooting.rst:1017 msgid "" "L3-agent router namespaces are named ``qrouter-``, and dhcp-agent namespaces are named ``qdhcp-``. This output shows a network node with four networks running dhcp-agents, one of which is also running an l3-agent router. It's important to know which network you need to be working in. A list of existing networks and their UUIDs can be obtained by running ``openstack network list`` with administrative credentials." msgstr "" #: ../ops-network-troubleshooting.rst:1026 msgid "" "Once you've determined which namespace you need to work in, you can use any of the debugging tools mentioned earlier by prefixing the command with ``ip netns exec ``. For example, to see what network interfaces exist in the first qdhcp namespace returned above, do this:" msgstr "" #: ../ops-network-troubleshooting.rst:1046 msgid "" "From this you see that the DHCP server on that network is using the ``tape6256f7d-31`` device and has an IP address of ``10.0.1.100``. Seeing the address ``169.254.169.254``, you can also see that the dhcp-agent is running a metadata-proxy service. Any of the commands mentioned previously in this chapter can be run in the same way. It is also possible to run a shell, such as ``bash``, and have an interactive session within the namespace. In the latter case, exiting the shell returns you to the top-level default namespace."
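msgstr "" msgid "" "For example, to get an interactive shell inside a router namespace (a sketch; substitute a namespace name from your :command:`ip netns` output), run ``ip netns exec qrouter-<router_uuid> bash``; commands such as :command:`ip a` and :command:`tcpdump` then operate inside that namespace."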
msgstr "" #: ../ops-network-troubleshooting.rst:1056 msgid "Assign a lost IPv4 address back to a project" msgstr "" #: ../ops-network-troubleshooting.rst:1058 msgid "" "Using administrator credentials, confirm the lost IP address is still " "available:" msgstr "" #: ../ops-network-troubleshooting.rst:1065 msgid "Create a port:" msgstr "" #: ../ops-network-troubleshooting.rst:1071 msgid "Update the new port with the IPv4 address:" msgstr "" #: ../ops-network-troubleshooting.rst:1082 msgid "Tools for automated neutron diagnosis" msgstr "" #: ../ops-network-troubleshooting.rst:1084 msgid "" "`easyOVS `_ is a useful tool when it comes " "to operating your OpenvSwitch bridges and iptables on your OpenStack " "platform. It automatically associates the virtual ports with the VM MAC/IP, " "VLAN tag and namespace information, as well as the iptables rules for VMs." msgstr "" #: ../ops-network-troubleshooting.rst:1089 msgid "" "`Don `_ is another convenient " "network analysis and diagnostic system that provides a completely automated " "service for verifying and diagnosing the networking functionality provided " "by OVS." msgstr "" #: ../ops-network-troubleshooting.rst:1093 msgid "" "Additionally, you can refer to `neutron debug `_ for more options." msgstr "" #: ../ops-planning.rst:3 msgid "Planning for deploying and provisioning OpenStack" msgstr "" #: ../ops-planning.rst:5 msgid "" "The decisions you make with respect to provisioning and deployment will " "affect your maintenance of the cloud. Your configuration management will be " "able to evolve over time. However, more thought and design need to be done " "for upfront choices about deployment, disk partitioning, and network " "configuration." msgstr "" #: ../ops-planning.rst:11 msgid "" "A critical part of a cloud's scalability is the amount of effort that it " "takes to run your cloud. To minimize the operational cost of running your " "cloud, set up and use an automated deployment and configuration " "infrastructure with a configuration management system, such as :term:" "`Puppet` or :term:`Chef`. Combined, these systems greatly reduce manual " "effort and the chance for operator error." msgstr "" #: ../ops-planning.rst:18 msgid "" "This infrastructure includes systems to automatically install the operating " "system's initial configuration and later coordinate the configuration of all " "services automatically and centrally, which reduces both manual effort and " "the chance for error. Examples include Ansible, CFEngine, Chef, Puppet, and " "Salt. You can even use OpenStack to deploy OpenStack, named TripleO " "(OpenStack On OpenStack)." msgstr "" #: ../ops-planning.rst:26 msgid "Automated deployment" msgstr "" #: ../ops-planning.rst:28 msgid "" "An automated deployment system installs and configures operating systems on " "new servers, without intervention, after the absolute minimum amount of " "manual work, including physical racking, MAC-to-IP assignment, and power " "configuration. Typically, solutions rely on wrappers around PXE boot and " "TFTP servers for the basic operating system install and then hand off to an " "automated configuration management system." msgstr "" #: ../ops-planning.rst:35 msgid "" "Both Ubuntu and Red Hat Enterprise Linux include mechanisms for configuring " "the operating system, including preseed and kickstart, that you can use " "after a network boot. Typically, these are used to bootstrap an automated " "configuration system. 
Alternatively, you can use an image-based approach for deploying the operating system, such as systemimager. You can use both approaches with a virtualized infrastructure, such as when you run VMs to separate your control services and physical infrastructure." msgstr "" #: ../ops-planning.rst:44 msgid "" "When you create a deployment plan, focus on a few vital areas because they are very hard to modify post deployment. The next two sections talk about configurations for:" msgstr "" #: ../ops-planning.rst:48 msgid "Disk partitioning and disk array setup for scalability" msgstr "" #: ../ops-planning.rst:50 msgid "Networking configuration just for PXE booting" msgstr "" #: ../ops-planning.rst:53 msgid "Disk partitioning and RAID" msgstr "" #: ../ops-planning.rst:55 msgid "" "At the very base of any operating system are the hard drives on which the operating system (OS) is installed." msgstr "" #: ../ops-planning.rst:58 msgid "" "You must complete the following configurations on the server's hard drives:" msgstr "" #: ../ops-planning.rst:61 msgid "" "Partitioning, which provides greater flexibility for layout of operating system and swap space, as described below." msgstr "" #: ../ops-planning.rst:64 msgid "" "Adding to a RAID array (RAID stands for redundant array of independent disks), based on the number of disks you have available, so that you can add capacity as your cloud grows. Some options are described in more detail below." msgstr "" #: ../ops-planning.rst:69 msgid "" "The simplest option to get started is to use one hard drive with two partitions:" msgstr "" #: ../ops-planning.rst:72 msgid "" "File system to store files and directories, where all the data lives, including the root partition that starts and runs the system." msgstr "" #: ../ops-planning.rst:75 msgid "" "Swap space to free up memory for processes, as an independent area of the physical disk used only for swapping and nothing else." msgstr "" #: ../ops-planning.rst:78 msgid "" "RAID is not used in this simplistic one-drive setup because generally for production clouds, you want to ensure that if one disk fails, another can take its place. Instead, for production, use more than one disk. The number of disks determines what types of RAID arrays to build." msgstr "" #: ../ops-planning.rst:83 msgid "" "We recommend that you choose one of the following multiple-disk options:" msgstr "" #: ../ops-planning.rst:86 msgid "" "Partition all drives in the same way in a horizontal fashion, as shown in :ref:`partition_setup`." msgstr "" #: ../ops-planning.rst:89 msgid "" "With this option, you can assign different partitions to different RAID arrays. You can allocate partition 1 of disk one and two to the ``/boot`` partition mirror. You can make partition 2 of all disks the root partition mirror. You can use partition 3 of all disks for a ``cinder-volumes`` LVM partition running on a RAID 10 array." msgstr "" #: ../ops-planning.rst:99 msgid "Partition setup of drives" msgstr "" #: ../ops-planning.rst:101 msgid "" "While you might end up with unused partitions, such as partition 1 in disk three and four of this example, this option allows for maximum utilization of disk space. I/O performance might be an issue as a result of all disks being used for all tasks." msgstr "" #: ../ops-planning.rst:104 msgid "Option 1" msgstr "" #: ../ops-planning.rst:107 msgid "" "Add all raw disks to one large RAID array, either hardware or software based. 
You can partition this large array with the boot, root, swap, and LVM areas. This option is simple to implement and uses all partitions. However, disk I/O might suffer." msgstr "" #: ../ops-planning.rst:110 msgid "Option 2" msgstr "" #: ../ops-planning.rst:113 msgid "" "Dedicate entire disks to certain partitions. For example, you could allocate disk one and two entirely to the boot, root, and swap partitions under a RAID 1 mirror. Then, allocate disk three and four entirely to the LVM partition, also under a RAID 1 mirror. Disk I/O should be better because I/O is focused on dedicated tasks. However, the LVM partition is much smaller." msgstr "" #: ../ops-planning.rst:118 msgid "Option 3" msgstr "" #: ../ops-planning.rst:122 msgid "" "You may find that you can automate the partitioning itself. For example, MIT uses `Fully Automatic Installation (FAI) `_ to do the initial PXE-based partition and then install using a combination of min/max and percentage-based partitioning." msgstr "" #: ../ops-planning.rst:128 msgid "" "As with most architecture choices, the right answer depends on your environment. If you are using existing hardware, you know the disk density of your servers and can determine some decisions based on the options above. If you are going through a procurement process, your users' requirements also help you determine hardware purchases. Here are some examples from a private cloud providing web developers custom environments at AT&T. This example is from a specific deployment, so your existing hardware or procurement opportunity may vary from this. AT&T uses three types of hardware in its deployment:" msgstr "" #: ../ops-planning.rst:138 msgid "" "Hardware for controller nodes, used for all stateless OpenStack API services. About 32–64 GB memory, small attached disk, one processor, varied number of cores, such as 6–12." msgstr "" #: ../ops-planning.rst:142 msgid "" "Hardware for compute nodes. Typically 256 or 144 GB memory, two processors, 24 cores. 4–6 TB direct attached storage, typically in a RAID 5 configuration." msgstr "" #: ../ops-planning.rst:146 msgid "" "Hardware for storage nodes. Typically for these, the disk space is optimized for the lowest cost per GB of storage while maintaining rack-space efficiency." msgstr "" #: ../ops-planning.rst:150 msgid "" "Again, the right answer depends on your environment. You have to make your decision based on the trade-offs between space utilization, simplicity, and I/O performance." msgstr "" #: ../ops-planning.rst:155 msgid "Network configuration" msgstr "" #: ../ops-planning.rst:159 msgid "" "Network configuration is a very large topic that spans multiple areas of this book. For now, make sure that your servers can PXE boot and successfully communicate with the deployment server." msgstr "" #: ../ops-planning.rst:163 msgid "" "For example, you usually cannot configure NICs for VLANs when PXE booting. Additionally, you usually cannot PXE boot with bonded NICs. If you run into this scenario, consider using a simple 1 GB switch in a private network on which only your cloud communicates." msgstr "" #: ../ops-planning.rst:169 msgid "Automated configuration" msgstr "" #: ../ops-planning.rst:171 msgid "" "The purpose of automatic configuration management is to establish and maintain the consistency of a system without using human intervention. 
You " "want to maintain consistency in your deployments so that you can have the " "same cloud every time, repeatably. Proper use of automatic configuration-" "management tools ensures that components of the cloud systems are in " "particular states, in addition to simplifying deployment, and configuration " "change propagation." msgstr "" #: ../ops-planning.rst:179 msgid "" "These tools also make it possible to test and roll back changes, as they are " "fully repeatable. Conveniently, a large body of work has been done by the " "OpenStack community in this space. Puppet, a configuration management tool, " "even provides official modules for OpenStack projects in an OpenStack " "infrastructure system known as `Puppet OpenStack `_. Chef configuration management is provided within https://git." "openstack.org/cgit/openstack/openstack-chef-repo. Additional configuration " "management systems include Juju, Ansible, and Salt. Also, PackStack is a " "command-line utility for Red Hat Enterprise Linux and derivatives that uses " "Puppet modules to support rapid deployment of OpenStack on existing servers " "over an SSH connection." msgstr "" #: ../ops-planning.rst:192 msgid "" "An integral part of a configuration-management system is the item that it " "controls. You should carefully consider all of the items that you want, or " "do not want, to be automatically managed. For example, you may not want to " "automatically format hard drives with user data." msgstr "" #: ../ops-planning.rst:198 msgid "Remote management" msgstr "" #: ../ops-planning.rst:200 msgid "" "In our experience, most operators don't sit right next to the servers " "running the cloud, and many don't necessarily enjoy visiting the data " "center. OpenStack should be entirely remotely configurable, but sometimes " "not everything goes according to plan." msgstr "" #: ../ops-planning.rst:205 msgid "" "In this instance, having an out-of-band access into nodes running OpenStack " "components is a boon. The IPMI protocol is the de facto standard here, and " "acquiring hardware that supports it is highly recommended to achieve that " "lights-out data center aim." msgstr "" #: ../ops-planning.rst:210 msgid "" "In addition, consider remote power control as well. While IPMI usually " "controls the server's power state, having remote access to the PDU that the " "server is plugged into can really be useful for situations when everything " "seems wedged." msgstr "" #: ../ops-planning.rst:216 msgid "Other considerations" msgstr "" #: ../ops-planning.rst:220 msgid "" "You can save time by understanding the use cases for the cloud you want to " "create. Use cases for OpenStack are varied. Some include object storage " "only; others require preconfigured compute resources to speed development-" "environment set up; and others need fast provisioning of compute resources " "that are already secured per tenant with private networks. Your users may " "have need for highly redundant servers to make sure their legacy " "applications continue to run. Perhaps a goal would be to architect these " "legacy applications so that they run on multiple instances in a cloudy, " "fault-tolerant way, but not make it a goal to add to those clusters over " "time. Your users may indicate that they need scaling considerations because " "of heavy Windows server use." msgstr "" #: ../ops-planning.rst:232 msgid "" "You can save resources by looking at the best fit for the hardware you have " "in place already. 
You might have some high-density storage hardware available. You could format and repurpose those servers for OpenStack Object Storage. All of these considerations and input from users help you build your use case and your deployment plan." msgstr "" #: ../ops-planning.rst:240 msgid "" "For further research about OpenStack deployment, investigate the supported and documented preconfigured, prepackaged installers for OpenStack from companies such as `Canonical `_, `Cisco `_, `Cloudscaling `_, `IBM `_, `Metacloud `_, `Mirantis `_, `Rackspace `_, `Red Hat `_, `SUSE `_, and `SwiftStack `_." msgstr "" #: ../ops-projects-users-summary.rst:5 msgid "" "One key element of systems administration that is often overlooked is that end users are the reason systems administrators exist. Don't go the BOFH route and terminate every user who causes an alert to go off. Work with users to understand what they're trying to accomplish and see how your environment can better assist them in achieving their goals. Meet your users' needs by organizing your users into projects, applying policies, managing quotas, and working with them." msgstr "" #: ../ops-projects-users.rst:3 msgid "Managing Projects and Users" msgstr "" #: ../ops-projects-users.rst:12 msgid "" "An OpenStack cloud does not have much value without users. This chapter covers topics that relate to managing users, projects, and quotas. It describes users and projects as defined by version 2 of the OpenStack Identity API." msgstr "" #: ../ops-projects-users.rst:18 msgid "Projects or Tenants?" msgstr "" #: ../ops-projects-users.rst:20 msgid "" "In OpenStack user interfaces and documentation, a group of users is referred to as a :term:`project` or :term:`tenant`. These terms are interchangeable." msgstr "" #: ../ops-projects-users.rst:24 msgid "" "The initial implementation of OpenStack Compute had its own authentication system and used the term ``project``. When authentication moved into the OpenStack Identity (keystone) project, it used the term ``tenant`` to refer to a group of users. Because of this legacy, some of the OpenStack tools refer to projects and some refer to tenants." msgstr "" #: ../ops-projects-users.rst:32 msgid "" "This guide uses the term ``project``, unless an example shows interaction with a tool that uses the term ``tenant``." msgstr "" #: ../ops-projects.rst:3 msgid "Managing Projects" msgstr "" #: ../ops-projects.rst:5 msgid "" "Users must be associated with at least one project, though they may belong to many. Therefore, you should add at least one project before adding users." msgstr "" #: ../ops-projects.rst:10 msgid "Adding Projects" msgstr "" #: ../ops-projects.rst:12 msgid "To create a project through the OpenStack dashboard:" msgstr "" #: ../ops-projects.rst:14 msgid "Log in as an administrative user." msgstr "" #: ../ops-projects.rst:16 msgid "Select the :guilabel:`Identity` tab in the left navigation bar." msgstr "" #: ../ops-projects.rst:18 msgid "Under the Identity tab, click :guilabel:`Projects`." msgstr "" #: ../ops-projects.rst:20 msgid "Click the :guilabel:`Create Project` button." msgstr "" #: ../ops-projects.rst:22 msgid "" "You are prompted for a project name and an optional, but recommended, description. Select the check box at the bottom of the form to enable this project. 
By default, it is enabled, as shown below:" msgstr "" #: ../ops-projects.rst:29 msgid "" "It is also possible to add project members and adjust the project quotas. We'll discuss those actions later, but in practice, it can be quite convenient to deal with all these operations at one time." msgstr "" #: ../ops-projects.rst:33 msgid "" "To add a project through the command line, you must use the OpenStack command line client." msgstr "" #: ../ops-projects.rst:40 msgid "" "This command creates a project named ``demo``. Optionally, you can add a description string by appending ``--description PROJECT_DESCRIPTION``, which can be very useful. You can also create a project in a disabled state by appending ``--disable`` to the command. By default, projects are created in an enabled state." msgstr "" #: ../ops-quotas.rst:3 msgid "Quotas" msgstr "" #: ../ops-quotas.rst:5 msgid "" "To prevent system capacities from being exhausted without notification, you can set up :term:`quotas `. Quotas are operational limits. For example, the number of gigabytes allowed per tenant can be controlled to ensure that a single tenant cannot consume all of the disk space. Quotas are currently enforced at the tenant (or project) level, rather than the user level." msgstr "" #: ../ops-quotas.rst:14 msgid "" "Because a single tenant could otherwise use up all the available resources, OpenStack ships with sensible default quotas. You should pay attention to which quota settings make sense for your hardware capabilities." msgstr "" #: ../ops-quotas.rst:19 msgid "" "Using the command-line interface, you can manage quotas for the OpenStack Compute service and the Block Storage service." msgstr "" #: ../ops-quotas.rst:22 msgid "" "Typically, default values are changed because a tenant requires more than the OpenStack default of 10 volumes per tenant, or more than the OpenStack default of 1 TB of disk space on a compute node." msgstr "" #: ../ops-quotas.rst:28 msgid "To view all tenants, run:" msgstr "" #: ../ops-quotas.rst:43 msgid "Set Image Quotas" msgstr "" #: ../ops-quotas.rst:45 msgid "" "You can restrict a project's image storage by total number of bytes. Currently, this quota is applied cloud-wide, so if you were to set an Image quota limit of 5 GB, then all projects in your cloud would be able to store only 5 GB of images and snapshots." msgstr "" #: ../ops-quotas.rst:50 msgid "" "To enable this feature, edit the ``/etc/glance/glance-api.conf`` file, and under the ``[DEFAULT]`` section, add:" msgstr "" #: ../ops-quotas.rst:57 msgid "For example, to restrict a project's image storage to 5 GB, do this:" msgstr "" #: ../ops-quotas.rst:65 msgid "" "There is a configuration option in ``/etc/glance/glance-api.conf`` that limits the number of members allowed per image, called ``image_member_quota``, set to 128 by default. That setting is a different quota from the storage quota." msgstr "" #: ../ops-quotas.rst:71 msgid "Set Compute Service Quotas" msgstr "" #: ../ops-quotas.rst:73 msgid "" "As an administrative user, you can update the Compute service quotas for an existing tenant, as well as update the quota defaults for a new tenant. See :ref:`table_compute_quota`."
msgstr "" #: ../ops-quotas.rst:79 msgid "Compute quota descriptions" msgstr "" #: ../ops-quotas.rst:83 msgid "Quota" msgstr "" #: ../ops-quotas.rst:84 ../ops-quotas.rst:359 ../ops-users.rst:22 msgid "Description" msgstr "" #: ../ops-quotas.rst:85 ../ops-quotas.rst:358 msgid "Property name" msgstr "" #: ../ops-quotas.rst:86 msgid "Fixed IPs" msgstr "" #: ../ops-quotas.rst:87 msgid "" "Number of fixed IP addresses allowed per project. This number must be equal " "to or greater than the number of allowed instances." msgstr "" #: ../ops-quotas.rst:90 msgid "``fixed-ips``" msgstr "" #: ../ops-quotas.rst:91 ../ops-user-facing-operations.rst:1910 msgid "Floating IPs" msgstr "" #: ../ops-quotas.rst:92 msgid "Number of floating IP addresses allowed per project." msgstr "" #: ../ops-quotas.rst:93 msgid "``floating-ips``" msgstr "" #: ../ops-quotas.rst:94 msgid "Injected file content bytes" msgstr "" #: ../ops-quotas.rst:95 msgid "Number of content bytes allowed per injected file." msgstr "" #: ../ops-quotas.rst:96 msgid "``injected-file-content-bytes``" msgstr "" #: ../ops-quotas.rst:97 msgid "Injected file path bytes" msgstr "" #: ../ops-quotas.rst:98 msgid "Number of bytes allowed per injected file path." msgstr "" #: ../ops-quotas.rst:99 msgid "``injected-file-path-bytes``" msgstr "" #: ../ops-quotas.rst:100 msgid "Injected files" msgstr "" #: ../ops-quotas.rst:101 msgid "Number of injected files allowed per project." msgstr "" #: ../ops-quotas.rst:102 msgid "``injected-files``" msgstr "" #: ../ops-quotas.rst:104 msgid "Number of instances allowed per project." msgstr "" #: ../ops-quotas.rst:106 msgid "Key pairs" msgstr "" #: ../ops-quotas.rst:107 msgid "Number of key pairs allowed per user." msgstr "" #: ../ops-quotas.rst:108 msgid "``key-pairs``" msgstr "" #: ../ops-quotas.rst:109 msgid "Metadata items" msgstr "" #: ../ops-quotas.rst:110 msgid "Number of metadata items allowed per instance." msgstr "" #: ../ops-quotas.rst:111 msgid "``metadata-items``" msgstr "" #: ../ops-quotas.rst:112 msgid "RAM" msgstr "" #: ../ops-quotas.rst:113 msgid "Megabytes of instance RAM allowed per project." msgstr "" #: ../ops-quotas.rst:114 msgid "``ram``" msgstr "" #: ../ops-quotas.rst:115 msgid "Security group rules" msgstr "" #: ../ops-quotas.rst:116 msgid "Number of security group rules per project." msgstr "" #: ../ops-quotas.rst:117 msgid "``security-group-rules``" msgstr "" #: ../ops-quotas.rst:118 msgid "Security groups" msgstr "" #: ../ops-quotas.rst:119 msgid "Number of security groups per project." msgstr "" #: ../ops-quotas.rst:120 msgid "``security-groups``" msgstr "" #: ../ops-quotas.rst:121 ../ops-user-facing-operations.rst:440 msgid "VCPUs" msgstr "" #: ../ops-quotas.rst:122 msgid "Number of instance cores allowed per project." msgstr "" #: ../ops-quotas.rst:123 msgid "``cores``" msgstr "" #: ../ops-quotas.rst:124 msgid "Server Groups" msgstr "" #: ../ops-quotas.rst:125 msgid "Number of server groups per project." msgstr "" #: ../ops-quotas.rst:126 msgid "``server_groups``" msgstr "" #: ../ops-quotas.rst:127 msgid "Server Group Members" msgstr "" #: ../ops-quotas.rst:128 msgid "Number of servers per server group." msgstr "" #: ../ops-quotas.rst:129 msgid "``server_group_members``" msgstr "" #: ../ops-quotas.rst:132 msgid "View and update compute quotas for a tenant (project)" msgstr "" #: ../ops-quotas.rst:134 msgid "" "As an administrative user, you can use the :command:`nova quota-*` commands, " "which are provided by the ``python-novaclient`` package, to view and update " "tenant quotas." 
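msgstr ""

# A minimal sketch of the quota workflow detailed in the steps that follow,
# using the legacy ``python-novaclient`` CLI named above; the project name
# ``demo`` and the value ``20`` are illustrative:
msgid ""
"$ nova quota-defaults\n"
"$ tenant=$(openstack project show -c id -f value demo)\n"
"$ nova quota-show --tenant $tenant\n"
"$ nova quota-update --instances 20 $tenant"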
msgstr "" #: ../ops-quotas.rst:138 msgid "**To view and update default quota values**" msgstr "" #: ../ops-quotas.rst:140 ../ops-quotas.rst:376 msgid "List all default quotas for all tenants, as follows:" msgstr "" #: ../ops-quotas.rst:170 msgid "Update a default value for a new tenant, as follows:" msgstr "" #: ../ops-quotas.rst:182 msgid "**To view quota values for a tenant (project)**" msgstr "" #: ../ops-quotas.rst:184 ../ops-quotas.rst:427 msgid "Place the tenant ID in a variable:" msgstr "" #: ../ops-quotas.rst:190 msgid "List the currently set quota values for a tenant, as follows:" msgstr "" #: ../ops-quotas.rst:220 msgid "**To update quota values for a tenant (project)**" msgstr "" #: ../ops-quotas.rst:222 ../ops-quotas.rst:382 msgid "Obtain the tenant ID, as follows:" msgstr "" #: ../ops-quotas.rst:228 ../ops-quotas.rst:433 msgid "Update a particular quota value, as follows:" msgstr "" #: ../ops-quotas.rst:261 msgid "To view a list of options for the ``nova quota-update`` command, run:" msgstr "" #: ../ops-quotas.rst:268 msgid "Set Object Storage Quotas" msgstr "" #: ../ops-quotas.rst:270 msgid "There are currently two categories of quotas for Object Storage:" msgstr "" #: ../ops-quotas.rst:273 msgid "" "Limit the total size (in bytes) or number of objects that can be stored in a " "single container." msgstr "" #: ../ops-quotas.rst:274 msgid "Container quotas" msgstr "" #: ../ops-quotas.rst:277 msgid "" "Limit the total size (in bytes) that a user has available in the Object " "Storage service." msgstr "" #: ../ops-quotas.rst:278 msgid "Account quotas" msgstr "" #: ../ops-quotas.rst:280 msgid "" "To take advantage of either container quotas or account quotas, your Object " "Storage proxy server must have ``container_quotas`` or ``account_quotas`` " "(or both) added to the ``[pipeline:main]`` pipeline. Each quota type also " "requires its own section in the ``proxy-server.conf`` file:" msgstr "" #: ../ops-quotas.rst:297 msgid "" "To view and update Object Storage quotas, use the :command:`swift` command " "provided by the ``python-swiftclient`` package. Any user included in the " "project can view the quotas placed on their project. To update Object " "Storage quotas on a project, you must have the role of ResellerAdmin in the " "project that the quota is being applied to." msgstr "" #: ../ops-quotas.rst:303 msgid "To view account quotas placed on a project:" msgstr "" #: ../ops-quotas.rst:317 msgid "To apply or update account quotas on a project:" msgstr "" #: ../ops-quotas.rst:324 msgid "For example, to place a 5 GB quota on an account:" msgstr "" #: ../ops-quotas.rst:331 msgid "To verify the quota, run the :command:`swift stat` command again:" msgstr "" #: ../ops-quotas.rst:346 msgid "Set Block Storage Quotas" msgstr "" #: ../ops-quotas.rst:348 msgid "" "As an administrative user, you can update the Block Storage service quotas " "for a tenant, as well as update the quota defaults for a new tenant. See :" "ref:`table_block_storage_quota`." msgstr "" #: ../ops-quotas.rst:354 msgid "Table: Block Storage quota descriptions" msgstr "" #: ../ops-quotas.rst:360 msgid "gigabytes" msgstr "" #: ../ops-quotas.rst:361 msgid "Number of volume gigabytes allowed per tenant" msgstr "" #: ../ops-quotas.rst:362 msgid "snapshots" msgstr "" #: ../ops-quotas.rst:363 msgid "Number of Block Storage snapshots allowed per tenant." 
msgstr "" #: ../ops-quotas.rst:364 msgid "volumes" msgstr "" #: ../ops-quotas.rst:365 msgid "Number of Block Storage volumes allowed per tenant" msgstr "" #: ../ops-quotas.rst:368 msgid "View and update Block Storage quotas for a tenant (project)" msgstr "" #: ../ops-quotas.rst:370 msgid "" "As an administrative user, you can use the :command:`cinder quota-*` " "commands, which are provided by the ``python-cinderclient`` package, to view " "and update tenant quotas." msgstr "" #: ../ops-quotas.rst:374 msgid "**To view and update default Block Storage quota values**" msgstr "" #: ../ops-quotas.rst:401 msgid "" "To update a default value for a new tenant, update the property in the ``/" "etc/cinder/cinder.conf`` file." msgstr "" #: ../ops-quotas.rst:404 msgid "**To view Block Storage quotas for a tenant (project)**" msgstr "" #: ../ops-quotas.rst:406 msgid "View quotas for the tenant, as follows:" msgstr "" #: ../ops-quotas.rst:425 msgid "**To update Block Storage quotas for a tenant (project)**" msgstr "" #: ../ops-uninstall.rst:3 msgid "Uninstalling" msgstr "" #: ../ops-uninstall.rst:5 msgid "" "While we'd always recommend using your automated deployment system to " "reinstall systems from scratch, sometimes you do need to remove OpenStack " "from a system the hard way. Here's how:" msgstr "" #: ../ops-uninstall.rst:9 msgid "Remove all packages." msgstr "" #: ../ops-uninstall.rst:10 msgid "Remove remaining files." msgstr "" #: ../ops-uninstall.rst:11 msgid "Remove databases." msgstr "" #: ../ops-uninstall.rst:13 msgid "" "These steps depend on your underlying distribution, but in general you " "should be looking for :command:`purge` commands in your package manager, " "like :command:`aptitude purge ~c $package`. Following this, you can look for " "orphaned files in the directories referenced throughout this guide. To " "uninstall the database properly, refer to the manual appropriate for the " "product in use." msgstr "" #: ../ops-upgrades.rst:3 msgid "Upgrades" msgstr "" #: ../ops-upgrades.rst:5 msgid "" "With the exception of Object Storage, upgrading from one version of " "OpenStack to another can take a great deal of effort. This chapter provides " "some guidance on the operational aspects that you should consider for " "performing an upgrade for an OpenStack environment." msgstr "" #: ../ops-upgrades.rst:11 msgid "Pre-upgrade considerations" msgstr "" #: ../ops-upgrades.rst:14 msgid "Upgrade planning" msgstr "" #: ../ops-upgrades.rst:16 msgid "" "Thoroughly review the `release notes `_ to " "learn about new, updated, and deprecated features. Find incompatibilities " "between versions." msgstr "" #: ../ops-upgrades.rst:21 msgid "" "Consider the impact of an upgrade to users. The upgrade process interrupts " "management of your environment including the dashboard. If you properly " "prepare for the upgrade, existing instances, networking, and storage should " "continue to operate. However, instances might experience intermittent " "network interruptions." msgstr "" #: ../ops-upgrades.rst:27 msgid "" "Consider the approach to upgrading your environment. You can perform an " "upgrade with operational instances, but this is a dangerous approach. You " "might consider using live migration to temporarily relocate instances to " "other compute nodes while performing upgrades. However, you must ensure " "database consistency throughout the process; otherwise your environment " "might become unstable. 
Also, don't forget to provide sufficient notice to " "your users, including giving them plenty of time to perform their own " "backups." msgstr "" #: ../ops-upgrades.rst:36 msgid "" "Consider adopting structure and options from the service configuration files " "and merging them with existing configuration files. The `OpenStack " "Configuration Reference `_ contains new, updated, and deprecated options for most services." msgstr "" #: ../ops-upgrades.rst:42 msgid "" "Like all major system upgrades, your upgrade could fail for one or more " "reasons. You can prepare for this situation by having the ability to roll " "back your environment to the previous release, including databases, " "configuration files, and packages. We provide an example process for rolling " "back your environment in :ref:`rolling_back_a_failed_upgrade`." msgstr "" #: ../ops-upgrades.rst:49 msgid "" "Develop an upgrade procedure and assess it thoroughly by using a test " "environment similar to your production environment." msgstr "" #: ../ops-upgrades.rst:53 msgid "Pre-upgrade testing environment" msgstr "" #: ../ops-upgrades.rst:55 msgid "" "The most important step is the pre-upgrade testing. If you are upgrading " "immediately after release of a new version, undiscovered bugs might hinder " "your progress. Some deployers prefer to wait until the first point release " "is announced. However, if you have a significant deployment, you might " "follow the development and testing of the release to ensure that bugs for " "your use cases are fixed." msgstr "" #: ../ops-upgrades.rst:62 msgid "" "Each OpenStack cloud is different even if you have a near-identical " "architecture as described in this guide. As a result, you must still test " "upgrades between versions in your environment using an approximate clone of " "your environment." msgstr "" #: ../ops-upgrades.rst:67 msgid "" "However, that is not to say that it needs to be the same size or use " "identical hardware as the production environment. It is important to " "consider the hardware and scale of the cloud that you are upgrading. The " "following tips can help you minimize the cost:" msgstr "" #: ../ops-upgrades.rst:73 msgid "" "The simplest place to start testing the next version of OpenStack is by " "setting up a new environment inside your own cloud. This might seem odd, " "especially the double virtualization used in running compute nodes. But it " "is a sure way to very quickly test your configuration." msgstr "" #: ../ops-upgrades.rst:77 msgid "Use your own cloud" msgstr "" #: ../ops-upgrades.rst:80 msgid "" "Consider using a public cloud to test the scalability limits of your cloud " "controller configuration. Most public clouds bill by the hour, which means " "it can be inexpensive to perform even a test with many nodes." msgstr "" #: ../ops-upgrades.rst:83 msgid "Use a public cloud" msgstr "" #: ../ops-upgrades.rst:86 msgid "" "If you use an external storage plug-in or shared file system with your " "cloud, you can test whether it works by creating a second share or endpoint. " "This allows you to test the system before entrusting the new version onto " "your storage." msgstr "" #: ../ops-upgrades.rst:89 msgid "Make another storage endpoint on the same system" msgstr "" #: ../ops-upgrades.rst:92 msgid "" "Even at smaller-scale testing, look for excess network packets to determine " "whether something is going horribly wrong in inter-component communication."
msgstr "" #: ../ops-upgrades.rst:94 msgid "Watch the network" msgstr "" #: ../ops-upgrades.rst:96 msgid "To set up the test environment, you can use one of several methods:" msgstr "" #: ../ops-upgrades.rst:98 msgid "" "Do a full manual install by using the `Installation Tutorials and Guides " "`_ for your " "platform. Review the final configuration files and installed packages." msgstr "" #: ../ops-upgrades.rst:103 msgid "" "Create a clone of your automated configuration infrastructure with changed " "package repository URLs." msgstr "" #: ../ops-upgrades.rst:106 msgid "Alter the configuration until it works." msgstr "" #: ../ops-upgrades.rst:108 msgid "" "Either approach is valid. Use the approach that matches your experience." msgstr "" #: ../ops-upgrades.rst:110 msgid "" "An upgrade pre-testing system is excellent for getting the configuration to " "work. However, it is important to note that the historical use of the system " "and differences in user interaction can affect the success of upgrades." msgstr "" #: ../ops-upgrades.rst:115 msgid "" "If possible, we highly recommend that you dump your production database " "tables and test the upgrade in your development environment using this data. " "Several MySQL bugs have been uncovered during database migrations because of " "slight table differences between a fresh installation and tables that " "migrated from one version to another. This has an impact on large real " "datasets, which you do not want to encounter during a production outage." msgstr "" #: ../ops-upgrades.rst:123 msgid "" "Artificial scale testing can go only so far. After your cloud is upgraded, " "you must pay careful attention to the performance aspects of your cloud." msgstr "" #: ../ops-upgrades.rst:128 msgid "Upgrade Levels" msgstr "" #: ../ops-upgrades.rst:130 msgid "" "Upgrade levels are a feature added to OpenStack Compute in the Grizzly " "release to provide version locking on the RPC (Message Queue) communications " "between the various Compute services." msgstr "" #: ../ops-upgrades.rst:134 msgid "" "This functionality is an important piece of the puzzle when it comes to live " "upgrades and is conceptually similar to the existing API versioning that " "allows OpenStack services of different versions to communicate without issue." msgstr "" #: ../ops-upgrades.rst:139 msgid "" "Without upgrade levels, an X+1 version Compute service can receive and " "understand X version RPC messages, but it can only send out X+1 version RPC " "messages. For example, if a nova-conductor process has been upgraded to X+1 " "version, then the conductor service will be able to understand messages from " "X version nova-compute processes, but those compute services will not be " "able to understand messages sent by the conductor service." msgstr "" #: ../ops-upgrades.rst:147 msgid "" "During an upgrade, operators can add configuration options to ``nova.conf`` " "which lock the version of RPC messages and allow live upgrading of the " "services without interruption caused by version mismatch. The configuration " "options allow the specification of RPC version numbers if desired, but " "release name aliases are also supported. For example:" msgstr "" #: ../ops-upgrades.rst:161 msgid "" "will keep the RPC version locked across the specified services to the RPC " "version used in X+1. As all instances of a particular service are upgraded " "to the newer version, the corresponding line can be removed from " "``nova.conf``.
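msgstr ""

# The example block referenced above ("For example:") is not extracted into
# this catalog. A sketch of the ``nova.conf`` options it illustrates; the
# release alias ``mitaka`` stands in for whichever release you are
# upgrading from:
msgid ""
"[upgrade_levels]\n"
"compute = mitaka\n"
"conductor = mitaka\n"
"scheduler = mitaka"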
msgstr "" #: ../ops-upgrades.rst:166 msgid "" "Using this functionality, ideally one would lock the RPC version to the " "OpenStack version being upgraded from on nova-compute nodes, to ensure that, " "for example X+1 version nova-compute processes will continue to work with X " "version nova-conductor processes while the upgrade completes. Once the " "upgrade of nova-compute processes is complete, the operator can move onto " "upgrading nova-conductor and remove the version locking for nova-compute in " "``nova.conf``." msgstr "" #: ../ops-upgrades.rst:175 msgid "Upgrade process" msgstr "" #: ../ops-upgrades.rst:177 msgid "" "This section describes the process to upgrade a basic OpenStack deployment " "based on the basic two-node architecture in the `Installation Tutorials and " "Guides `_. All " "nodes must run a supported distribution of Linux with a recent kernel and " "the current release packages." msgstr "" #: ../ops-upgrades.rst:185 msgid "Service specific upgrade instructions" msgstr "" #: ../ops-upgrades.rst:187 msgid "" "Refer to the following upgrade notes for information on upgrading specific " "OpenStack services:" msgstr "" #: ../ops-upgrades.rst:190 msgid "" "`Networking service (neutron) upgrades `_" msgstr "" #: ../ops-upgrades.rst:192 msgid "" "`Compute service (nova) upgrades `_" msgstr "" #: ../ops-upgrades.rst:194 msgid "" "`Identity service (keystone) upgrades `_" msgstr "" #: ../ops-upgrades.rst:196 msgid "" "`Block Storage service (cinder) upgrades `_" msgstr "" #: ../ops-upgrades.rst:198 msgid "" "`Image service (glance) zero downtime database upgrades `_" msgstr "" #: ../ops-upgrades.rst:200 msgid "" "`Image service (glance) rolling upgrades `_" msgstr "" #: ../ops-upgrades.rst:202 msgid "" "`Bare Metal service (ironic) upgrades `_" msgstr "" #: ../ops-upgrades.rst:204 msgid "" "`Object Storage service (swift) upgrades `_" msgstr "" #: ../ops-upgrades.rst:206 msgid "" "`Telemetry service (ceilometer) upgrades `_" msgstr "" #: ../ops-upgrades.rst:210 msgid "Prerequisites" msgstr "" #: ../ops-upgrades.rst:212 msgid "" "Perform some cleaning of the environment prior to starting the upgrade " "process to ensure a consistent state. For example, instances not fully " "purged from the system after deletion might cause indeterminate behavior." msgstr "" #: ../ops-upgrades.rst:217 msgid "" "For environments using the OpenStack Networking service (neutron), verify " "the release version of the database. For example:" msgstr "" #: ../ops-upgrades.rst:226 msgid "Perform a backup" msgstr "" #: ../ops-upgrades.rst:228 msgid "Save the configuration files on all nodes. For example:" msgstr "" #: ../ops-upgrades.rst:241 msgid "" "You can modify this example script on each node to handle different services." msgstr "" #: ../ops-upgrades.rst:244 msgid "" "Make a full database backup of your production data. Since the Kilo release, " "database downgrades are not supported, and restoring from backup is the only " "method available to retrieve a previous database version." msgstr "" #: ../ops-upgrades.rst:254 msgid "" "Consider updating your SQL server configuration as described in the " "`Installation Tutorials and Guides `_." msgstr "" #: ../ops-upgrades.rst:259 msgid "Manage repositories" msgstr "" #: ../ops-upgrades.rst:261 msgid "On all nodes:" msgstr "" #: ../ops-upgrades.rst:263 msgid "Remove the repository for the previous release packages." msgstr "" #: ../ops-upgrades.rst:265 msgid "Add the repository for the new release packages." 
msgstr "" #: ../ops-upgrades.rst:267 msgid "Update the repository database." msgstr "" #: ../ops-upgrades.rst:270 msgid "Upgrade packages on each node" msgstr "" #: ../ops-upgrades.rst:272 msgid "" "Depending on your specific configuration, upgrading all packages might " "restart or break services supplemental to your OpenStack environment. For " "example, if you use the TGT iSCSI framework for Block Storage volumes and " "the upgrade includes new packages for it, the package manager might restart " "the TGT iSCSI services and impact connectivity to volumes." msgstr "" #: ../ops-upgrades.rst:279 msgid "" "If the package manager prompts you to update configuration files, reject the " "changes. The package manager appends a suffix to newer versions of " "configuration files. Consider reviewing and adopting content from these " "files." msgstr "" #: ../ops-upgrades.rst:286 msgid "" "You may need to explicitly install the ``ipset`` package if your " "distribution does not install it as a dependency." msgstr "" #: ../ops-upgrades.rst:290 msgid "Update services" msgstr "" #: ../ops-upgrades.rst:292 msgid "" "To update a service on each node, you generally modify one or more " "configuration files, stop the service, synchronize the database schema, and " "start the service. Some services require different steps. We recommend " "verifying operation of each service before proceeding to the next service." msgstr "" #: ../ops-upgrades.rst:298 msgid "" "The order in which you should upgrade services, and any changes from the " "general upgrade process, are described below:" msgstr "" #: ../ops-upgrades.rst:301 msgid "**Controller node**" msgstr "" #: ../ops-upgrades.rst:303 msgid "" "Identity service - Clear any expired tokens before synchronizing the " "database." msgstr "" #: ../ops-upgrades.rst:306 msgid "Image service" msgstr "" #: ../ops-upgrades.rst:308 msgid "Compute service, including networking components." msgstr "" #: ../ops-upgrades.rst:310 msgid "Networking service" msgstr "" #: ../ops-upgrades.rst:312 msgid "Block Storage service" msgstr "" #: ../ops-upgrades.rst:314 msgid "" "Dashboard - In typical environments, updating Dashboard only requires " "restarting the Apache HTTP service." msgstr "" #: ../ops-upgrades.rst:317 msgid "Orchestration service" msgstr "" #: ../ops-upgrades.rst:319 msgid "" "Telemetry service - In typical environments, updating the Telemetry service " "only requires restarting the service." msgstr "" #: ../ops-upgrades.rst:322 msgid "Compute service - Edit the configuration file and restart the service." msgstr "" #: ../ops-upgrades.rst:324 ../ops-upgrades.rst:333 msgid "" "Networking service - Edit the configuration file and restart the service." msgstr "" #: ../ops-upgrades.rst:326 msgid "**Storage nodes**" msgstr "" #: ../ops-upgrades.rst:328 msgid "" "Block Storage service - Updating the Block Storage service only requires " "restarting the service." msgstr "" #: ../ops-upgrades.rst:331 msgid "**Compute nodes**" msgstr "" #: ../ops-upgrades.rst:336 msgid "Final steps" msgstr "" #: ../ops-upgrades.rst:338 msgid "" "On all distributions, you must perform some final tasks to complete the " "upgrade process." msgstr "" #: ../ops-upgrades.rst:341 msgid "" "Decrease DHCP timeouts by modifying the :file:`/etc/nova/nova.conf` file on " "the compute nodes back to the original value for your environment." msgstr "" #: ../ops-upgrades.rst:344 msgid "" "Update all ``.ini`` files to match passwords and pipelines as required for " "the OpenStack release in your environment.
msgstr "" #: ../ops-upgrades.rst:347 msgid "" "After migration, users see different results from :command:`openstack image " "list` and :command:`glance image-list`. To ensure users see the same images " "in the list commands, edit the :file:`/etc/glance/policy.json` file and :" "file:`/etc/nova/policy.json` file to contain ``\"context_is_admin\": \"role:" "admin\"``, which limits access to private images for projects." msgstr "" #: ../ops-upgrades.rst:354 msgid "" "Verify proper operation of your environment. Then, notify your users that " "their cloud is operating normally again." msgstr "" #: ../ops-upgrades.rst:360 msgid "Rolling back a failed upgrade" msgstr "" #: ../ops-upgrades.rst:362 msgid "" "This section provides guidance for rolling back to a previous release of " "OpenStack. All distributions follow a similar procedure." msgstr "" #: ../ops-upgrades.rst:367 msgid "" "Rolling back your environment should be the final course of action since you " "are likely to lose any data added since the backup." msgstr "" #: ../ops-upgrades.rst:370 msgid "" "A common scenario is to take down production management services in " "preparation for an upgrade, complete part of the upgrade process, and " "discover one or more problems not encountered during testing. As a " "consequence, you must roll back your environment to the original \"known good" "\" state. You must also ensure that you did not make any state changes after " "attempting the upgrade process: no new instances, networks, storage volumes, " "and so on. Any of these new resources will be in a frozen state after the " "databases are restored from backup." msgstr "" #: ../ops-upgrades.rst:379 msgid "" "Within this scope, you must complete these steps to successfully roll back " "your environment:" msgstr "" #: ../ops-upgrades.rst:382 msgid "Roll back configuration files." msgstr "" #: ../ops-upgrades.rst:384 msgid "Restore databases from backup." msgstr "" #: ../ops-upgrades.rst:386 msgid "Roll back packages." msgstr "" #: ../ops-upgrades.rst:388 msgid "" "You should verify that you have the requisite backups to restore. Rolling " "back upgrades is a tricky process because distributions tend to put much " "more effort into testing upgrades than downgrades. Broken downgrades take " "significantly more effort to troubleshoot and resolve than broken upgrades. " "Only you can weigh the risks of trying to push a failed upgrade forward " "versus rolling it back. Generally, consider rolling back as the very last " "option." msgstr "" #: ../ops-upgrades.rst:396 msgid "" "The following steps, described for Ubuntu, have worked on at least one " "production environment, but they might not work for all environments." msgstr "" #: ../ops-upgrades.rst:399 msgid "**To perform a rollback**" msgstr "" #: ../ops-upgrades.rst:401 msgid "Stop all OpenStack services." msgstr "" #: ../ops-upgrades.rst:403 msgid "" "Copy the contents of the configuration backup directories that you created " "during the upgrade process back to the ``/etc/`` directory." msgstr "" #: ../ops-upgrades.rst:406 msgid "" "Restore databases from the ``RELEASE_NAME-db-backup.sql`` backup file that " "you created with the :command:`mysqldump` command during the upgrade process:" msgstr "" #: ../ops-upgrades.rst:414 msgid "Downgrade OpenStack packages." msgstr "" #: ../ops-upgrades.rst:418 msgid "" "Downgrading packages is by far the most complicated step; it is highly " "dependent on the distribution and the overall administration of the system.
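msgstr ""

# A sketch of the database restore step described above, assuming the backup
# file was produced with :command:`mysqldump` during the upgrade; the root
# credentials are illustrative:
msgid ""
"# mysql -u root -p < RELEASE_NAME-db-backup.sql"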
msgstr "" #: ../ops-upgrades.rst:422 msgid "" "Determine which OpenStack packages are installed on your system. Use the :" "command:`dpkg --get-selections` command. Filter for OpenStack packages, " "filter again to omit packages explicitly marked in the ``deinstall`` state, " "and save the final output to a file. For example, the following command " "covers a controller node with keystone, glance, nova, neutron, and cinder:" msgstr "" #: ../ops-upgrades.rst:469 msgid "" "Depending on the type of server, the contents and order of your package list " "might vary from this example." msgstr "" #: ../ops-upgrades.rst:472 msgid "" "You can determine the package versions available for reversion by using the " "``apt-cache policy`` command. For example:" msgstr "" #: ../ops-upgrades.rst:493 msgid "" "If you removed the release repositories, you must first reinstall them and " "run the :command:`apt-get update` command." msgstr "" #: ../ops-upgrades.rst:496 msgid "" "The command output lists the currently installed version of the package, " "newest candidate version, and all versions along with the repository that " "contains each version. Look for the appropriate release version, " "``2:14.0.1-0ubuntu1~cloud0`` in this case. The process of manually picking " "through this list of packages is rather tedious and prone to errors. You " "should consider using a script to help with this process. For example:" msgstr "" #: ../ops-upgrades.rst:541 msgid "" "Use the :command:`apt-get install` command to install specific versions of " "each package by specifying ``package=version``. The script in the " "previous step conveniently created a list of ``package=version`` pairs for " "you:" msgstr "" #: ../ops-upgrades.rst:550 msgid "" "This step completes the rollback procedure. You should remove the upgrade " "release repository and run :command:`apt-get update` to prevent accidental " "upgrades until you solve whatever issue caused you to roll back your " "environment." msgstr "" #: ../ops-user-facing-operations.rst:3 msgid "User-Facing Operations" msgstr "" #: ../ops-user-facing-operations.rst:5 msgid "" "This guide is for OpenStack operators and does not seek to be an exhaustive " "reference for users, but as an operator, you should have a basic " "understanding of how to use the cloud facilities. This chapter looks at " "OpenStack from a basic user perspective, which helps you understand your " "users' needs and determine, when you get a trouble ticket, whether it is a " "user issue or a service issue. The main concepts covered are images, " "flavors, security groups, block storage, shared file system storage, and " "instances." msgstr "" #: ../ops-user-facing-operations.rst:15 msgid "Images" msgstr "" #: ../ops-user-facing-operations.rst:17 msgid "" "OpenStack images can often be thought of as \"virtual machine templates.\" " "Images can also be standard installation media such as ISO images. " "Essentially, they contain bootable file systems that are used to launch " "instances." msgstr "" #: ../ops-user-facing-operations.rst:23 msgid "Adding Images" msgstr "" #: ../ops-user-facing-operations.rst:25 msgid "" "Several pre-made images exist and can easily be imported into the Image " "service. A common image to add is the CirrOS image, which is very small and " "used for testing purposes. To add this image, simply do:" msgstr "" #: ../ops-user-facing-operations.rst:36 msgid "" "The :command:`openstack image create` command provides a large set of " "options for working with your image. 
For example, the ``--min-disk`` option " "is useful for images that require root disks of a certain size (for example, " "large Windows images). To view these options, run:" msgstr "" #: ../ops-user-facing-operations.rst:45 msgid "Run the following command to view the properties of existing images:" msgstr "" #: ../ops-user-facing-operations.rst:52 msgid "Adding Signed Images" msgstr "" #: ../ops-user-facing-operations.rst:54 msgid "" "To provide a chain of trust from an end user to the Image service, and the " "Image service to Compute, an end user can import signed images that can be " "initially verified in the Image service, and later verified in the Compute " "service. Appropriate Image service properties need to be set to enable this " "signature feature." msgstr "" #: ../ops-user-facing-operations.rst:62 msgid "" "Prior to the steps below, an asymmetric keypair and certificate must be " "generated. In this example, these are called private_key.pem and new_cert." "crt, respectively, and both reside in the current directory. Also note that " "the image in this example is cirros-0.3.5-x86_64-disk.img, but any image can " "be used." msgstr "" #: ../ops-user-facing-operations.rst:68 msgid "" "The following are steps needed to create the signature used for the signed " "images:" msgstr "" #: ../ops-user-facing-operations.rst:71 msgid "Retrieve image for upload" msgstr "" #: ../ops-user-facing-operations.rst:77 msgid "Use private key to create a signature of the image" msgstr "" #: ../ops-user-facing-operations.rst:81 msgid "" "The following implicit values are being used to create the signature in this " "example:" msgstr "" #: ../ops-user-facing-operations.rst:84 msgid "Signature hash method = SHA-256" msgstr "" #: ../ops-user-facing-operations.rst:86 msgid "Signature key type = RSA-PSS" msgstr "" #: ../ops-user-facing-operations.rst:90 msgid "The following options are currently supported:" msgstr "" #: ../ops-user-facing-operations.rst:92 msgid "Signature hash methods: SHA-224, SHA-256, SHA-384, and SHA-512" msgstr "" #: ../ops-user-facing-operations.rst:94 msgid "" "Signature key types: DSA, ECC_SECT571K1, ECC_SECT409K1, ECC_SECT571R1, " "ECC_SECT409R1, ECC_SECP521R1, ECC_SECP384R1, and RSA-PSS" msgstr "" #: ../ops-user-facing-operations.rst:98 msgid "Generate signature of image and convert it to a base64 representation:" msgstr "" #: ../ops-user-facing-operations.rst:111 msgid "" "Using Image API v1 requires '-w 0' above, since multiline image properties " "are not supported." msgstr "" #: ../ops-user-facing-operations.rst:113 msgid "" "Image API v2 supports multiline properties, so this option is not required " "for v2 but it can still be used." 
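msgstr ""

# A sketch of the signature steps described above, using the file names from
# the text (private_key.pem, cirros-0.3.5-x86_64-disk.img) and the SHA-256/
# RSA-PSS values named there; the output file names are illustrative:
msgid ""
"$ openssl dgst -sha256 -sigopt rsa_padding_mode:pss -sign private_key.pem "
"-out image.signature cirros-0.3.5-x86_64-disk.img\n"
"$ base64 -w 0 image.signature > signature_64"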
msgstr "" #: ../ops-user-facing-operations.rst:117 msgid "Create context" msgstr "" #: ../ops-user-facing-operations.rst:133 msgid "Encode certificate in DER format" msgstr "" #: ../ops-user-facing-operations.rst:147 msgid "Upload Certificate in DER format to Castellan" msgstr "" #: ../ops-user-facing-operations.rst:159 msgid "Upload Image to Image service, with Signature Metadata" msgstr "" #: ../ops-user-facing-operations.rst:163 msgid "The following signature properties are used:" msgstr "" #: ../ops-user-facing-operations.rst:165 msgid "img_signature uses the signature called signature_64" msgstr "" #: ../ops-user-facing-operations.rst:167 msgid "" "img_signature_certificate_uuid uses the value from cert_uuid in section 5 " "above" msgstr "" #: ../ops-user-facing-operations.rst:170 msgid "img_signature_hash_method matches 'SHA-256' in section 2 above" msgstr "" #: ../ops-user-facing-operations.rst:172 msgid "img_signature_key_type matches 'RSA-PSS' in section 2 above" msgstr "" #: ../ops-user-facing-operations.rst:187 msgid "The maximum image signature character limit is 255." msgstr "" #: ../ops-user-facing-operations.rst:189 msgid "Verify the Keystone URL" msgstr "" #: ../ops-user-facing-operations.rst:193 msgid "" "The default Keystone configuration assumes that Keystone is on the local " "host, and it uses ``http://localhost:5000/v3`` as the endpoint URL, which is " "specified in ``glance-api.conf`` and ``nova-api.conf`` files:" msgstr "" #: ../ops-user-facing-operations.rst:205 msgid "" "If Keystone is located remotely instead, edit the ``glance-api.conf`` and " "``nova.conf`` files. In the ``[barbican]`` section, configure the " "``auth_endpoint`` option:" msgstr "" #: ../ops-user-facing-operations.rst:214 msgid "Signature verification will occur when Compute boots the signed image" msgstr "" #: ../ops-user-facing-operations.rst:218 msgid "nova-compute servers must first be updated using the following steps:" msgstr "" #: ../ops-user-facing-operations.rst:220 msgid "" "Ensure that cryptsetup is installed, and that the ``python-" "barbicanclient`` Python package is installed" msgstr "" #: ../ops-user-facing-operations.rst:222 msgid "" "Set up the Key Manager service by editing /etc/nova/nova.conf and adding the " "entries in the code block below" msgstr "" #: ../ops-user-facing-operations.rst:224 msgid "" "The ``verify_glance_signatures`` flag enables Compute to automatically " "validate signed images prior to instance launch. This validation feature is " "enabled when the value is set to ``true``" msgstr "" #: ../ops-user-facing-operations.rst:237 msgid "" "The ``api_class`` option in the ``[keymgr]`` section is deprecated as of " "Newton, so it should not be included in this release or beyond." msgstr "" #: ../ops-user-facing-operations.rst:246 msgid "Sharing Images Between Projects" msgstr "" #: ../ops-user-facing-operations.rst:248 msgid "" "In a multi-tenant cloud environment, users sometimes want to share their " "personal images or snapshots with other projects. This can be done on the " "command line with the ``glance`` tool by the owner of the image." msgstr "" #: ../ops-user-facing-operations.rst:252 msgid "To share an image or snapshot with another project, do the following:" msgstr "" #: ../ops-user-facing-operations.rst:254 msgid "Obtain the UUID of the image:" msgstr "" #: ../ops-user-facing-operations.rst:260 msgid "" "Obtain the UUID of the project with which you want to share your image; " "let's call it the target project. 
Unfortunately, non-admin users are unable to " "use the :command:`openstack` command to do this. The easiest solution is to " "obtain the UUID either from an administrator of the cloud or from a user " "located in the target project." msgstr "" #: ../ops-user-facing-operations.rst:267 msgid "" "Once you have both pieces of information, run the :command:`openstack image " "add project` command:" msgstr "" #: ../ops-user-facing-operations.rst:281 msgid "You now need to act in the scope of the target project." msgstr "" #: ../ops-user-facing-operations.rst:285 msgid "" "You will not see the shared image yet; the sharing must first be accepted." msgstr "" #: ../ops-user-facing-operations.rst:288 msgid "To accept the sharing, you need to update the member status:" msgstr "" #: ../ops-user-facing-operations.rst:301 msgid "" "Project ``771ed149ef7e4b2b88665cc1c98f77ca`` will now have access to image " "``733d1c44-a2ea-414b-aca7-69decf20d810``." msgstr "" #: ../ops-user-facing-operations.rst:306 msgid "" "You can explicitly ask for pending member status to view shared images not " "yet accepted:" msgstr "" #: ../ops-user-facing-operations.rst:314 msgid "Deleting Images" msgstr "" #: ../ops-user-facing-operations.rst:316 msgid "To delete an image, just execute:" msgstr "" #: ../ops-user-facing-operations.rst:324 msgid "" "Generally, deleting an image does not affect instances or snapshots that " "were based on the image. However, some drivers may require the original " "image to be present to perform a migration. For example, XenAPI live-migrate " "will work fine if the image is deleted, but libvirt will fail." msgstr "" #: ../ops-user-facing-operations.rst:330 msgid "Other CLI Options" msgstr "" #: ../ops-user-facing-operations.rst:332 msgid "A full set of options can be found using:" msgstr "" #: ../ops-user-facing-operations.rst:338 msgid "" "or the `Command-Line Interface Reference `__." msgstr "" #: ../ops-user-facing-operations.rst:342 msgid "The Image service and the Database" msgstr "" #: ../ops-user-facing-operations.rst:344 msgid "" "The only thing the Image service does not store in a database is the image " "itself. The Image service database has two main tables:" msgstr "" #: ../ops-user-facing-operations.rst:348 msgid "``images``" msgstr "" #: ../ops-user-facing-operations.rst:349 msgid "``image_properties``" msgstr "" #: ../ops-user-facing-operations.rst:351 msgid "" "Working directly with the database and SQL queries can provide you with " "custom lists and reports of images. Technically, you can update properties " "about images through the database, although this is not generally " "recommended." msgstr "" #: ../ops-user-facing-operations.rst:357 msgid "Example Image service Database Queries" msgstr "" #: ../ops-user-facing-operations.rst:359 msgid "" "One interesting example is modifying the ``images`` table to change the " "owner of an image. This can easily be done if you simply display the unique " "ID of the owner. This example goes one step further and displays the " "readable name of the owner:" msgstr "" #: ../ops-user-facing-operations.rst:371 msgid "Another example is displaying all properties for a certain image:" msgstr "" #: ../ops-user-facing-operations.rst:379 msgid "Flavors" msgstr "" #: ../ops-user-facing-operations.rst:381 msgid "" "Virtual hardware templates are called \"flavors\" in OpenStack, defining " "sizes for RAM, disk, number of cores, and so on. The default install " "provides five flavors."
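msgstr ""

# A sketch of the image-properties query described above; the ``glance``
# database name is an assumption and the UUID (reused from the sharing
# example) is illustrative:
msgid ""
"mysql> use glance;\n"
"mysql> select name, value from image_properties where image_id = "
"'733d1c44-a2ea-414b-aca7-69decf20d810';"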
msgstr "" #: ../ops-user-facing-operations.rst:385 msgid "" "These are configurable by admin users (the rights may also be delegated to " "other users by redefining the access controls for ``compute_extension:" "flavormanage`` in ``/etc/nova/policy.json`` on the ``nova-api`` server). To " "get the list of available flavors on your system, run:" msgstr "" #: ../ops-user-facing-operations.rst:404 msgid "" "The :command:`openstack flavor create` command allows authorized users to " "create new flavors. Additional flavor manipulation commands can be shown " "with the following command:" msgstr "" #: ../ops-user-facing-operations.rst:412 msgid "" "Flavors define a number of parameters, resulting in the user having a choice " "of what type of virtual machine to run—just like they would have if they " "were purchasing a physical server. :ref:`table_flavor_params` lists the " "elements that can be set. Note in particular ``extra_specs``, which can be " "used to define free-form characteristics, giving a lot of flexibility beyond " "just the size of RAM, CPU, and Disk." msgstr "" #: ../ops-user-facing-operations.rst:422 msgid "Table. Flavor parameters" msgstr "" #: ../ops-user-facing-operations.rst:426 msgid "**Column**" msgstr "" #: ../ops-user-facing-operations.rst:427 msgid "**Description**" msgstr "" #: ../ops-user-facing-operations.rst:428 msgid "ID" msgstr "" #: ../ops-user-facing-operations.rst:429 msgid "Unique ID (integer or UUID) for the flavor." msgstr "" #: ../ops-user-facing-operations.rst:431 msgid "" "A descriptive name, such as xx.size\\_name, is conventional but not " "required, though some third-party tools may rely on it." msgstr "" #: ../ops-user-facing-operations.rst:432 msgid "Memory\\_MB" msgstr "" #: ../ops-user-facing-operations.rst:433 msgid "Virtual machine memory in megabytes." msgstr "" #: ../ops-user-facing-operations.rst:435 msgid "" "Virtual root disk size in gigabytes. This is an ephemeral disk the base " "image is copied into. You don't use it when you boot from a persistent " "volume. The \"0\" size is a special case that uses the native base image " "size as the size of the ephemeral root volume." msgstr "" #: ../ops-user-facing-operations.rst:437 msgid "" "Specifies the size of a secondary ephemeral data disk. This is an empty, " "unformatted disk and exists only for the life of the instance." msgstr "" #: ../ops-user-facing-operations.rst:438 msgid "Swap" msgstr "" #: ../ops-user-facing-operations.rst:439 msgid "Optional swap space allocation for the instance." msgstr "" #: ../ops-user-facing-operations.rst:441 msgid "Number of virtual CPUs presented to the instance." msgstr "" #: ../ops-user-facing-operations.rst:442 msgid "RXTX_Factor" msgstr "" #: ../ops-user-facing-operations.rst:443 msgid "" "Optional property that allows created servers to have a different bandwidth " "cap from that defined in the network they are attached to. This factor is " "multiplied by the rxtx\\_base property of the network. Default value is 1.0 " "(that is, the same as the attached network)." msgstr "" #: ../ops-user-facing-operations.rst:448 msgid "Is_Public" msgstr "" #: ../ops-user-facing-operations.rst:449 msgid "" "Boolean value that indicates whether the flavor is available to all users or " "private. Private flavors do not get the current tenant assigned to them. " "Defaults to ``True``." 
msgstr "" #: ../ops-user-facing-operations.rst:452 msgid "extra_specs" msgstr "" #: ../ops-user-facing-operations.rst:453 msgid "" "Additional optional restrictions on which compute nodes the flavor can run " "on. This is implemented as key-value pairs that must match against the " "corresponding key-value pairs on compute nodes. Can be used to implement " "things like special resources (such as flavors that can run only on compute " "nodes with GPU hardware)." msgstr "" #: ../ops-user-facing-operations.rst:461 msgid "Private Flavors" msgstr "" #: ../ops-user-facing-operations.rst:463 msgid "" "A user might need a custom flavor that is uniquely tuned for a project she " "is working on. For example, the user might require 128 GB of memory. If you " "create a new flavor as described above, the user would have access to the " "custom flavor, but so would all other tenants in your cloud. Sometimes this " "sharing isn't desirable. In this scenario, allowing all users to have access " "to a flavor with 128 GB of memory might cause your cloud to reach full " "capacity very quickly. To prevent this, you can restrict access to the " "custom flavor using the :command:`nova flavor-access-add` command:" msgstr "" #: ../ops-user-facing-operations.rst:477 msgid "To view a flavor's access list, do the following:" msgstr "" #: ../ops-user-facing-operations.rst:485 msgid "" "Once access to a flavor has been restricted, no other projects besides the " "ones granted explicit access will be able to see the flavor. This includes " "the admin project. Make sure to add the admin project in addition to the " "original project." msgstr "" #: ../ops-user-facing-operations.rst:490 msgid "" "It's also helpful to allocate a specific numeric range for custom and " "private flavors. On UNIX-based systems, nonsystem accounts usually have a " "UID starting at 500. A similar approach can be taken with custom flavors. " "This helps you easily identify which flavors are custom, private, and public " "for the entire cloud." msgstr "" #: ../ops-user-facing-operations.rst:497 msgid "How Do I Modify an Existing Flavor?" msgstr "" #: ../ops-user-facing-operations.rst:499 msgid "" "The OpenStack dashboard simulates the ability to modify a flavor by deleting " "an existing flavor and creating a new one with the same name." msgstr "" #: ../ops-user-facing-operations.rst:503 msgid "Security Groups" msgstr "" #: ../ops-user-facing-operations.rst:505 msgid "" "A common new-user issue with OpenStack is failing to set an appropriate " "security group when launching an instance. As a result, the user is unable " "to contact the instance on the network." msgstr "" #: ../ops-user-facing-operations.rst:509 msgid "" "Security groups are sets of IP filter rules that are applied to an " "instance's networking. They are project specific, and project members can " "edit the default rules for their group and add new rule sets. All projects " "have a \"default\" security group, which is applied to instances that have " "no other security group defined. Unless changed, this security group denies " "all incoming traffic." msgstr "" #: ../ops-user-facing-operations.rst:518 msgid "" "As noted in the previous chapter, the number of rules per security group is " "controlled by the ``quota_security_group_rules`` quota, and the number of " "allowed security groups per project is controlled by the " "``quota_security_groups`` " "quota.
msgstr "" #: ../ops-user-facing-operations.rst:524 msgid "End-User Configuration of Security Groups" msgstr "" #: ../ops-user-facing-operations.rst:526 msgid "" "Security groups for the current project can be found on the OpenStack " "dashboard under :guilabel:`Access & Security`. To see details of an existing " "group, select the :guilabel:`Edit Security Group` action for that security " "group. Obviously, modifying existing groups can be done from this edit " "interface. There is a :guilabel:`Create Security Group` button on the main :" "guilabel:`Access & Security` page for creating new groups. We discuss the " "terms used in these fields when we explain the command-line equivalents." msgstr "" #: ../ops-user-facing-operations.rst:535 msgid "**Setting with openstack command**" msgstr "" #: ../ops-user-facing-operations.rst:537 msgid "" "If your environment is using Neutron, you can configure security group " "settings using the :command:`openstack` command. Get a list of security " "groups for the project you are acting in, by using the following command:" msgstr "" #: ../ops-user-facing-operations.rst:554 msgid "To view the details of a security group:" msgstr "" #: ../ops-user-facing-operations.rst:580 msgid "" "These rules are all \"allow\" type rules, as the default is deny. This " "example shows the full port range for all protocols allowed from all IPs. " "This section describes the most common security group rule parameters:" msgstr "" #: ../ops-user-facing-operations.rst:586 msgid "" "The direction in which the security group rule is applied. Valid values are " "``ingress`` or ``egress``." msgstr "" #: ../ops-user-facing-operations.rst:587 msgid "direction" msgstr "" #: ../ops-user-facing-operations.rst:590 msgid "" "This attribute value matches the specified IP prefix as the source IP " "address of the IP packet." msgstr "" #: ../ops-user-facing-operations.rst:591 msgid "remote_ip_prefix" msgstr "" #: ../ops-user-facing-operations.rst:594 msgid "" "The protocol that is matched by the security group rule. Valid values are " "``null``, ``tcp``, ``udp``, ``icmp``, and ``icmpv6``." msgstr "" #: ../ops-user-facing-operations.rst:595 msgid "protocol" msgstr "" #: ../ops-user-facing-operations.rst:598 msgid "" "The minimum port number in the range that is matched by the security group " "rule. If the protocol is TCP or UDP, this value must be less than or equal " "to the ``port_range_max`` attribute value. If the protocol is ICMP or " "ICMPv6, this value must be an ICMP or ICMPv6 type, respectively." msgstr "" #: ../ops-user-facing-operations.rst:602 msgid "port_range_min" msgstr "" #: ../ops-user-facing-operations.rst:605 msgid "" "The maximum port number in the range that is matched by the security group " "rule. The ``port_range_min`` attribute constrains the ``port_range_max`` " "attribute. If the protocol is ICMP or ICMPv6, this value must be an ICMP or " "ICMPv6 type, respectively." msgstr "" #: ../ops-user-facing-operations.rst:608 msgid "port_range_max" msgstr "" #: ../ops-user-facing-operations.rst:611 msgid "" "Must be ``IPv4`` or ``IPv6``, and addresses represented in CIDR must match " "the ingress or egress rules." msgstr "" #: ../ops-user-facing-operations.rst:612 msgid "ethertype" msgstr "" #: ../ops-user-facing-operations.rst:614 msgid "" "When adding a new security group, you should pick a descriptive but brief " "name. This name shows up in brief descriptions of the instances that use it " "where the longer description field often does not. 
Seeing that an instance " "is using security group ``http`` is much easier to understand than " "``bobs_group`` or ``secgrp1``." msgstr "" #: ../ops-user-facing-operations.rst:620 msgid "" "This example creates a security group that allows web traffic anywhere on " "the Internet. We'll call this group ``global_http``, which is clear and " "reasonably concise, encapsulating what is allowed and from where. From the " "command line, do:" msgstr "" #: ../ops-user-facing-operations.rst:647 msgid "" "Immediately after creation, the security group has only an allow egress " "rule. To make it do what we want, we need to add some rules:" msgstr "" #: ../ops-user-facing-operations.rst:694 msgid "" "Although only the newly added rule is output, the operation is additive:" msgstr "" #: ../ops-user-facing-operations.rst:719 msgid "" "The inverse operation is called :command:`openstack security group rule " "delete`, specifying the security group rule ID. Whole security groups can be " "removed with :command:`openstack security group delete`." msgstr "" #: ../ops-user-facing-operations.rst:724 msgid "" "To create security group rules for a cluster of instances, use RemoteGroups." msgstr "" #: ../ops-user-facing-operations.rst:727 msgid "" "RemoteGroups are a dynamic way of defining the CIDR of allowed sources. The " "user specifies a RemoteGroup (security group name) and then all the users' " "other instances using the specified RemoteGroup are selected dynamically. " "This dynamic selection alleviates the need for individual rules to allow " "each new member of the cluster." msgstr "" #: ../ops-user-facing-operations.rst:733 msgid "" "The code is similar to the above example of :command:`openstack security " "group rule create`. To use RemoteGroup, specify ``--remote-group`` instead " "of ``--remote-ip``. For example:" msgstr "" #: ../ops-user-facing-operations.rst:744 msgid "" "The \"cluster\" rule allows SSH access from any other instance that uses the " "``global-http`` group." msgstr "" #: ../ops-user-facing-operations.rst:750 msgid "" "OpenStack volumes are persistent block-storage devices that may be attached " "to and detached from instances, but they can be attached to only one " "instance at a time. Similar to an external hard drive, they do not provide " "shared storage in the way a network file system or object store does. It is " "left to the operating system in the instance to put a file system on the " "block device and mount it, or not." msgstr "" #: ../ops-user-facing-operations.rst:757 msgid "" "As with other removable disk technology, it is important that the operating " "system is not trying to make use of the disk before removing it. On Linux " "instances, this typically involves unmounting any file systems mounted from " "the volume. The OpenStack volume service cannot tell whether it is safe to " "remove volumes from an instance, so it does what it is told. If a user tells " "the volume service to detach a volume from an instance while it is being " "written to, you can expect some level of file system corruption as well as " "faults from whatever process within the instance was using the device." msgstr "" #: ../ops-user-facing-operations.rst:767 msgid "" "There is nothing OpenStack-specific in being aware of the steps needed to " "access block devices from within the instance operating system, potentially " "formatting them for first use and being cautious when removing them. What is " "specific is how to create new volumes and attach and detach them from " "instances. 
These operations can all be done from the :guilabel:`Volumes` " "page of the dashboard or by using the ``openstack`` command-line client." msgstr "" #: ../ops-user-facing-operations.rst:775 msgid "" "To add new volumes, you need only a volume size in gigabytes. Either put " "these into the :guilabel:`Create Volume` web form or use the command line:" msgstr "" #: ../ops-user-facing-operations.rst:783 msgid "" "This creates a 10 GB volume. To list existing volumes and the instances they " "are connected to, if any:" msgstr "" #: ../ops-user-facing-operations.rst:795 msgid "" "OpenStack Block Storage also allows creating snapshots of volumes. Remember " "that this is a block-level snapshot that is crash consistent, so it is best " "if the volume is not connected to an instance when the snapshot is taken and " "second best if the volume is not in use on the instance it is attached to. " "If the volume is under heavy use, the snapshot may have an inconsistent file " "system. In fact, by default, the volume service does not take a snapshot of " "a volume that is attached to an instance, though it can be forced to. To " "take a volume snapshot, either select :guilabel:`Create Snapshot` from the " "actions column next to the volume name on the dashboard :guilabel:`Volumes` " "page, or run this from the command line:" msgstr "" #: ../ops-user-facing-operations.rst:857 msgid "" "For more information about updating Block Storage volumes (for example, " "resizing or transferring), see the `OpenStack End User Guide `__." msgstr "" #: ../ops-user-facing-operations.rst:862 msgid "Block Storage Creation Failures" msgstr "" #: ../ops-user-facing-operations.rst:864 msgid "" "If a user tries to create a volume and the volume immediately goes into an " "error state, the best way to troubleshoot is to grep the cinder log files " "for the volume's UUID. First try the log files on the cloud controller, and " "then try the storage node where the volume creation was attempted:" msgstr "" #: ../ops-user-facing-operations.rst:875 msgid "Shared File Systems Service" msgstr "" #: ../ops-user-facing-operations.rst:877 msgid "" "Similar to Block Storage, the Shared File Systems service provides " "persistent storage, called shares, which can be used in multi-tenant " "environments. Users create and mount a share as a remote file system on any " "machine that allows mounting shares and has network access to the share " "exporter. This share can then be used for storing, sharing, and exchanging " "files. The default configuration of the Shared File Systems service depends " "on the back-end driver the admin chooses when starting the Shared File " "Systems service. For more information about existing back-end drivers, see " "the `Share Backends " "`__ section of the Shared File Systems service Developer Guide. For " "example, if an OpenStack Block Storage based back end is used, the Shared " "File Systems service takes care of everything, including VMs, networking, " "key pairs, and security groups. Other configurations require more detailed " "knowledge of share functionality to set up and tune specific parameters and " "modes of operation." msgstr "" #: ../ops-user-facing-operations.rst:894 msgid "" "Shares are remote mountable file systems, so a share can be mounted on " "multiple hosts and accessed from multiple hosts by multiple users " "at a time. 
"With the Shared File Systems service, you can perform a large number of "
"operations with shares:"
msgstr ""

#: ../ops-user-facing-operations.rst:899
msgid "Create, update, delete, and force-delete shares"
msgstr ""

#: ../ops-user-facing-operations.rst:900
msgid "Change access rules for shares, reset share state"
msgstr ""

#: ../ops-user-facing-operations.rst:901
msgid "Specify quotas for existing users or tenants"
msgstr ""

#: ../ops-user-facing-operations.rst:902
msgid "Create share networks"
msgstr ""

#: ../ops-user-facing-operations.rst:903
msgid "Define new share types"
msgstr ""

#: ../ops-user-facing-operations.rst:904
msgid ""
"Perform operations with share snapshots: create, change name, create a "
"share from a snapshot, delete"
msgstr ""

#: ../ops-user-facing-operations.rst:906
msgid "Operate with consistency groups"
msgstr ""

#: ../ops-user-facing-operations.rst:907
msgid "Use security services"
msgstr ""

#: ../ops-user-facing-operations.rst:909
msgid ""
"For more information on share management, see `Share management `__ in the "
"“Shared File Systems” chapter of the OpenStack Administrator Guide. As for "
"security services, remember that different drivers support different "
"authentication methods, and that the generic driver does not support "
"security services at all (see the `Security services `__ section of the "
"“Shared File Systems” chapter in the OpenStack Administrator Guide)."
msgstr ""

#: ../ops-user-facing-operations.rst:918
msgid ""
"You can create a share in a network, list shares, and show information "
"for, update, and delete a specified share. You can also create snapshots "
"of shares (see `Share snapshots `__ in the “Shared File Systems” chapter "
"of the OpenStack Administrator Guide)."
msgstr ""

#: ../ops-user-facing-operations.rst:924
msgid ""
"There are default and specific share types that allow you to filter or "
"choose back ends before you create a share. The function and behaviour of "
"a share type are similar to those of a Block Storage volume type (see "
"`Share types `__ in the “Shared File Systems” chapter of the OpenStack "
"Administrator Guide)."
msgstr ""

#: ../ops-user-facing-operations.rst:930
msgid ""
"To help users keep and restore their data, the Shared File Systems service "
"provides a mechanism for creating and operating snapshots (see `Share "
"snapshots `__ in the “Shared File Systems” chapter of the OpenStack "
"Administrator Guide)."
msgstr ""

#: ../ops-user-facing-operations.rst:935
msgid ""
"A security service stores configuration information for clients for "
"authentication and authorization. Inside Manila, a share network can be "
"associated with up to three security services (for detailed information, "
"see `Security services `__ in the “Shared File Systems” chapter of the "
"OpenStack Administrator Guide):"
msgstr ""

#: ../ops-user-facing-operations.rst:942
msgid "LDAP"
msgstr ""

#: ../ops-user-facing-operations.rst:943
msgid "Kerberos"
msgstr ""

#: ../ops-user-facing-operations.rst:944
msgid "Microsoft Active Directory"
msgstr ""

#: ../ops-user-facing-operations.rst:946
msgid ""
"The Shared File Systems service differs in design from Block Storage. The "
"Shared File Systems service can work in two modes:"
msgstr ""

#: ../ops-user-facing-operations.rst:949
msgid ""
"Without interaction with share networks, in the so-called \"no share "
"servers\" mode."
msgstr ""

#: ../ops-user-facing-operations.rst:951
msgid "Interacting with share networks."
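"\n"
"\n"
"For illustration, the mode is selected per back end with the "
"``driver_handles_share_servers`` option in ``manila.conf``; a minimal "
"sketch, with the back-end name and driver chosen only as an example:\n"
"\n"
".. code-block:: ini\n"
"\n"
"   [generic1]\n"
"   share_driver = manila.share.drivers.generic.GenericShareDriver\n"
"   # True interacts with share networks; False is the\n"
"   # \"no share servers\" mode\n"
"   driver_handles_share_servers = True\n"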
msgstr "" #: ../ops-user-facing-operations.rst:953 msgid "" "Networking service is used by the Shared File Systems service to directly " "operate with share servers. For switching interaction with Networking " "service on, create a share specifying a share network. To use \"share servers" "\" mode even being out of OpenStack, a network plugin called " "StandaloneNetworkPlugin is used. In this case, provide network information " "in the configuration: IP range, network type, and segmentation ID. Also you " "can add security services to a share network (see section `“Networking” " "`__ of chapter “Shared File Systems” in OpenStack Administrator Guide)." msgstr "" #: ../ops-user-facing-operations.rst:965 msgid "" "The main idea of consistency groups is to enable you to create snapshots at " "the exact same point in time from multiple file system shares. Those " "snapshots can be then used for restoring all shares that were associated " "with the consistency group (see section `“Consistency groups” `__ of chapter " "“Shared File Systems” in OpenStack Administrator Guide)." msgstr "" #: ../ops-user-facing-operations.rst:972 msgid "" "Shared File System storage allows administrators to set limits and quotas " "for specific tenants and users. Limits are the resource limitations that are " "allowed for each tenant or user. Limits consist of:" msgstr "" #: ../ops-user-facing-operations.rst:976 msgid "Rate limits" msgstr "" #: ../ops-user-facing-operations.rst:977 msgid "Absolute limits" msgstr "" #: ../ops-user-facing-operations.rst:979 msgid "" "Rate limits control the frequency at which users can issue specific API " "requests. Rate limits are configured by administrators in a config file. " "Also, administrator can specify quotas also known as max values of absolute " "limits per tenant. Whereas users can see only the amount of their consumed " "resources. Administrator can specify rate limits or quotas for the following " "resources:" msgstr "" #: ../ops-user-facing-operations.rst:986 msgid "Max amount of space available for all shares" msgstr "" #: ../ops-user-facing-operations.rst:987 msgid "Max number of shares" msgstr "" #: ../ops-user-facing-operations.rst:988 msgid "Max number of shared networks" msgstr "" #: ../ops-user-facing-operations.rst:989 msgid "Max number of share snapshots" msgstr "" #: ../ops-user-facing-operations.rst:990 msgid "Max total amount of all snapshots" msgstr "" #: ../ops-user-facing-operations.rst:991 msgid "" "Type and number of API calls that can be made in a specific time interval" msgstr "" #: ../ops-user-facing-operations.rst:993 msgid "" "User can see his rate limits and absolute limits by running commands :" "command:`manila rate-limits` and :command:`manila absolute-limits` " "respectively. For more details on limits and quotas see `Quotas and limits " "`__ " "of \"Share management\" section of OpenStack Administrator Guide document." 
msgstr "" #: ../ops-user-facing-operations.rst:999 msgid "" "This section lists several of the most important Use Cases that demonstrate " "the main functions and abilities of Shared File Systems service:" msgstr "" #: ../ops-user-facing-operations.rst:1003 msgid "Create share" msgstr "" #: ../ops-user-facing-operations.rst:1004 msgid "Operating with a share" msgstr "" #: ../ops-user-facing-operations.rst:1005 msgid "Manage access to shares" msgstr "" #: ../ops-user-facing-operations.rst:1006 msgid "Create snapshots" msgstr "" #: ../ops-user-facing-operations.rst:1007 msgid "Create a share network" msgstr "" #: ../ops-user-facing-operations.rst:1008 msgid "Manage a share network" msgstr "" #: ../ops-user-facing-operations.rst:1012 msgid "" "Shared File Systems service cannot warn you beforehand if it is safe to " "write a specific large amount of data onto a certain share or to remove a " "consistency group if it has a number of shares assigned to it. In such a " "potentially erroneous situations, if a mistake happens, you can expect some " "error message or even failing of shares or consistency groups into an " "incorrect status. You can also expect some level of system corruption if a " "user tries to unmount an unmanaged share while a process is using it for " "data transfer." msgstr "" #: ../ops-user-facing-operations.rst:1025 msgid "Create Share" msgstr "" #: ../ops-user-facing-operations.rst:1027 msgid "" "In this section, we examine the process of creating a simple share. It " "consists of several steps:" msgstr "" #: ../ops-user-facing-operations.rst:1030 msgid "" "Check if there is an appropriate share type defined in the Shared File " "Systems service" msgstr "" #: ../ops-user-facing-operations.rst:1033 msgid "" "If such a share type does not exist, an Admin should create it using :" "command:`manila type-create` command before other users are able to use it" msgstr "" #: ../ops-user-facing-operations.rst:1036 msgid "" "Using a share network is optional. However if you need one, check if there " "is an appropriate network defined in Shared File Systems service by using :" "command:`manila share-network-list` command. For the information on creating " "a share network, see :ref:`create_a_share_network` below in this chapter." msgstr "" #: ../ops-user-facing-operations.rst:1042 msgid "Create a public share using :command:`manila create`." msgstr "" #: ../ops-user-facing-operations.rst:1044 msgid "" "Make sure that the share has been created successfully and is ready to use " "(check the share status and see the share export location)" msgstr "" #: ../ops-user-facing-operations.rst:1047 msgid "" "Below is the same whole procedure described step by step and in more detail." msgstr "" #: ../ops-user-facing-operations.rst:1052 msgid "" "Before you start, make sure that Shared File Systems service is installed on " "your OpenStack cluster and is ready to use." 
msgstr "" #: ../ops-user-facing-operations.rst:1055 msgid "" "By default, there are no share types defined in Shared File Systems service, " "so you can check if a required one has been already created:" msgstr "" #: ../ops-user-facing-operations.rst:1067 msgid "" "If the share types list is empty or does not contain a type you need, create " "the required share type using this command:" msgstr "" #: ../ops-user-facing-operations.rst:1074 msgid "" "This command will create a public share with the following parameters: " "``name = netapp1``, ``spec_driver_handles_share_servers = False``" msgstr "" #: ../ops-user-facing-operations.rst:1077 msgid "" "You can now create a public share with my_share_net network, default share " "type, NFS shared file systems protocol, and 1 GB size:" msgstr "" #: ../ops-user-facing-operations.rst:1113 msgid "" "To confirm that creation has been successful, see the share in the share " "list:" msgstr "" #: ../ops-user-facing-operations.rst:1125 msgid "" "Check the share status and see the share export location. After creation, " "the share status should become ``available``:" msgstr "" #: ../ops-user-facing-operations.rst:1171 msgid "" "The value ``is_public`` defines the level of visibility for the share: " "whether other tenants can or cannot see the share. By default, the share is " "private. Now you can mount the created share like a remote file system and " "use it for your purposes." msgstr "" #: ../ops-user-facing-operations.rst:1178 msgid "" "See `Share Management `__ of “Shared File Systems” section of " "OpenStack Administrator Guide document for the details on share management " "operations." msgstr "" #: ../ops-user-facing-operations.rst:1184 msgid "Manage Access To Shares" msgstr "" #: ../ops-user-facing-operations.rst:1186 msgid "" "Currently, you have a share and would like to control access to this share " "for other users. For this, you have to perform a number of steps and " "operations. Before getting to manage access to the share, pay attention to " "the following important parameters. To grant or deny access to a share, " "specify one of these supported share access levels:" msgstr "" #: ../ops-user-facing-operations.rst:1192 msgid "``rw``: read and write (RW) access. This is the default value." msgstr "" #: ../ops-user-facing-operations.rst:1194 msgid "``ro:`` read-only (RO) access." msgstr "" #: ../ops-user-facing-operations.rst:1196 msgid "" "Additionally, you should also specify one of these supported authentication " "methods:" msgstr "" #: ../ops-user-facing-operations.rst:1199 msgid "" "``ip``: authenticates an instance through its IP address. A valid format is " "XX.XX.XX.XX orXX.XX.XX.XX/XX. For example 0.0.0.0/0." msgstr "" #: ../ops-user-facing-operations.rst:1202 msgid "" "``cert``: authenticates an instance through a TLS certificate. Specify the " "TLS identity as the IDENTKEY. A valid value is any string up to 64 " "characters long in the common name (CN) of the certificate. The meaning of a " "string depends on its interpretation." msgstr "" #: ../ops-user-facing-operations.rst:1207 msgid "" "``user``: authenticates by a specified user or group name. A valid value is " "an alphanumeric string that can contain some special characters and is from " "4 to 32 characters long." msgstr "" #: ../ops-user-facing-operations.rst:1213 msgid "" "Do not mount a share without an access rule! This can lead to an exception." 
msgstr "" #: ../ops-user-facing-operations.rst:1216 msgid "" "Allow access to the share with IP access type and 10.254.0.4 IP address:" msgstr "" #: ../ops-user-facing-operations.rst:1232 msgid "Mount the Share:" msgstr "" #: ../ops-user-facing-operations.rst:1238 msgid "" "Then check if the share mounted successfully and according to the specified " "access rules:" msgstr "" #: ../ops-user-facing-operations.rst:1253 msgid "" "Different share features are supported by different share drivers. In these " "examples there was used generic (Cinder as a back-end) driver that does not " "support ``user`` and ``cert`` authentication methods." msgstr "" #: ../ops-user-facing-operations.rst:1260 msgid "" "For the details of features supported by different drivers see `Manila share " "features support mapping `__ of Manila Developer Guide " "document." msgstr "" #: ../ops-user-facing-operations.rst:1266 msgid "Manage Shares" msgstr "" #: ../ops-user-facing-operations.rst:1268 msgid "" "There are several other useful operations you would perform when working " "with shares." msgstr "" #: ../ops-user-facing-operations.rst:1272 msgid "Update Share" msgstr "" #: ../ops-user-facing-operations.rst:1274 msgid "" "To change the name of a share, or update its description, or level of " "visibility for other tenants, use this command:" msgstr "" #: ../ops-user-facing-operations.rst:1281 msgid "Check the attributes of the updated Share1:" msgstr "" #: ../ops-user-facing-operations.rst:1327 msgid "Reset Share State" msgstr "" #: ../ops-user-facing-operations.rst:1329 msgid "" "Sometimes a share may appear and then hang in an erroneous or a transitional " "state. Unprivileged users do not have the appropriate access rights to " "correct this situation. However, having cloud administrator's permissions, " "you can reset the share's state by using" msgstr "" #: ../ops-user-facing-operations.rst:1338 msgid "" "command to reset share state, where state indicates which state to assign " "the share to. Options include: ``available, error, creating, deleting, " "error_deleting`` states." msgstr "" #: ../ops-user-facing-operations.rst:1342 msgid "After running" msgstr "" #: ../ops-user-facing-operations.rst:1348 msgid "check the share's status:" msgstr "" #: ../ops-user-facing-operations.rst:1382 msgid "Delete Share" msgstr "" #: ../ops-user-facing-operations.rst:1384 msgid "" "If you do not need a share any more, you can delete it using :command:" "`manila delete share_name_or_ID` command like:" msgstr "" #: ../ops-user-facing-operations.rst:1393 msgid "" "If you specified the consistency group while creating a share, you should " "provide the --consistency-group parameter to delete the share:" msgstr "" #: ../ops-user-facing-operations.rst:1402 msgid "" "Sometimes it appears that a share hangs in one of transitional states (i.e. " "``creating, deleting, managing, unmanaging, extending, and shrinking``). In " "that case, to delete it, you need :command:`manila force-delete " "share_name_or_ID` command and administrative permissions to run it:" msgstr "" #: ../ops-user-facing-operations.rst:1415 msgid "" "For more details and additional information about other cases, features, API " "commands etc, see `Share Management `__ of “Shared File Systems” " "section of OpenStack Administrator Guide document." 
msgstr "" #: ../ops-user-facing-operations.rst:1421 msgid "Create Snapshots" msgstr "" #: ../ops-user-facing-operations.rst:1423 msgid "" "The Shared File Systems service provides a mechanism of snapshots to help " "users to restore their own data. To create a snapshot, use :command:`manila " "snapshot-create` command like:" msgstr "" #: ../ops-user-facing-operations.rst:1445 msgid "" "Then, if needed, update the name and description of the created snapshot:" msgstr "" #: ../ops-user-facing-operations.rst:1452 msgid "To make sure that the snapshot is available, run:" msgstr "" #: ../ops-user-facing-operations.rst:1474 msgid "" "For more details and additional information on snapshots, see `Share " "Snapshots `__ of “Shared File Systems” section of “OpenStack " "Administrator Guide” document." msgstr "" #: ../ops-user-facing-operations.rst:1483 msgid "Create a Share Network" msgstr "" #: ../ops-user-facing-operations.rst:1485 msgid "" "To control a share network, Shared File Systems service requires interaction " "with Networking service to manage share servers on its own. If the selected " "driver runs in a mode that requires such kind of interaction, you need to " "specify the share network when a share is created. For the information on " "share creation, see :ref:`create_share` earlier in this chapter. Initially, " "check the existing share networks type list by:" msgstr "" #: ../ops-user-facing-operations.rst:1501 msgid "" "If share network list is empty or does not contain a required network, just " "create, for example, a share network with a private network and subnetwork." msgstr "" #: ../ops-user-facing-operations.rst:1528 msgid "" "The ``segmentation_id``, ``cidr``, ``ip_version``, and ``network_type`` " "share network attributes are automatically set to the values determined by " "the network provider." msgstr "" #: ../ops-user-facing-operations.rst:1532 msgid "" "Then check if the network became created by requesting the networks list " "once again:" msgstr "" #: ../ops-user-facing-operations.rst:1544 msgid "" "Finally, to create a share that uses this share network, get to Create Share " "use case described earlier in this chapter." msgstr "" #: ../ops-user-facing-operations.rst:1549 msgid "" "See `Share Networks `__ of “Shared File Systems” section of " "OpenStack Administrator Guide document for more details." msgstr "" #: ../ops-user-facing-operations.rst:1555 msgid "Manage a Share Network" msgstr "" #: ../ops-user-facing-operations.rst:1557 msgid "" "There is a pair of useful commands that help manipulate share networks. To " "start, check the network list:" msgstr "" #: ../ops-user-facing-operations.rst:1569 msgid "" "If you configured the back-end with ``driver_handles_share_servers = True`` " "(with the share servers) and had already some operations in the Shared File " "Systems service, you can see ``manila_service_network`` in the neutron list " "of networks. This network was created by the share driver for internal usage." msgstr "" #: ../ops-user-facing-operations.rst:1588 msgid "" "You also can see detailed information about the share network including " "``network_type, segmentation_id`` fields:" msgstr "" #: ../ops-user-facing-operations.rst:1620 msgid "You also can add and remove the security services to the share network." msgstr "" #: ../ops-user-facing-operations.rst:1624 msgid "" "For details, see subsection `Security Services `__ of “Shared File " "Systems” section of OpenStack Administrator Guide document." 
msgstr "" #: ../ops-user-facing-operations.rst:1631 msgid "" "Instances are the running virtual machines within an OpenStack cloud. This " "section deals with how to work with them and their underlying images, their " "network properties, and how they are represented in the database." msgstr "" #: ../ops-user-facing-operations.rst:1637 msgid "Starting Instances" msgstr "" #: ../ops-user-facing-operations.rst:1639 msgid "" "To launch an instance, you need to select an image, a flavor, and a name. " "The name needn't be unique, but your life will be simpler if it is because " "many tools will use the name in place of the UUID so long as the name is " "unique. You can start an instance from the dashboard from the :guilabel:" "`Launch Instance` button on the :guilabel:`Instances` page or by selecting " "the :guilabel:`Launch` action next to an image or a snapshot on the :" "guilabel:`Images` page." msgstr "" #: ../ops-user-facing-operations.rst:1647 msgid "On the command line, do this:" msgstr "" #: ../ops-user-facing-operations.rst:1653 msgid "" "There are a number of optional items that can be specified. You should read " "the rest of this section before trying to start an instance, but this is the " "base command that later details are layered upon." msgstr "" #: ../ops-user-facing-operations.rst:1657 msgid "" "To delete instances from the dashboard, select the :guilabel:`Delete " "Instance` action next to the instance on the :guilabel:`Instances` page." msgstr "" #: ../ops-user-facing-operations.rst:1663 msgid "" "In releases prior to Mitaka, select the equivalent :guilabel:`Terminate " "instance` action." msgstr "" #: ../ops-user-facing-operations.rst:1666 msgid "From the command line, do this:" msgstr "" #: ../ops-user-facing-operations.rst:1672 msgid "" "It is important to note that powering off an instance does not terminate it " "in the OpenStack sense." msgstr "" #: ../ops-user-facing-operations.rst:1676 msgid "Instance Boot Failures" msgstr "" #: ../ops-user-facing-operations.rst:1678 msgid "" "If an instance fails to start and immediately moves to an error state, there " "are a few different ways to track down what has gone wrong. Some of these " "can be done with normal user access, while others require access to your log " "server or compute nodes." msgstr "" #: ../ops-user-facing-operations.rst:1683 msgid "" "The simplest reasons for nodes to fail to launch are quota violations or the " "scheduler being unable to find a suitable compute node on which to run the " "instance. In these cases, the error is apparent when you run a :command:" "`openstack server show` on the faulted instance:" msgstr "" #: ../ops-user-facing-operations.rst:1727 msgid "" "In this case, looking at the ``fault`` message shows ``NoValidHost``, " "indicating that the scheduler was unable to match the instance requirements." msgstr "" #: ../ops-user-facing-operations.rst:1731 msgid "" "If :command:`openstack server show` does not sufficiently explain the " "failure, searching for the instance UUID in the ``nova-compute.log`` on the " "compute node it was scheduled on or the ``nova-scheduler.log`` on your " "scheduler hosts is a good place to start looking for lower-level problems." msgstr "" #: ../ops-user-facing-operations.rst:1736 msgid "" "Using :command:`openstack server show` as an admin user will show the " "compute node the instance was scheduled on as ``hostId``. If the instance " "failed during scheduling, this field is blank." 
msgstr "" #: ../ops-user-facing-operations.rst:1741 msgid "Using Instance-Specific Data" msgstr "" #: ../ops-user-facing-operations.rst:1743 msgid "" "There are two main types of instance-specific data: metadata and user data." msgstr "" #: ../ops-user-facing-operations.rst:1747 msgid "Instance metadata" msgstr "" #: ../ops-user-facing-operations.rst:1749 msgid "" "For Compute, instance metadata is a collection of key-value pairs associated " "with an instance. Compute reads and writes to these key-value pairs any time " "during the instance lifetime, from inside and outside the instance, when the " "end user uses the Compute API to do so. However, you cannot query the " "instance-associated key-value pairs with the metadata service that is " "compatible with the Amazon EC2 metadata service." msgstr "" #: ../ops-user-facing-operations.rst:1756 msgid "" "For an example of instance metadata, users can generate and register SSH " "keys using the :command:`openstack keypair create` command:" msgstr "" #: ../ops-user-facing-operations.rst:1763 msgid "" "This creates a key named ``mykey``, which you can associate with instances. " "The file ``mykey.pem`` is the private key, which should be saved to a secure " "location because it allows root access to instances the ``mykey`` key is " "associated with." msgstr "" #: ../ops-user-facing-operations.rst:1768 msgid "Use this command to register an existing key with OpenStack:" msgstr "" #: ../ops-user-facing-operations.rst:1776 msgid "" "You must have the matching private key to access instances associated with " "this key." msgstr "" #: ../ops-user-facing-operations.rst:1779 msgid "" "To associate a key with an instance on boot, add ``--key-name mykey`` to " "your command line. For example:" msgstr "" #: ../ops-user-facing-operations.rst:1787 msgid "" "When booting a server, you can also add arbitrary metadata so that you can " "more easily identify it among other running instances. Use the ``--" "property`` option with a key-value pair, where you can make up the string " "for both the key and the value. For example, you could add a description and " "also the creator of the server:" msgstr "" #: ../ops-user-facing-operations.rst:1798 msgid "" "When viewing the server information, you can see the metadata included on " "the metadata line:" msgstr "" #: ../ops-user-facing-operations.rst:1840 msgid "Instance user data" msgstr "" #: ../ops-user-facing-operations.rst:1842 msgid "" "The ``user-data`` key is a special key in the metadata service that holds a " "file that cloud-aware applications within the guest instance can access. For " "example, `cloudinit `__ is an " "open source package from Ubuntu, but available in most distributions, that " "handles early initialization of a cloud instance that makes use of this user " "data." msgstr "" #: ../ops-user-facing-operations.rst:1850 msgid "" "This user data can be put in a file on your local system and then passed in " "at instance creation with the flag ``--user-data ``." msgstr "" #: ../ops-user-facing-operations.rst:1854 msgid "For example" msgstr "" #: ../ops-user-facing-operations.rst:1861 msgid "" "To understand the difference between user data and metadata, realize that " "user data is created before an instance is started. User data is accessible " "from within the instance when it is running. User data can be used to store " "configuration, a script, or anything the tenant wants." 
msgstr "" #: ../ops-user-facing-operations.rst:1867 msgid "File injection" msgstr "" #: ../ops-user-facing-operations.rst:1869 msgid "" "Arbitrary local files can also be placed into the instance file system at " "creation time by using the ``--file `` option. You may " "store up to five files." msgstr "" #: ../ops-user-facing-operations.rst:1873 msgid "" "For example, let's say you have a special ``authorized_keys`` file named " "special_authorized_keysfile that for some reason you want to put on the " "instance instead of using the regular SSH key injection. In this case, you " "can use the following command:" msgstr "" #: ../ops-user-facing-operations.rst:1885 msgid "Associating Security Groups" msgstr "" #: ../ops-user-facing-operations.rst:1887 msgid "" "Security groups, as discussed earlier, are typically required to allow " "network traffic to an instance, unless the default security group for a " "project has been modified to be more permissive." msgstr "" #: ../ops-user-facing-operations.rst:1891 msgid "" "Adding security groups is typically done on instance boot. When launching " "from the dashboard, you do this on the :guilabel:`Access & Security` tab of " "the :guilabel:`Launch Instance` dialog. When launching from the command " "line, append ``--security-groups`` with a comma-separated list of security " "groups." msgstr "" #: ../ops-user-facing-operations.rst:1897 msgid "" "It is also possible to add and remove security groups when an instance is " "running. Currently this is only available through the command-line tools. " "Here is an example:" msgstr "" #: ../ops-user-facing-operations.rst:1912 msgid "" "Where floating IPs are configured in a deployment, each project will have a " "limited number of floating IPs controlled by a quota. However, these need to " "be allocated to the project from the central pool prior to their use—usually " "by the administrator of the project. To allocate a floating IP to a project, " "use the :guilabel:`Allocate IP To Project` button on the :guilabel:`Floating " "IPs` tab of the :guilabel:`Access & Security` page of the dashboard. The " "command line can also be used:" msgstr "" #: ../ops-user-facing-operations.rst:1924 msgid "" "Once allocated, a floating IP can be assigned to running instances from the " "dashboard either by selecting :guilabel:`Associate` from the actions drop-" "down next to the IP on the :guilabel:`Floating IPs` tab of the :guilabel:" "`Access & Security` page or by making this selection next to the instance " "you want to associate it with on the Instances page. The inverse action, " "Dissociate Floating IP, is available from the :guilabel:`Floating IPs` tab " "of the :guilabel:`Access & Security` page and from the :guilabel:`Instances` " "page." msgstr "" #: ../ops-user-facing-operations.rst:1933 msgid "" "To associate or disassociate a floating IP with a server from the command " "line, use the following commands:" msgstr "" #: ../ops-user-facing-operations.rst:1945 msgid "Attaching Block Storage" msgstr "" #: ../ops-user-facing-operations.rst:1947 msgid "" "You can attach block storage to instances from the dashboard on the :" "guilabel:`Volumes` page. Click the :guilabel:`Manage Attachments` action " "next to the volume you want to attach." 
msgstr "" #: ../ops-user-facing-operations.rst:1951 msgid "To perform this action from command line, run the following command:" msgstr "" #: ../ops-user-facing-operations.rst:1957 msgid "" "You can also specify block deviceblock device mapping at instance boot time " "through the nova command-line client with this option set:" msgstr "" #: ../ops-user-facing-operations.rst:1964 msgid "" "The block device mapping format is ``=:::" "``, where:" msgstr "" #: ../ops-user-facing-operations.rst:1969 msgid "" "A device name where the volume is attached in the system at ``/dev/dev_name``" msgstr "" #: ../ops-user-facing-operations.rst:1970 msgid "dev-name" msgstr "" #: ../ops-user-facing-operations.rst:1973 msgid "" "The ID of the volume to boot from, as shown in the output of :command:" "`openstack volume list`" msgstr "" #: ../ops-user-facing-operations.rst:1974 msgid "id" msgstr "" #: ../ops-user-facing-operations.rst:1977 msgid "" "Either ``snap``, which means that the volume was created from a snapshot, or " "anything other than ``snap`` (a blank string is valid). In the preceding " "example, the volume was not created from a snapshot, so we leave this field " "blank in our following example." msgstr "" #: ../ops-user-facing-operations.rst:1980 msgid "type" msgstr "" #: ../ops-user-facing-operations.rst:1983 msgid "" "The size of the volume in gigabytes. It is safe to leave this blank and have " "the Compute Service infer the size." msgstr "" #: ../ops-user-facing-operations.rst:1984 msgid "size (GB)" msgstr "" #: ../ops-user-facing-operations.rst:1987 msgid "" "A boolean to indicate whether the volume should be deleted when the instance " "is terminated. True can be specified as ``True`` or ``1``. False can be " "specified as ``False`` or ``0``." msgstr "" #: ../ops-user-facing-operations.rst:1989 msgid "delete-on-terminate" msgstr "" #: ../ops-user-facing-operations.rst:1991 msgid "" "The following command will boot a new instance and attach a volume at the " "same time. The volume of ID 13 will be attached as ``/dev/vdc``. It is not a " "snapshot, does not specify a size, and will not be deleted when the instance " "is terminated:" msgstr "" #: ../ops-user-facing-operations.rst:2002 msgid "" "If you have previously prepared block storage with a bootable file system " "image, it is even possible to boot from persistent block storage. The " "following command boots an image from the specified volume. It is similar to " "the previous command, but the image is omitted and the volume is now " "attached as ``/dev/vda``:" msgstr "" #: ../ops-user-facing-operations.rst:2013 msgid "" "Read more detailed instructions for launching an instance from a bootable " "volume in the `OpenStack End User Guide `__." msgstr "" #: ../ops-user-facing-operations.rst:2017 msgid "" "To boot normally from an image and attach block storage, map to a device " "other than vda. You can find instructions for launching an instance and " "attaching a volume to the instance and for copying the image to the attached " "volume in the `OpenStack End User Guide `__." msgstr "" #: ../ops-user-facing-operations.rst:2024 msgid "Taking Snapshots" msgstr "" #: ../ops-user-facing-operations.rst:2026 msgid "" "The OpenStack snapshot mechanism allows you to create new images from " "running instances. This is very convenient for upgrading base images or for " "taking a published image and customizing it for local use. 
"To snapshot a running instance to an image using the CLI, do this:"
msgstr ""

#: ../ops-user-facing-operations.rst:2035
msgid ""
"The dashboard interface for snapshots can be confusing because the "
"snapshots and images are displayed in the :guilabel:`Images` page. "
"However, an instance snapshot *is* an image. The only difference between "
"an image that you upload directly to the Image Service and an image that "
"you create by snapshot is that an image created by snapshot has additional "
"properties in the glance database. These properties are found in the "
"``image_properties`` table and include:"
msgstr ""

#: ../ops-user-facing-operations.rst:2047
msgid "Value"
msgstr ""

#: ../ops-user-facing-operations.rst:2048
msgid "``image_type``"
msgstr ""

#: ../ops-user-facing-operations.rst:2049
#: ../ops-user-facing-operations.rst:2055
msgid "snapshot"
msgstr ""

#: ../ops-user-facing-operations.rst:2050
msgid "``instance_uuid``"
msgstr ""

#: ../ops-user-facing-operations.rst:2051
msgid ""
msgstr ""

#: ../ops-user-facing-operations.rst:2052
msgid "``base_image_ref``"
msgstr ""

#: ../ops-user-facing-operations.rst:2053
msgid ""
msgstr ""

#: ../ops-user-facing-operations.rst:2054
msgid "``image_location``"
msgstr ""

#: ../ops-user-facing-operations.rst:2058
msgid "Live Snapshots"
msgstr ""

#: ../ops-user-facing-operations.rst:2060
msgid ""
"Live snapshotting is a feature that allows users to snapshot running "
"virtual machines without pausing them. These snapshots are simply "
"disk-only snapshots. Snapshotting an instance can now be performed with no "
"downtime (assuming QEMU 1.3+ and libvirt 1.0+ are used)."
msgstr ""

#: ../ops-user-facing-operations.rst:2067
msgid ""
"If you use libvirt version ``1.2.2``, you may experience intermittent "
"problems with live snapshot creation."
msgstr ""

#: ../ops-user-facing-operations.rst:2070
msgid ""
"To disable libvirt live snapshotting until the problem is resolved, add "
"the following setting to ``nova.conf``."
msgstr ""

#: ../ops-user-facing-operations.rst:2078
msgid "**Ensuring Snapshots of Linux Guests Are Consistent**"
msgstr ""

#: ../ops-user-facing-operations.rst:2080
msgid ""
"The following section is from Sébastien Han's `OpenStack: Perform "
"Consistent Snapshots blog entry `__."
msgstr ""

#: ../ops-user-facing-operations.rst:2084
msgid ""
"A snapshot captures the state of the file system, but not the state of the "
"memory. Therefore, to ensure your snapshot contains the data that you "
"want, before taking the snapshot you need to ensure that:"
msgstr ""

#: ../ops-user-facing-operations.rst:2088
msgid "Running programs have written their contents to disk"
msgstr ""

#: ../ops-user-facing-operations.rst:2090
msgid ""
"The file system does not have any \"dirty\" buffers: where programs have "
"issued the command to write to disk, but the operating system has not yet "
"done the write"
msgstr ""

#: ../ops-user-facing-operations.rst:2094
msgid ""
"To ensure that important services have written their contents to disk "
"(such as databases), we recommend that you read the documentation for "
"those applications to determine what commands to issue to have them sync "
"their contents to disk. If you are unsure how to do this, the safest "
"approach is to simply stop these running services normally."
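"\n"
"\n"
"For example, a minimal quiesce of a database guest before snapshotting; a "
"sketch, with the service name hypothetical:\n"
"\n"
".. code-block:: console\n"
"\n"
"   # stop the database cleanly so its files are consistent on disk\n"
"   $ sudo service mysql stop\n"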
msgstr "" #: ../ops-user-facing-operations.rst:2100 msgid "" "To deal with the \"dirty\" buffer issue, we recommend using the sync command " "before snapshotting:" msgstr "" #: ../ops-user-facing-operations.rst:2107 msgid "" "Running ``sync`` writes dirty buffers (buffered blocks that have been " "modified but not written yet to the disk block) to disk." msgstr "" #: ../ops-user-facing-operations.rst:2110 msgid "" "Just running ``sync`` is not enough to ensure that the file system is " "consistent. We recommend that you use the ``fsfreeze`` tool, which halts new " "access to the file system, and create a stable image on disk that is " "suitable for snapshotting. The ``fsfreeze`` tool supports several file " "systems, including ext3, ext4, and XFS. If your virtual machine instance is " "running on Ubuntu, install the util-linux package to get ``fsfreeze``:" msgstr "" #: ../ops-user-facing-operations.rst:2120 msgid "" "In the very common case where the underlying snapshot is done via LVM, the " "filesystem freeze is automatically handled by LVM." msgstr "" #: ../ops-user-facing-operations.rst:2127 msgid "" "If your operating system doesn't have a version of ``fsfreeze`` available, " "you can use ``xfs_freeze`` instead, which is available on Ubuntu in the " "xfsprogs package. Despite the \"xfs\" in the name, xfs_freeze also works on " "ext3 and ext4 if you are using a Linux kernel version 2.6.29 or greater, " "since it works at the virtual file system (VFS) level starting at 2.6.29. " "The xfs_freeze version supports the same command-line arguments as " "``fsfreeze``." msgstr "" #: ../ops-user-facing-operations.rst:2135 msgid "" "Consider the example where you want to take a snapshot of a persistent block " "storage volume, detected by the guest operating system as ``/dev/vdb`` and " "mounted on ``/mnt``. The fsfreeze command accepts two arguments:" msgstr "" #: ../ops-user-facing-operations.rst:2141 msgid "Freeze the system" msgstr "" #: ../ops-user-facing-operations.rst:2144 msgid "Thaw (unfreeze) the system" msgstr "" #: ../ops-user-facing-operations.rst:2146 msgid "" "To freeze the volume in preparation for snapshotting, you would do the " "following, as root, inside the instance:" msgstr "" #: ../ops-user-facing-operations.rst:2153 msgid "" "You *must mount the file system* before you run the :command:`fsfreeze` " "command." msgstr "" #: ../ops-user-facing-operations.rst:2156 msgid "" "When the :command:`fsfreeze -f` command is issued, all ongoing transactions " "in the file system are allowed to complete, new write system calls are " "halted, and other calls that modify the file system are halted. Most " "importantly, all dirty data, metadata, and log information are written to " "disk." msgstr "" #: ../ops-user-facing-operations.rst:2162 msgid "" "Once the volume has been frozen, do not attempt to read from or write to the " "volume, as these operations hang. The operating system stops every I/O " "operation and any I/O attempts are delayed until the file system has been " "unfrozen." msgstr "" #: ../ops-user-facing-operations.rst:2167 msgid "" "Once you have issued the :command:`fsfreeze` command, it is safe to perform " "the snapshot. 
"For example, if the volume of your instance was named ``mon-volume`` and "
"you wanted to snapshot it to an image named ``mon-snapshot``, you could "
"now run the following:"
msgstr ""

#: ../ops-user-facing-operations.rst:2176
msgid ""
"When the snapshot is done, you can thaw the file system with the following "
"command, as root, inside of the instance:"
msgstr ""

#: ../ops-user-facing-operations.rst:2183
msgid ""
"If you want to back up the root file system, you can't simply run the "
"preceding command because it will freeze the prompt. Instead, run the "
"following one-liner, as root, inside the instance:"
msgstr ""

#: ../ops-user-facing-operations.rst:2191
msgid ""
"After running this command, it is common practice to call :command:"
"`openstack image create` from your workstation and, once that is done, to "
"press Enter in your instance shell to unfreeze it. You could automate "
"this, but at least it lets you synchronize properly."
msgstr ""

#: ../ops-user-facing-operations.rst:2198
msgid "**Ensuring Snapshots of Windows Guests Are Consistent**"
msgstr ""

#: ../ops-user-facing-operations.rst:2200
msgid ""
"Obtaining consistent snapshots of Windows VMs is conceptually similar to "
"obtaining consistent snapshots of Linux VMs, although it requires "
"additional utilities to coordinate with a Windows-only subsystem designed "
"to facilitate consistent backups."
msgstr ""

#: ../ops-user-facing-operations.rst:2205
msgid ""
"Windows XP and later releases include a Volume Shadow Copy Service (VSS) "
"which provides a framework so that compliant applications can be "
"consistently backed up on a live filesystem. To use this framework, a VSS "
"requestor is run that signals to the VSS service that a consistent backup "
"is needed. The VSS service notifies compliant applications (called VSS "
"writers) to quiesce their data activity. The VSS service then tells the "
"copy provider to create a snapshot. Once the snapshot has been made, the "
"VSS service unfreezes VSS writers and normal I/O activity resumes."
msgstr ""

#: ../ops-user-facing-operations.rst:2215
msgid ""
"QEMU provides a guest agent that can be run in guests running on KVM "
"hypervisors. This guest agent, on Windows VMs, coordinates with the "
"Windows VSS service to facilitate a workflow which ensures consistent "
"snapshots. This feature requires at least QEMU 1.7. The relevant guest "
"agent commands are:"
msgstr ""

#: ../ops-user-facing-operations.rst:2222
msgid ""
"Write out \"dirty\" buffers to disk, similar to the Linux ``sync`` "
"operation."
msgstr ""

#: ../ops-user-facing-operations.rst:2223
msgid "guest-file-flush"
msgstr ""

#: ../ops-user-facing-operations.rst:2226
msgid ""
"Suspend I/O to the disks, similar to the Linux ``fsfreeze -f`` operation."
msgstr ""

#: ../ops-user-facing-operations.rst:2227
msgid "guest-fsfreeze-freeze"
msgstr ""

#: ../ops-user-facing-operations.rst:2230
msgid ""
"Resume I/O to the disks, similar to the Linux ``fsfreeze -u`` operation."
msgstr ""

#: ../ops-user-facing-operations.rst:2231
msgid "guest-fsfreeze-thaw"
msgstr ""

#: ../ops-user-facing-operations.rst:2233
msgid ""
"To obtain snapshots of a Windows VM, these commands can be scripted in "
"sequence: flush the filesystems, freeze the filesystems, snapshot the "
"filesystems, and then unfreeze the filesystems. As with scripting similar "
"workflows against Linux VMs, care must be used when writing such a script "
"to ensure error handling is thorough and filesystems will not be left in a "
"frozen state."
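"\n"
"\n"
"A minimal sketch of the freeze/thaw pair using ``virsh`` on the compute "
"node, with a hypothetical domain name and the snapshot step elided:\n"
"\n"
".. code-block:: console\n"
"\n"
"   # virsh qemu-agent-command instance-00000042 '{\"execute\":\"guest-fsfreeze-freeze\"}'\n"
"   (take the snapshot here, then thaw)\n"
"   # virsh qemu-agent-command instance-00000042 '{\"execute\":\"guest-fsfreeze-thaw\"}'\n"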
msgstr "" #: ../ops-user-facing-operations.rst:2241 msgid "Instances in the Database" msgstr "" #: ../ops-user-facing-operations.rst:2243 msgid "" "While instance information is stored in a number of database tables, the " "table you most likely need to look at in relation to user instances is the " "instances table." msgstr "" #: ../ops-user-facing-operations.rst:2247 msgid "" "The instances table carries most of the information related to both running " "and deleted instances. It has a bewildering array of fields; for an " "exhaustive list, look at the database. These are the most useful fields for " "operators looking to form queries:" msgstr "" #: ../ops-user-facing-operations.rst:2252 msgid "" "The ``deleted`` field is set to ``1`` if the instance has been deleted and " "``NULL`` if it has not been deleted. This field is important for excluding " "deleted instances from your queries." msgstr "" #: ../ops-user-facing-operations.rst:2256 msgid "" "The ``uuid`` field is the UUID of the instance and is used throughout other " "tables in the database as a foreign key. This ID is also reported in logs, " "the dashboard, and command-line tools to uniquely identify an instance." msgstr "" #: ../ops-user-facing-operations.rst:2261 msgid "" "A collection of foreign keys are available to find relations to the " "instance. The most useful of these — ``user_id`` and ``project_id`` are the " "UUIDs of the user who launched the instance and the project it was launched " "in." msgstr "" #: ../ops-user-facing-operations.rst:2266 msgid "The ``host`` field tells which compute node is hosting the instance." msgstr "" #: ../ops-user-facing-operations.rst:2268 msgid "" "The ``hostname`` field holds the name of the instance when it is launched. " "The display-name is initially the same as hostname but can be reset using " "the nova rename command." msgstr "" #: ../ops-user-facing-operations.rst:2272 msgid "" "A number of time-related fields are useful for tracking when state changes " "happened on an instance:" msgstr "" #: ../ops-user-facing-operations.rst:2275 msgid "``created_at``" msgstr "" #: ../ops-user-facing-operations.rst:2277 msgid "``updated_at``" msgstr "" #: ../ops-user-facing-operations.rst:2279 msgid "``deleted_at``" msgstr "" #: ../ops-user-facing-operations.rst:2281 msgid "``scheduled_at``" msgstr "" #: ../ops-user-facing-operations.rst:2283 msgid "``launched_at``" msgstr "" #: ../ops-user-facing-operations.rst:2285 msgid "``terminated_at``" msgstr "" #: ../ops-user-facing-operations.rst:2288 msgid "Good Luck!" msgstr "" #: ../ops-user-facing-operations.rst:2290 msgid "" "This section was intended as a brief introduction to some of the most useful " "of many OpenStack commands. For an exhaustive list, please refer to the " "`OpenStack Administrator Guide `__. " "We hope your users remain happy and recognize your hard work! (For more hard " "work, turn the page to the next chapter, where we discuss the system-facing " "operations: maintenance, failures and debugging.)" msgstr "" #: ../ops-users.rst:3 msgid "User Management" msgstr "" #: ../ops-users.rst:5 msgid "" "The OpenStack Dashboard provides a graphical interface to manage users. This " "section describes user management with the Dashboard." msgstr "" #: ../ops-users.rst:8 msgid "" "You can also `manage projects, users, and roles `_ from the command-" "line clients." 
msgstr "" #: ../ops-users.rst:12 msgid "" "In addition, many sites write custom tools for local needs to enforce local " "policies and provide levels of self-service to users that are not currently " "available with packaged tools." msgstr "" #: ../ops-users.rst:17 msgid "Creating New Users" msgstr "" #: ../ops-users.rst:19 msgid "To create a user, you need the following information:" msgstr "" #: ../ops-users.rst:21 msgid "Username" msgstr "" #: ../ops-users.rst:23 msgid "Email address" msgstr "" #: ../ops-users.rst:24 msgid "Password" msgstr "" #: ../ops-users.rst:25 msgid "Primary project" msgstr "" #: ../ops-users.rst:26 msgid "Role" msgstr "" #: ../ops-users.rst:27 msgid "Enabled" msgstr "" #: ../ops-users.rst:29 msgid "" "Username and email address are self-explanatory, though your site may have " "local conventions you should observe. The primary project is simply the " "first project the user is associated with and must exist prior to creating " "the user. Role is almost always going to be \"member.\" Out of the box, " "OpenStack comes with two roles defined:" msgstr "" #: ../ops-users.rst:36 msgid "A typical user" msgstr "" #: ../ops-users.rst:36 msgid "member" msgstr "" #: ../ops-users.rst:39 msgid "" "An administrative super user, which has full permissions across all projects " "and should be used with great care" msgstr "" #: ../ops-users.rst:40 msgid "admin" msgstr "" #: ../ops-users.rst:42 msgid "It is possible to define other roles, but doing so is uncommon." msgstr "" #: ../ops-users.rst:44 msgid "" "Once you've gathered this information, creating the user in the dashboard is " "just another web form similar to what we've seen before and can be found by " "clicking the :guilabel:`Users` link in the :guilabel:`Identity` navigation " "bar and then clicking the :guilabel:`Create User` button at the top right." msgstr "" #: ../ops-users.rst:50 msgid "" "Modifying users is also done from this :guilabel:`Users` page. If you have a " "large number of users, this page can get quite crowded. The :guilabel:" "`Filter` search box at the top of the page can be used to limit the users " "listing. A form very similar to the user creation dialog can be pulled up by " "selecting :guilabel:`Edit` from the actions drop-down menu at the end of the " "line for the user you are modifying." msgstr "" #: ../ops-users.rst:58 msgid "Associating Users with Projects" msgstr "" #: ../ops-users.rst:60 msgid "" "Many sites run with users being associated with only one project. This is a " "more conservative and simpler choice both for administration and for users. " "Administratively, if a user reports a problem with an instance or quota, it " "is obvious which project this relates to. Users needn't worry about what " "project they are acting in if they are only in one project. However, note " "that, by default, any user can affect the resources of any other user within " "their project. It is also possible to associate users with multiple projects " "if that makes sense for your organization." msgstr "" #: ../ops-users.rst:70 msgid "" "Associating existing users with an additional project or removing them from " "an older project is done from the :guilabel:`Projects` page of the dashboard " "by selecting :guilabel:`Manage Members` from the :guilabel:`Actions` column, " "as shown in the screenshot below." msgstr "" #: ../ops-users.rst:75 msgid "" "From this view, you can do a number of useful things, as well as a few " "dangerous ones." 
msgstr "" #: ../ops-users.rst:78 msgid "" "The first column of this form, named :guilabel:`All Users`, includes a list " "of all the users in your cloud who are not already associated with this " "project. The second column shows all the users who are. These lists can be " "quite long, but they can be limited by typing a substring of the username " "you are looking for in the filter field at the top of the column." msgstr "" #: ../ops-users.rst:85 msgid "" "From here, click the :guilabel:`+` icon to add users to the project. Click " "the :guilabel:`-` to remove them." msgstr "" #: ../ops-users.rst:91 msgid "" "The dangerous possibility comes with the ability to change member roles. " "This is the dropdown list below the username in the :guilabel:`Project " "Members` list. In virtually all cases, this value should be set to :guilabel:" "`Member`. This example purposefully shows an administrative user where this " "value is ``admin``." msgstr "" #: ../ops-users.rst:99 msgid "" "The admin is global, not per project, so granting a user the ``admin`` role " "in any project gives the user administrative rights across the whole cloud." msgstr "" #: ../ops-users.rst:103 msgid "" "Typical use is to only create administrative users in a single project, by " "convention the admin project, which is created by default during cloud " "setup. If your administrative users also use the cloud to launch and manage " "instances, it is strongly recommended that you use separate user accounts " "for administrative access and normal operations and that they be in distinct " "projects." msgstr "" #: ../ops-users.rst:111 msgid "Customizing Authorization" msgstr "" #: ../ops-users.rst:113 msgid "" "The default :term:`authorization` settings allow administrative users only " "to create resources on behalf of a different project. OpenStack handles two " "kinds of authorization policies:" msgstr "" #: ../ops-users.rst:118 msgid "" "Policies specify access criteria for specific operations, possibly with fine-" "grained control over specific attributes." msgstr "" #: ../ops-users.rst:119 msgid "Operation based" msgstr "" #: ../ops-users.rst:122 msgid "" "Whether access to a specific resource might be granted or not according to " "the permissions configured for the resource (currently available only for " "the network resource). The actual authorization policies enforced in an " "OpenStack service vary from deployment to deployment." msgstr "" #: ../ops-users.rst:126 msgid "Resource based" msgstr "" #: ../ops-users.rst:128 msgid "" "The policy engine reads entries from the ``policy.json`` file. The actual " "location of this file might vary from distribution to distribution: for " "nova, it is typically in ``/etc/nova/policy.json``. You can update entries " "while the system is running, and you do not have to restart services. " "Currently, the only way to update such policies is to edit the policy file." msgstr "" #: ../ops-users.rst:135 msgid "" "The OpenStack service's policy engine matches a policy directly. A rule " "indicates evaluation of the elements of such policies. For instance, in a " "``compute:create: \"rule:admin_or_owner\"`` statement, the policy is " "``compute:create``, and the rule is ``admin_or_owner``." msgstr "" #: ../ops-users.rst:140 msgid "" "Policies are triggered by an OpenStack policy engine whenever one of them " "matches an OpenStack API operation or a specific attribute being used in a " "given operation. 
"For instance, the engine tests the ``compute:create`` policy every time a "
"user sends a ``POST /v2/{tenant_id}/servers`` request to the OpenStack "
"Compute API server. Policies can also be related to specific :term:`API "
"extensions `. For instance, if a user needs an extension like "
"``compute_extension:rescue``, the attributes defined by the provider "
"extensions trigger the rule test for that operation."
msgstr ""

#: ../ops-users.rst:150
msgid ""
"An authorization policy can be composed of one or more rules. If multiple "
"rules are specified, the policy evaluates successfully if any of the rules "
"evaluates successfully; if an API operation matches multiple policies, "
"then all of the policies must evaluate successfully. Authorization rules "
"are also recursive: once a rule is matched, it can resolve to another "
"rule, until a terminal rule is reached. These are the rules defined:"
msgstr ""

#: ../ops-users.rst:159
msgid ""
"Evaluate successfully if the user submitting the request has the specified "
"role. For instance, ``\"role:admin\"`` is successful if the user "
"submitting the request is an administrator."
msgstr ""

#: ../ops-users.rst:161
msgid "Role-based rules"
msgstr ""

#: ../ops-users.rst:164
msgid ""
"Evaluate successfully if a field of the resource specified in the current "
"request matches a specific value. For instance, ``\"field:networks:"
"shared=True\"`` is successful if the ``shared`` attribute of the network "
"resource is set to ``true``."
msgstr ""

#: ../ops-users.rst:167
msgid "Field-based rules"
msgstr ""

#: ../ops-users.rst:170
msgid ""
"Compare an attribute in the resource with an attribute extracted from the "
"user's security credentials; the rule evaluates successfully if the "
"comparison succeeds. For instance, ``\"tenant_id:%(tenant_id)s\"`` is "
"successful if the tenant identifier in the resource is equal to the tenant "
"identifier of the user submitting the request."
msgstr ""

#: ../ops-users.rst:175
msgid "Generic rules"
msgstr ""

#: ../ops-users.rst:177
msgid "Here are snippets of the default nova ``policy.json`` file:"
msgstr ""

#: ../ops-users.rst:204
msgid ""
"Shows a rule that evaluates successfully if the current user is an "
"administrator or the owner of the resource specified in the request "
"(tenant identifier is equal)."
msgstr ""

#: ../ops-users.rst:208
msgid ""
"Shows the default policy, which is always evaluated if an API operation "
"does not match any of the policies in ``policy.json``."
msgstr ""

#: ../ops-users.rst:211
msgid ""
"Shows a policy restricting the ability to manipulate flavors to "
"administrators using the Admin API only."
msgstr ""

#: ../ops-users.rst:214
msgid ""
"In some cases, some operations should be restricted to administrators "
"only. Therefore, as a further example, let us consider how this sample "
"policy file could be modified in a scenario where we enable users to "
"create their own flavors:"
msgstr ""

#: ../ops-users.rst:224
msgid "Users Who Disrupt Other Users"
msgstr ""

#: ../ops-users.rst:226
msgid ""
"Users on your cloud can disrupt other users, sometimes intentionally and "
"maliciously and other times by accident. Understanding the situation "
"allows you to make a better decision on how to handle the disruption."
msgstr ""

#: ../ops-users.rst:231
msgid ""
"For example, a group of users have instances that are utilizing a large "
"amount of compute resources for very compute-intensive tasks. This is "
"driving the load up on compute nodes and affecting other users. 
In this " "situation, review your user use cases. You may find that high compute " "scenarios are common, and should then plan for proper segregation in your " "cloud, such as host aggregation or regions." msgstr "" #: ../ops-users.rst:238 msgid "" "Another example is a user consuming a very large amount of bandwidth. Again, " "the key is to understand what the user is doing. If she naturally needs a " "high amount of bandwidth, you might have to limit her transmission rate as " "to not affect other users or move her to an area with more bandwidth " "available. On the other hand, maybe her instance has been hacked and is part " "of a botnet launching DDOS attacks. Resolution of this issue is the same as " "though any other server on your network has been hacked. Contact the user " "and give her time to respond. If she doesn't respond, shut down the instance." msgstr "" #: ../ops-users.rst:249 msgid "" "A final example is if a user is hammering cloud resources repeatedly. " "Contact the user and learn what he is trying to do. Maybe he doesn't " "understand that what he's doing is inappropriate, or maybe there is an issue " "with the resource he is trying to access that is causing his requests to " "queue or lag." msgstr "" #: ../preface.rst:3 msgid "Preface" msgstr "" #: ../preface.rst:5 msgid "" "OpenStack is an open source platform that lets you build an :term:" "`Infrastructure-as-a-Service (IaaS)` cloud that runs on commodity hardware." msgstr "" #: ../preface.rst:10 msgid "Introduction to OpenStack" msgstr "" #: ../preface.rst:12 msgid "" "OpenStack believes in open source, open design, and open development, all in " "an open community that encourages participation by anyone. The long-term " "vision for OpenStack is to produce a ubiquitous open source cloud computing " "platform that meets the needs of public and private cloud providers " "regardless of size. OpenStack services control large pools of compute, " "storage, and networking resources throughout a data center." msgstr "" #: ../preface.rst:20 msgid "" "The technology behind OpenStack consists of a series of interrelated " "projects delivering various components for a cloud infrastructure solution. " "Each service provides an open API so that all of these resources can be " "managed through a dashboard that gives administrators control while " "empowering users to provision resources through a web interface, a command-" "line client, or software development kits that support the API. Many " "OpenStack APIs are extensible, meaning you can keep compatibility with a " "core set of calls while providing access to more resources and innovating " "through API extensions. The OpenStack project is a global collaboration of " "developers and cloud computing technologists. The project produces an open " "standard cloud computing platform for both public and private clouds. By " "focusing on ease of implementation, massive scalability, a variety of rich " "features, and tremendous extensibility, the project aims to deliver a " "practical and reliable cloud solution for all types of organizations." msgstr "" #: ../preface.rst:37 msgid "Getting Started with OpenStack" msgstr "" #: ../preface.rst:39 msgid "" "As an open source project, one of the unique aspects of OpenStack is that it " "has many different levels at which you can begin to engage with it—you don't " "have to do everything yourself." 
msgstr "" #: ../preface.rst:44 msgid "Using OpenStack" msgstr "" #: ../preface.rst:46 msgid "" "You could ask, \"Do I even need to build a cloud?\" If you want to start " "using a compute or storage service by just swiping your credit card, you can " "go to eNovance, HP, Rackspace, or other organizations to start using their " "public OpenStack clouds. Using their OpenStack cloud resources is similar to " "accessing the publicly available Amazon Web Services Elastic Compute Cloud " "(EC2) or Simple Storage Solution (S3)." msgstr "" #: ../preface.rst:54 msgid "Plug and Play OpenStack" msgstr "" #: ../preface.rst:56 msgid "" "However, the enticing part of OpenStack might be to build your own private " "cloud, and there are several ways to accomplish this goal. Perhaps the " "simplest of all is an appliance-style solution. You purchase an appliance, " "unpack it, plug in the power and the network, and watch it transform into an " "OpenStack cloud with minimal additional configuration." msgstr "" #: ../preface.rst:62 msgid "" "However, hardware choice is important for many applications, so if that " "applies to you, consider that there are several software distributions " "available that you can run on servers, storage, and network products of your " "choosing. Canonical (where OpenStack replaced Eucalyptus as the default " "cloud option in 2011), Red Hat, and SUSE offer enterprise OpenStack " "solutions and support. You may also want to take a look at some of the " "specialized distributions, such as those from Rackspace, Piston, SwiftStack, " "or Cloudscaling." msgstr "" #: ../preface.rst:71 msgid "" "Alternatively, if you want someone to help guide you through the decisions " "about the underlying hardware or your applications, perhaps adding in a few " "features or integrating components along the way, consider contacting one of " "the system integrators with OpenStack experience, such as Mirantis or " "Metacloud." msgstr "" #: ../preface.rst:77 msgid "" "If your preference is to build your own OpenStack expertise internally, a " "good way to kick-start that might be to attend or arrange a training " "session. The OpenStack Foundation has a `Training Marketplace `_ where you can look for nearby events. " "Also, the OpenStack community is `working to produce `_ open source training materials." msgstr "" #: ../preface.rst:86 msgid "Roll Your Own OpenStack" msgstr "" #: ../preface.rst:88 msgid "" "However, this guide has a different audience—those seeking flexibility from " "the OpenStack framework by deploying do-it-yourself solutions." msgstr "" #: ../preface.rst:91 msgid "" "OpenStack is designed for horizontal scalability, so you can easily add new " "compute, network, and storage resources to grow your cloud over time. In " "addition to the pervasiveness of massive OpenStack public clouds, many " "organizations, such as PayPal, Intel, and Comcast, build large-scale private " "clouds. OpenStack offers much more than a typical software package because " "it lets you integrate a number of different technologies to construct a " "cloud. This approach provides great flexibility, but the number of options " "might be daunting at first." msgstr "" #: ../preface.rst:101 msgid "Who This Book Is For" msgstr "" #: ../preface.rst:103 msgid "" "This book is for those of you starting to run OpenStack clouds as well as " "those of you who were handed an operational one and want to keep it running " "well. 
#: ../preface.rst:103 msgid "" "This book is for those of you starting to run OpenStack clouds as well as " "those of you who were handed an operational one and want to keep it running " "well. Perhaps you're on a DevOps team, perhaps you are a system " "administrator starting to dabble in the cloud, or maybe you want to get on " "the OpenStack cloud team at your company. This book is for all of you." msgstr ""
#: ../preface.rst:110 msgid "" "This guide assumes that you are familiar with a Linux distribution that " "supports OpenStack, SQL databases, and virtualization. You must be " "comfortable administering and configuring multiple Linux machines for " "networking. You must install and maintain an SQL database and occasionally " "run queries against it." msgstr ""
#: ../preface.rst:116 msgid "" "One of the most complex aspects of an OpenStack cloud is the networking " "configuration. You should be familiar with concepts such as DHCP, Linux " "bridges, VLANs, and iptables. You must also have access to a network " "hardware expert who can configure the switches and routers required in your " "OpenStack cloud." msgstr ""
#: ../preface.rst:124 msgid "" "Cloud computing is quite an advanced topic, and this book requires a lot of " "background knowledge. However, if you are fairly new to cloud computing, we " "recommend that you make use of the :doc:`common/glossary` at the back of " "the book, as well as the online documentation for OpenStack and additional " "resources mentioned in this book in :doc:`app-resources`." msgstr ""
#: ../preface.rst:131 msgid "Further Reading" msgstr ""
#: ../preface.rst:133 msgid "" "There are other books on the `OpenStack documentation website `_ that can help you get the job done." msgstr ""
#: ../preface.rst:138 msgid "" "Describes a manual installation process (by hand, without automation) for " "multiple distributions based on a packaging system:" msgstr ""
#: ../preface.rst:141 msgid "" "`OpenStack Installation Tutorial for openSUSE and SUSE Linux Enterprise " "`_" msgstr ""
#: ../preface.rst:147 msgid "" "`OpenStack Installation Tutorial for Ubuntu `_" msgstr ""
#: ../preface.rst:148 msgid "Installation Tutorials and Guides" msgstr ""
#: ../preface.rst:151 msgid "" "Contains a reference listing of all configuration options for core and " "integrated OpenStack services by release version" msgstr ""
#: ../preface.rst:152 msgid "" "`OpenStack Configuration Reference `_" msgstr ""
#: ../preface.rst:155 msgid "Contains guidelines for designing an OpenStack cloud" msgstr ""
#: ../preface.rst:155 msgid "" "`OpenStack Architecture Design Guide `_" msgstr ""
#: ../preface.rst:158 msgid "" "Contains how-to information for managing an OpenStack cloud as needed for " "your use cases, such as storage, computing, or software-defined networking" msgstr ""
#: ../preface.rst:163 msgid "" "Describes potential strategies for making your OpenStack services and " "related controllers and data stores highly available" msgstr ""
#: ../preface.rst:164 msgid "" "`OpenStack High Availability Guide `_" msgstr ""
#: ../preface.rst:167 msgid "" "Provides best practices and conceptual information about securing an " "OpenStack cloud" msgstr ""
#: ../preface.rst:168 msgid "" "`OpenStack Security Guide `_" msgstr ""
#: ../preface.rst:171 msgid "" "Shows you how to obtain, create, and modify virtual machine images that are " "compatible with OpenStack" msgstr ""
#: ../preface.rst:172 msgid "" "`Virtual Machine Image Guide `_" msgstr ""
#: ../preface.rst:175 msgid "" "Shows OpenStack end users how to create and manage resources in an " "OpenStack cloud with the OpenStack dashboard and OpenStack client commands" msgstr ""
#: ../preface.rst:177 msgid "" "`OpenStack End User Guide `_" msgstr ""
Guide `_" msgstr "" #: ../preface.rst:180 msgid "" "This guide targets OpenStack administrators seeking to deploy and manage " "OpenStack Networking (neutron)." msgstr "" #: ../preface.rst:181 msgid "" "`OpenStack Networking Guide `_" msgstr "" #: ../preface.rst:184 msgid "" "A brief overview of how to send REST API requests to endpoints for OpenStack " "services" msgstr "" #: ../preface.rst:185 msgid "" "`OpenStack API Guide `_" msgstr "" #: ../preface.rst:188 msgid "How This Book Is Organized" msgstr "" #: ../preface.rst:190 msgid "" "This book contains several parts to show best practices and tips for the " "repeated operations for running OpenStack clouds." msgstr "" #: ../preface.rst:194 msgid "" "This chapter is written to let you get your hands wrapped around your " "OpenStack cloud through command-line tools and understanding what is already " "set up in your cloud." msgstr "" #: ../preface.rst:196 msgid ":doc:`ops-lay-of-the-land`" msgstr "" #: ../preface.rst:199 msgid "" "This chapter walks through user-enabling processes that all admins must face " "to manage users, give them quotas to parcel out resources, and so on." msgstr "" #: ../preface.rst:201 msgid ":doc:`ops-projects-users`" msgstr "" #: ../preface.rst:204 msgid "" "This chapter shows you how to use OpenStack cloud resources and how to train " "your users." msgstr "" #: ../preface.rst:205 msgid ":doc:`ops-user-facing-operations`" msgstr "" #: ../preface.rst:208 msgid "" "This chapter goes into the common failures that the authors have seen while " "running clouds in production, including troubleshooting." msgstr "" #: ../preface.rst:209 msgid ":doc:`ops-maintenance`" msgstr "" #: ../preface.rst:212 msgid "" "Because network troubleshooting is especially difficult with virtual " "resources, this chapter is chock-full of helpful tips and tricks for tracing " "network traffic, finding the root cause of networking failures, and " "debugging related services, such as DHCP and DNS." msgstr "" #: ../preface.rst:215 msgid ":doc:`ops-network-troubleshooting`" msgstr "" #: ../preface.rst:218 msgid "" "This chapter shows you where OpenStack places logs and how to best read and " "manage logs for monitoring purposes." msgstr "" #: ../preface.rst:219 msgid ":doc:`ops-logging-monitoring`" msgstr "" #: ../preface.rst:222 msgid "" "This chapter describes what you need to back up within OpenStack as well as " "best practices for recovering backups." msgstr "" #: ../preface.rst:223 msgid ":doc:`ops-backup-recovery`" msgstr "" #: ../preface.rst:226 msgid "" "For readers who need to get a specialized feature into OpenStack, this " "chapter describes how to use DevStack to write custom middleware or a custom " "scheduler to rebalance your resources." msgstr "" #: ../preface.rst:228 msgid ":doc:`ops-customize`" msgstr "" #: ../preface.rst:231 msgid "" "Much of OpenStack is driver-oriented, so you can plug in different solutions " "to the base set of services. This chapter describes some advanced " "configuration topics." msgstr "" #: ../preface.rst:233 msgid ":doc:`ops-advanced-configuration`" msgstr "" #: ../preface.rst:236 msgid "" "This chapter provides upgrade information based on the architectures used in " "this book." msgstr "" #: ../preface.rst:237 msgid ":doc:`ops-upgrades`" msgstr "" #: ../preface.rst:239 msgid "**Back matter:**" msgstr "" #: ../preface.rst:242 msgid "" "You can read a small selection of use cases from the OpenStack community " "with some technical details and further resources." 
msgstr "" #: ../preface.rst:243 msgid ":doc:`app-usecases`" msgstr "" #: ../preface.rst:246 msgid "" "These are shared legendary tales of image disappearances, VM massacres, and " "crazy troubleshooting techniques that result in hard-learned lessons and " "wisdom." msgstr "" #: ../preface.rst:248 msgid ":doc:`app-crypt`" msgstr "" #: ../preface.rst:251 msgid "" "Read about how to track the OpenStack roadmap through the open and " "transparent development processes." msgstr "" #: ../preface.rst:252 msgid ":doc:`app-roadmaps`" msgstr "" #: ../preface.rst:255 msgid "" "So many OpenStack resources are available online because of the fast-moving " "nature of the project, but there are also resources listed here that the " "authors found helpful while learning themselves." msgstr "" #: ../preface.rst:258 msgid ":doc:`app-resources`" msgstr "" #: ../preface.rst:261 msgid "" "A list of terms used in this book is included, which is a subset of the " "larger OpenStack glossary available online." msgstr "" #: ../preface.rst:262 msgid ":doc:`common/glossary`" msgstr "" #: ../preface.rst:265 msgid "Why and How We Wrote This Book" msgstr "" #: ../preface.rst:267 msgid "" "We wrote this book because we have deployed and maintained OpenStack clouds " "for at least a year and we wanted to share this knowledge with others. After " "months of being the point people for an OpenStack cloud, we also wanted to " "have a document to hand to our system administrators so that they'd know how " "to operate the cloud on a daily basis—both reactively and pro-actively. We " "wanted to provide more detailed technical information about the decisions " "that deployers make along the way." msgstr "" #: ../preface.rst:276 msgid "We wrote this book to help you:" msgstr "" #: ../preface.rst:278 msgid "" "Design and create an architecture for your first nontrivial OpenStack cloud. " "After you read this guide, you'll know which questions to ask and how to " "organize your compute, networking, and storage resources and the associated " "software packages." msgstr "" #: ../preface.rst:283 msgid "Perform the day-to-day tasks required to administer a cloud." msgstr "" #: ../preface.rst:285 msgid "" "We wrote this book in a book sprint, which is a facilitated, rapid " "development production method for books. For more information, see the " "`BookSprints site `_. Your authors cobbled this " "book together in five days during February 2013, fueled by caffeine and the " "best takeout food that Austin, Texas, could offer." msgstr "" #: ../preface.rst:291 msgid "" "On the first day, we filled white boards with colorful sticky notes to start " "to shape this nebulous book about how to architect and operate clouds:" msgstr "" #: ../preface.rst:298 msgid "" "We wrote furiously from our own experiences and bounced ideas between each " "other. At regular intervals we reviewed the shape and organization of the " "book and further molded it, leading to what you see today." msgstr "" #: ../preface.rst:302 msgid "The team includes:" msgstr "" #: ../preface.rst:305 msgid "" "After learning about scalability in computing from particle physics " "experiments, such as ATLAS at the Large Hadron Collider (LHC) at CERN, Tom " "worked on OpenStack clouds in production to support the Australian public " "research sector. Tom currently serves as an OpenStack community manager and " "works on OpenStack documentation in his spare time." 
msgstr "" #: ../preface.rst:310 msgid "Tom Fifield" msgstr "" #: ../preface.rst:313 msgid "" "Diane works on the OpenStack API documentation tirelessly. She helped out " "wherever she could on this project." msgstr "" #: ../preface.rst:314 msgid "Diane Fleming" msgstr "" #: ../preface.rst:317 msgid "" "Anne is the documentation coordinator for OpenStack and also served as an " "individual contributor to the Google Documentation Summit in 2011, working " "with the Open Street Maps team. She has worked on book sprints in the past, " "with FLOSS Manuals’ Adam Hyde facilitating. Anne lives in Austin, Texas." msgstr "" #: ../preface.rst:321 msgid "Anne Gentle" msgstr "" #: ../preface.rst:324 msgid "" "An academic turned software-developer-slash-operator, Lorin worked as the " "lead architect for Cloud Services at Nimbis Services, where he deploys " "OpenStack for technical computing applications. He has been working with " "OpenStack since the Cactus release. Previously, he worked on high-" "performance computing extensions for OpenStack at University of Southern " "California's Information Sciences Institute (USC-ISI)." msgstr "" #: ../preface.rst:330 msgid "Lorin Hochstein" msgstr "" #: ../preface.rst:333 msgid "" "Adam facilitated this book sprint. He also founded the book sprint " "methodology and is the most experienced book-sprint facilitator around. See " "`BookSprints `_ for more information. Adam " "founded FLOSS Manuals—a community of some 3,000 individuals developing Free " "Manuals about Free Software. He is also the founder and project manager for " "Booktype, an open source project for writing, editing, and publishing books " "online and in print." msgstr "" #: ../preface.rst:339 msgid "Adam Hyde" msgstr "" #: ../preface.rst:342 msgid "" "Jon has been piloting an OpenStack cloud as a senior technical architect at " "the MIT Computer Science and Artificial Intelligence Lab for his researchers " "to have as much computing power as they need. He started contributing to " "OpenStack documentation and reviewing the documentation so that he could " "accelerate his learning." msgstr "" #: ../preface.rst:347 msgid "Jonathan Proulx" msgstr "" #: ../preface.rst:350 msgid "" "Everett is a developer advocate at Rackspace making OpenStack and the " "Rackspace Cloud easy to use. Sometimes developer, sometimes advocate, and " "sometimes operator, he's built web applications, taught workshops, given " "presentations around the world, and deployed OpenStack for production use by " "academia and business." msgstr "" #: ../preface.rst:354 msgid "Everett Toews" msgstr "" #: ../preface.rst:357 msgid "" "Joe has designed and deployed several clouds at Cybera, a nonprofit where " "they are building e-infrastructure to support entrepreneurs and local " "researchers in Alberta, Canada. He also actively maintains and operates " "these clouds as a systems architect, and his experiences have generated a " "wealth of troubleshooting skills for cloud environments." msgstr "" #: ../preface.rst:362 msgid "Joe Topjian" msgstr "" #: ../preface.rst:365 msgid "" "Many individual efforts keep a community book alive. Our community members " "updated content for this book year-round. Also, a year after the first " "sprint, Jon Proulx hosted a second two-day mini-sprint at MIT with the goal " "of updating the book for the latest release. Since the book's inception, " "more than 30 contributors have supported this book. We have a tool chain for " "reviews, continuous builds, and translations. 
Writers and developers " "continuously review patches, enter doc bugs, edit content, and fix doc bugs. " "We want to recognize their efforts!" msgstr "" #: ../preface.rst:375 msgid "" "The following people have contributed to this book: Akihiro Motoki, " "Alejandro Avella, Alexandra Settle, Andreas Jaeger, Andy McCallum, Benjamin " "Stassart, Chandan Kumar, Chris Ricker, David Cramer, David Wittman, Denny " "Zhang, Emilien Macchi, Gauvain Pocentek, Ignacio Barrio, James E. Blair, Jay " "Clark, Jeff White, Jeremy Stanley, K Jonathan Harker, KATO Tomoyuki, Lana " "Brindley, Laura Alves, Lee Li, Lukasz Jernas, Mario B. Codeniera, Matthew " "Kassawara, Michael Still, Monty Taylor, Nermina Miller, Nigel Williams, Phil " "Hopkins, Russell Bryant, Sahid Orentino Ferdjaoui, Sandy Walsh, Sascha " "Peilicke, Sean M. Collins, Sergey Lukjanov, Shilla Saebi, Stephen Gordon, " "Summer Long, Uwe Stuehler, Vaibhav Bhatkar, Veronica Musso, Ying Chun \"Daisy" "\" Guo, Zhengguang Ou, and ZhiQiang Fan." msgstr "" #: ../preface.rst:386 msgid "OpenStack community members" msgstr "" #: ../preface.rst:389 msgid "How to Contribute to This Book" msgstr "" #: ../preface.rst:391 msgid "" "The genesis of this book was an in-person event, but now that the book is in " "your hands, we want you to contribute to it. OpenStack documentation follows " "the coding principles of iterative work, with bug logging, investigating, " "and fixing. We also store the source content on GitHub and invite " "collaborators through the OpenStack Gerrit installation, which offers " "reviews. For the O'Reilly edition of this book, we are using the company's " "Atlas system, which also stores source content on GitHub and enables " "collaboration among contributors." msgstr "" #: ../preface.rst:400 msgid "" "Learn more about how to contribute to the OpenStack docs at `OpenStack " "Documentation Contributor Guide `_." msgstr "" #: ../preface.rst:404 msgid "" "If you find a bug and can't fix it or aren't sure it's really a doc bug, log " "a bug at `OpenStack Manuals `_. Tag the bug under Extra options with the ``ops-guide`` tag to " "indicate that the bug is in this guide. You can assign the bug to yourself " "if you know how to fix it. Also, a member of the OpenStack doc-core team can " "triage the doc bug." msgstr ""