High-Performance Computing Collaboration Platform: OpenStack Compute Service (nova)
I. Introduction to nova
nova is the compute service in OpenStack. Its main job is to manage virtual machines on compute nodes. A compute node is a host that runs virtual machine instances, and a deployment usually has many of them, so which server should a virtual machine be started on, and how should it be started? Answering those questions is nova's job. As OpenStack users we do not need to care which server a virtual machine ends up on or how it gets started there, because the nova service takes care of that for us.
Nova architecture diagram
The nova service consists of many components; the core ones are nova-api, nova-scheduler, nova-conductor, nova-console, nova-novncproxy, nova-placement (the Placement API), and nova-compute. nova-api receives client requests, places the request messages on the corresponding message queue, and writes the user's request into the nova database; it is effectively the entry point of the service. nova-scheduler schedules user requests; for example, deciding which physical server a new virtual machine should be created on is nova-scheduler's decision. It puts its scheduling result on the corresponding message queue and also writes the scheduling information into the nova database. nova-conductor writes updated virtual machine information into the nova database on behalf of the other components: every database-write request on the queues is first handed to the queue that nova-conductor subscribes to, and nova-conductor then writes to the database at a controlled pace. This mainly reduces pressure on the database and avoids failures caused by overloading it. nova-console provides console services for virtual machines and writes the console addresses into the nova database. nova-novncproxy proxies user access to virtual machine consoles through noVNC. The Placement service tracks the resource usage of each compute node. nova-compute calls the hypervisor on the compute node to manage virtual machines. All of these components call each other through a message queue service, which decouples them from one another; the nova service therefore depends heavily on the message queue service.
Nova's core workflow
When nova-api receives a user request, for example to create a virtual machine instance, it places the request on the message queue, writes the request information into the nova database, and then goes on to accept other requests. nova-api places the request on the unscheduled-requests queue; nova-scheduler takes it from that queue, makes a scheduling decision, puts the result on the queue subscribed to by the chosen compute node, and also writes the decision into the nova database. The nova-compute service on that compute node then picks up the scheduled message and processes it, which means calling the local hypervisor to create the virtual machine; finally it puts a creation-succeeded message back on the queue, nova-api fetches that message from the queue, and nova-api returns the result to the user. The other components work along the same lines: each puts its processing result on the corresponding message queue and the other components fetch results from the queue, which is how the components call one another.
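Since every cross-component call travels through the message queue, you can get a rough feel for this decoupling on the message-queue host (node02 in this setup) by listing the queues nova has declared. This is only an optional sanity check; the exact queue names vary by deployment:
[root@node02 ~]# rabbitmqctl list_queues name | grep -Ei 'scheduler|conductor|compute'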
II. Installing, configuring, and testing the nova service
1. Create the databases, the users, and their grants
[root@node02 ~]# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.01 sec)
MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE nova_cell0;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE placement;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'nova123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| glance |
| information_schema |
| keystone |
| mysql |
| nova |
| nova_api |
| nova_cell0 |
| performance_schema |
| placement |
| test |
+--------------------+
10 rows in set (0.05 sec)
MariaDB [(none)]>
Note: the statements above create four databases — nova_api, nova, nova_cell0, and placement — and two users. The nova user is allowed to connect from any host and is granted full privileges (select, insert, update, delete, and so on) on all tables in the nova_api, nova, and nova_cell0 databases; the placement user is allowed to connect from any host and is granted full privileges on all tables in the placement database.
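Besides the remote-connection tests below, you can optionally confirm the grants directly in the MariaDB monitor on node02; SHOW GRANTS is standard MariaDB/MySQL syntax:
SHOW GRANTS FOR 'nova'@'%';
SHOW GRANTS FOR 'placement'@'%';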
Verification: connect to MariaDB from another host as the nova user. Does it connect normally, and can it see the nova_api, nova, and nova_cell0 databases?
[root@node01 ~]# mysql -unova -pnova123 -hnode02
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| nova |
| nova_api |
| nova_cell0 |
| test |
+--------------------+
5 rows in set (0.00 sec)
MariaDB [(none)]> exit
Bye
[root@node01 ~]#
Note: connecting with the nova user and its password shows the three databases we granted access to, so creating the nova user and its grants worked correctly.
Verification: connect to MariaDB from another host as the placement user. Does it connect normally, and can it see the placement database?
[root@node01 ~]# mysql -uplacement -pnova123 -hnode02
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| placement |
| test |
+--------------------+
3 rows in set (0.00 sec)
MariaDB [(none)]> exit
Bye
[root@node01 ~]#
Note: seeing the placement database confirms that the placement account is working.
2. Install and configure the nova service on the controller node
Export the admin user's environment variables, then create the nova user and set its password to nova
[root@node01 ~]# source admin.sh
[root@node01 ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 47c0915c914c49bb8670703e4315a80f |
| enabled | True |
| id | 8e0ed287f92749e098a913a3edb90c74 |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
[root@node01 ~]#
Grant the nova user the admin role on the service project
[root@node01 ~]# openstack role add --project service --user nova admin
[root@node01 ~]#
Create the nova service entity with the service type compute
[root@node01 ~]# openstack service create --name nova \
>   --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 8e002dd8e3ba4bd98a15b433dede19a3 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
[root@node01 ~]#
Create the Compute API endpoints (register the service endpoints)
Create the public endpoint
[root@node01 ~]# openstack endpoint create --region RegionOne \
>   compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 7524a1aa1c6f4c21ac4917c1865667f3 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8e002dd8e3ba4bd98a15b433dede19a3 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
[root@node01 ~]#
Create the internal endpoint
[root@node01 ~]# openstack endpoint create --region RegionOne \
>   compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1473a41427174c24b8d84c62b25262f6 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8e002dd8e3ba4bd98a15b433dede19a3 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
[root@node01 ~]#
Create the admin endpoint
[root@node01 ~]# openstack endpoint create --region RegionOne \
>   compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 3427fe37f3564252bffe0ee2f6bc766c |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8e002dd8e3ba4bd98a15b433dede19a3 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
[root@node01 ~]#
Create the placement user and set its password to placement
[root@node01 ~]# openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 47c0915c914c49bb8670703e4315a80f |
| enabled | True |
| id | a75c42cd405b4ea4885141df228b4caf |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
[root@node01 ~]#
Grant the placement user the admin role on the service project
[root@node01 ~]# openstack role add --project service --user placement admin
[root@node01 ~]#
Create the placement service entity with the service type placement
[root@node01 ~]# openstack service create --name placement \
>   --description "Placement API" placement
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | de21b8c49adb4a8d88c38a08d5db2d59 |
| name | placement |
| type | placement |
+-------------+----------------------------------+
[root@node01 ~]#
Create the Placement API endpoints (register the service endpoints)
Public endpoint
[root@node01 ~]# openstack endpoint create --region RegionOne \
>   placement public http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 222b6f91a2674ea993524c94e41a5757 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | de21b8c49adb4a8d88c38a08d5db2d59 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@node01 ~]#
Internal endpoint
[root@node01 ~]# openstack endpoint create --region RegionOne \
>   placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 04fa958200a943f4905893c6063389ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | de21b8c49adb4a8d88c38a08d5db2d59 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@node01 ~]#
Admin endpoint
[root@node01 ~]# openstack endpoint create --region RegionOne \
>   placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 6ddf51b6d9d8467e92cbf22c40e1ba1c |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | de21b8c49adb4a8d88c38a08d5db2d59 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@node01 ~]#
Verification: list the endpoints on the controller node. Were the nova and placement service endpoints all created successfully?
[root@node01 ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| 04cd3747614b42a3ba086cef39a1acd9 | RegionOne | glance | image | True | admin | http://controller:9292 |
| 04fa958200a943f4905893c6063389ab | RegionOne | placement | placement | True | internal | http://controller:8778 |
| 09f5ec434ea24d4c8dc9efe2bbb62b01 | RegionOne | glance | image | True | internal | http://controller:9292 |
| 1473a41427174c24b8d84c62b25262f6 | RegionOne | nova | compute | True | internal | http://controller:8774/v2.1 |
| 222b6f91a2674ea993524c94e41a5757 | RegionOne | placement | placement | True | public | http://controller:8778 |
| 3427fe37f3564252bffe0ee2f6bc766c | RegionOne | nova | compute | True | admin | http://controller:8774/v2.1 |
| 358ccfc245264b60a9d1a0c113dfa628 | RegionOne | glance | image | True | public | http://controller:9292 |
| 3bd05493999b462eb4b4af8d5e5c1fa9 | RegionOne | keystone | identity | True | admin | http://controller:5000/v3 |
| 5293ad18db674ea1b01d8f401cb2cf14 | RegionOne | keystone | identity | True | public | http://controller:5000/v3 |
| 6593f8d808094b01a6311828f2ef72bd | RegionOne | keystone | identity | True | internal | http://controller:5000/v3 |
| 6ddf51b6d9d8467e92cbf22c40e1ba1c | RegionOne | placement | placement | True | admin | http://controller:8778 |
| 7524a1aa1c6f4c21ac4917c1865667f3 | RegionOne | nova | compute | True | public | http://controller:8774/v2.1 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
[root@node01 ~]#
Note: if the endpoint list contains three nova endpoints and three placement endpoints, one each for the public, internal, and admin interfaces, then endpoint registration for the nova and placement services is correct.
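To narrow the listing down to just the two services registered here, the endpoint list can also be filtered per service; for example:
[root@node01 ~]# openstack endpoint list --service nova
[root@node01 ~]# openstack endpoint list --service placement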
Install the nova service packages
[root@node01 ~]# yum install openstack-nova-api openstack-nova-conductor \
>   openstack-nova-console openstack-nova-novncproxy \
>   openstack-nova-scheduler openstack-nova-placement-api
Loaded plugins: fastestmirror
base | 3.6 kB 00:00:00
centos-ceph-luminous | 3.0 kB 00:00:00
centos-openstack-rocky | 3.0 kB 00:00:00
centos-qemu-ev | 3.0 kB 00:00:00
epel | 4.7 kB 00:00:00
extras | 2.9 kB 00:00:00
updates | 2.9 kB 00:00:00
(1/2): epel/x86_64/updateinfo | 1.0 MB 00:00:00
(2/2): epel/x86_64/primary_db | 6.9 MB 00:00:00
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* centos-qemu-ev: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package openstack-nova-api.noarch 1:18.3.0-1.el7 will be installed
--> Processing Dependency: openstack-nova-common = 1:18.3.0-1.el7 for package: 1:openstack-nova-api-18.3.0-1.el7.noarch
---> Package openstack-nova-conductor.noarch 1:18.3.0-1.el7 will be installed
---> Package openstack-nova-console.noarch 1:18.3.0-1.el7 will be installed
--> Processing Dependency: python-websockify >= 0.8.0 for package: 1:openstack-nova-console-18.3.0-1.el7.noarch
---> Package openstack-nova-novncproxy.noarch 1:18.3.0-1.el7 will be installed
……output truncated……
Installed:
openstack-nova-api.noarch 1:18.3.0-1.el7 openstack-nova-conductor.noarch 1:18.3.0-1.el7
openstack-nova-console.noarch 1:18.3.0-1.el7 openstack-nova-novncproxy.noarch 1:18.3.0-1.el7
openstack-nova-placement-api.noarch 1:18.3.0-1.el7 openstack-nova-scheduler.noarch 1:18.3.0-1.el7
Dependency Installed:
novnc.noarch 0:0.5.1-2.el7 openstack-nova-common.noarch 1:18.3.0-1.el7
python-kazoo.noarch 0:2.2.1-1.el7 python-nova.noarch 1:18.3.0-1.el7
python-oslo-versionedobjects-lang.noarch 0:1.33.3-1.el7 python-paramiko.noarch 0:2.1.1-9.el7
python-websockify.noarch 0:0.8.0-1.el7 python2-microversion-parse.noarch 0:0.2.1-1.el7
python2-os-traits.noarch 0:0.9.0-1.el7 python2-os-vif.noarch 0:1.11.2-1.el7
python2-oslo-reports.noarch 0:1.28.0-1.el7 python2-oslo-versionedobjects.noarch 0:1.33.3-1.el7
python2-psutil.x86_64 0:5.6.7-1.el7 python2-pyroute2.noarch 0:0.5.2-4.el7
python2-redis.noarch 0:2.10.6-1.el7 python2-tooz.noarch 0:1.62.1-1.el7
python2-voluptuous.noarch 0:0.11.5-1.el7.1 python2-zake.noarch 0:0.2.2-2.el7
Complete!
[root@node01 ~]#
Edit the /etc/nova/nova.conf configuration file. In the [DEFAULT] section, enable only the compute and metadata APIs and configure the RabbitMQ connection information.
In the [api_database] section, configure the connection to the nova_api database.
In the [database] section, configure the connection to the nova database.
In the [placement_database] section, configure the connection to the placement database.
In the [api] section, configure keystone as the authentication strategy.
In the [keystone_authtoken] section, configure the keystone-related settings.
In the [DEFAULT] section, enable support for neutron and the corresponding firewall driver.
In the [vnc] section, enable VNC and set the VNC listen address and the proxy client address; here both can simply use the controller name, which resolves to the controller's address.
In the [glance] section, configure the glance API address.
In the [oslo_concurrency] section, configure the lock file path.
In the [placement] section, configure the Placement API connection settings. You can edit the file by hand, or set the same keys from the shell as sketched right after this list.
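If you prefer setting these keys from the shell instead of editing the file by hand, the openstack-config helper from the openstack-utils package (assuming it is installed) writes the same values; two representative lines are sketched below, and the remaining keys from the list above follow the same section/key/value pattern:
[root@node01 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
[root@node01 ~]# openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:nova123@node02/nova_api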
Final /etc/nova/nova.conf configuration
[root@node01 ~]# grep -i ^"[a-z\[]" /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack123@node02
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:nova123@node02/nova_api
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:nova123@node02/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = node02:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
[placement_database]
connection = mysql+pymysql://placement:nova123@node02/placement
[powervm]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = controller
server_proxyclient_address = controller
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
[root@node01 ~]#
Edit the /etc/httpd/conf.d/00-nova-placement-api.conf configuration file to add access control for the Placement API, appending the following at the end of the file
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
Restart httpd
Note: after restarting httpd, make sure that ports 5000 and 8778 are listening.
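A minimal way to perform the restart and the port check (ss from iproute is assumed to be available; netstat works just as well):
[root@node01 ~]# systemctl restart httpd
[root@node01 ~]# ss -tnl | grep -E '5000|8778'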
Initialize the databases
Initialize the nova_api and placement databases
[root@node01 ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@node01 ~]#
Verification: do the nova_api and placement databases now contain tables?
MariaDB [(none)]> use nova_api
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [nova_api]> show tables;
+------------------------------+
| Tables_in_nova_api |
+------------------------------+
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| build_requests |
| cell_mappings |
| consumers |
| flavor_extra_specs |
| flavor_projects |
| flavors |
| host_mappings |
| instance_group_member |
| instance_group_policy |
| instance_groups |
| instance_mappings |
| inventories |
| key_pairs |
| migrate_version |
| placement_aggregates |
| project_user_quotas |
| projects |
| quota_classes |
| quota_usages |
| quotas |
| request_specs |
| reservations |
| resource_classes |
| resource_provider_aggregates |
| resource_provider_traits |
| resource_providers |
| traits |
| users |
+------------------------------+
32 rows in set (0.00 sec)
MariaDB [nova_api]> use placement
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [placement]> show tables;
+------------------------------+
| Tables_in_placement |
+------------------------------+
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| build_requests |
| cell_mappings |
| consumers |
| flavor_extra_specs |
| flavor_projects |
| flavors |
| host_mappings |
| instance_group_member |
| instance_group_policy |
| instance_groups |
| instance_mappings |
| inventories |
| key_pairs |
| migrate_version |
| placement_aggregates |
| project_user_quotas |
| projects |
| quota_classes |
| quota_usages |
| quotas |
| request_specs |
| reservations |
| resource_classes |
| resource_provider_aggregates |
| resource_provider_traits |
| resource_providers |
| traits |
| users |
+------------------------------+
32 rows in set (0.00 sec)
MariaDB [placement]>
Register cell0
[root@node01 ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
[root@node01 ~]#
Create cell1
[root@node01 ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
2ad18452-0e55-4505-ba5e-76cbf071b0d6
[root@node01 ~]#
Verify that cell0 and cell1 are registered correctly
[root@node01 ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+--------------------------------+---------------------------------------------+----------+
| Name | UUID | Transport URL | Database Connection | Disabled |
+-------+--------------------------------------+--------------------------------+---------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@node02/nova_cell0 | False |
| cell1 | 2ad18452-0e55-4505-ba5e-76cbf071b0d6 | rabbit://openstack:****@node02 | mysql+pymysql://nova:****@node02/nova | False |
+-------+--------------------------------------+--------------------------------+---------------------------------------------+----------+
[root@node01 ~]#
Note: seeing the output above means cell0 and cell1 were registered correctly.
Initialize the nova database
[root@node01 ~]# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')
result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')
result = self._query(query)
[root@node01 ~]#
Note: the two warnings report duplicate indexes that are deprecated and will be disallowed in a future MariaDB release; they can safely be ignored here.
Verification: does the nova database now contain tables?
MariaDB [placement]> use nova
Database changed
MariaDB [nova]> show tables;
+--------------------------------------------+
| Tables_in_nova |
+--------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| block_device_mapping |
| bw_usage_cache |
| cells |
| certificates |
| compute_nodes |
| console_auth_tokens |
| console_pools |
| consoles |
| dns_domains |
| fixed_ips |
| floating_ips |
| instance_actions |
| instance_actions_events |
| instance_extra |
| instance_faults |
| instance_group_member |
| instance_group_policy |
| instance_groups |
| instance_id_mappings |
| instance_info_caches |
| instance_metadata |
| instance_system_metadata |
| instance_type_extra_specs |
| instance_type_projects |
| instance_types |
| instances |
| inventories |
| key_pairs |
| migrate_version |
| migrations |
| networks |
| pci_devices |
| project_user_quotas |
| provider_fw_rules |
| quota_classes |
| quota_usages |
| quotas |
| reservations |
| resource_provider_aggregates |
| resource_providers |
| s3_images |
| security_group_default_rules |
| security_group_instance_association |
| security_group_rules |
| security_groups |
| services |
| shadow_agent_builds |
| shadow_aggregate_hosts |
| shadow_aggregate_metadata |
| shadow_aggregates |
| shadow_block_device_mapping |
| shadow_bw_usage_cache |
| shadow_cells |
| shadow_certificates |
| shadow_compute_nodes |
| shadow_console_pools |
| shadow_consoles |
| shadow_dns_domains |
| shadow_fixed_ips |
| shadow_floating_ips |
| shadow_instance_actions |
| shadow_instance_actions_events |
| shadow_instance_extra |
| shadow_instance_faults |
| shadow_instance_group_member |
| shadow_instance_group_policy |
| shadow_instance_groups |
| shadow_instance_id_mappings |
| shadow_instance_info_caches |
| shadow_instance_metadata |
| shadow_instance_system_metadata |
| shadow_instance_type_extra_specs |
| shadow_instance_type_projects |
| shadow_instance_types |
| shadow_instances |
| shadow_key_pairs |
| shadow_migrate_version |
| shadow_migrations |
| shadow_networks |
| shadow_pci_devices |
| shadow_project_user_quotas |
| shadow_provider_fw_rules |
| shadow_quota_classes |
| shadow_quota_usages |
| shadow_quotas |
| shadow_reservations |
| shadow_s3_images |
| shadow_security_group_default_rules |
| shadow_security_group_instance_association |
| shadow_security_group_rules |
| shadow_security_groups |
| shadow_services |
| shadow_snapshot_id_mappings |
| shadow_snapshots |
| shadow_task_log |
| shadow_virtual_interfaces |
| shadow_volume_id_mappings |
| shadow_volume_usage_cache |
| snapshot_id_mappings |
| snapshots |
| tags |
| task_log |
| virtual_interfaces |
| volume_id_mappings |
| volume_usage_cache |
+--------------------------------------------+
110 rows in set (0.00 sec)
MariaDB [nova]>
Note: the nova database now contains a large number of tables, so initializing the nova database worked.
Start the nova services and enable them to start at boot
[root@node01 ~]# systemctl start openstack-nova-api.service \
>   openstack-nova-consoleauth openstack-nova-scheduler.service \
>   openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@node01 ~]# systemctl enable openstack-nova-api.service \
> openstack-nova-consoleauth openstack-nova-scheduler.service \
> openstack-nova-conductor.service openstack-nova-novncproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service to /usr/lib/systemd/system/openstack-nova-consoleauth.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.
[root@node01 ~]#
Verify that the corresponding service ports are listening
Note: 6080 is the port nova-novncproxy listens on; 8774 and 8775 are the ports nova-api listens on; 8778 is the port the placement service listens on. If all four ports are up and listening, the nova configuration on the controller node is fine.
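One way to check, again assuming ss is available:
[root@node01 ~]# ss -tnl | grep -E '6080|8774|8775|8778'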
At this point the nova service is installed and configured on the controller node.
3. Install and configure the nova service on the compute node
Install the nova-compute package
[root@node03 ~]# yum install openstack-nova-compute -y
Edit the /etc/nova/nova.conf configuration file. In the [DEFAULT] section, enable only the compute and metadata APIs and configure the RabbitMQ connection information.
In the [api] section, configure keystone as the authentication strategy.
In the [keystone_authtoken] section, configure the keystone-related settings.
In the [DEFAULT] section, enable support for neutron and the corresponding firewall driver.
In the [vnc] section, enable VNC and configure the VNC server address and the noVNC proxy base URL.
Note: server_proxyclient_address can be an IP address or a hostname; if you use a hostname, make sure it resolves to the compute node's IP address.
In the [glance] section, configure the glance API address.
In the [oslo_concurrency] section, configure the lock file path.
In the [placement] section, configure the Placement API connection settings. (The [vnc] section, which differs most from the controller node, is sketched right after this list.)
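For the [vnc] section specifically, the same openstack-config approach sketched earlier could be used on the compute node — again assuming openstack-utils is installed; the values simply mirror the final configuration shown further below:
[root@node03 ~]# openstack-config --set /etc/nova/nova.conf vnc enabled true
[root@node03 ~]# openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
[root@node03 ~]# openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address node03
[root@node03 ~]# openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html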
Check whether the compute node supports hardware virtualization
[root@node03 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
[root@node03 ~]#
Note: if the command above returns 0, the compute node does not support hardware virtualization; a non-zero value means it does. If the compute node supports hardware virtualization, the nova configuration on the compute node is now complete; if it does not, we must explicitly set virt_type to qemu rather than kvm in the [libvirt] section.
Explicitly set qemu in the [libvirt] section
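The corresponding lines in /etc/nova/nova.conf are simply the following (they also appear in the final configuration below):
[libvirt]
virt_type = qemu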
Final /etc/nova/nova.conf configuration on the compute node
[root@node03 ~]# grep -i ^"[a-z\[]" /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack123@node02
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = node02:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova
[libvirt]
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
[placement_database]
[powervm]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = node03
novncproxy_base_url = http://controller:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
[root@node03 ~]#
Start the nova-compute and libvirtd services and enable them to start at boot
[root@node03 ~]# systemctl start libvirtd.service openstack-nova-compute.service
[root@node03 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@node03 ~]#
On the controller node, export the admin user's environment variables and confirm the compute node is up, then add it to the cell database
[root@node01 ~]# source admin.sh
[root@node01 ~]# openstack compute service list --service nova-compute
+----+--------------+-----------------+------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+--------------+-----------------+------+---------+-------+----------------------------+
| 9 | nova-compute | node03.test.org | nova | enabled | up | 2020-10-29T16:46:34.000000 |
+----+--------------+-----------------+------+---------+-------+----------------------------+
[root@node01 ~]#
Manually discover the compute node
[root@node01 ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 2ad18452-0e55-4505-ba5e-76cbf071b0d6
Checking host mapping for compute host 'node03.test.org': 24beeca9-7c6e-4025-ada4-f6cfffb89b5d
Creating host mapping for compute host 'node03.test.org': 24beeca9-7c6e-4025-ada4-f6cfffb89b5d
Found 1 unmapped computes in cell: 2ad18452-0e55-4505-ba5e-76cbf071b0d6
[root@node01 ~]#
Configure the interval at which new compute nodes are automatically discovered and registered
Note: this setting must be configured in the controller node's nova.conf; with the setting shown below, nova scans for newly added compute nodes every 300 seconds.
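The option lives in the [scheduler] section of the controller's /etc/nova/nova.conf; a 300-second interval looks like this (restart openstack-nova-scheduler afterwards for the change to take effect):
[scheduler]
discover_hosts_in_cells_interval = 300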
At this point the nova service on the compute node is installed and configured.
Verification: on the controller node, export the admin user's environment variables and list the service components to verify that each process started and registered successfully
[root@node01 ~]# source admin.sh
[root@node01 ~]# openstack compute service list
+----+------------------+-----------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+-----------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | node01.test.org | internal | enabled | up | 2020-10-29T16:57:33.000000 |
| 2 | nova-scheduler | node01.test.org | internal | enabled | up | 2020-10-29T16:57:33.000000 |
| 6 | nova-conductor | node01.test.org | internal | enabled | up | 2020-10-29T16:57:34.000000 |
| 9 | nova-compute | node03.test.org | nova | enabled | up | 2020-10-29T16:57:34.000000 |
+----+------------------+-----------------+----------+---------+-------+----------------------------+
[root@node01 ~]#
Note: the output shows the three service components enabled on the controller node and the one enabled on the compute node. Seeing this information means the nova service is working properly.
Verification: list the API endpoints registered with keystone
[root@node01 ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+-----------------------------------------+
| nova | compute | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | |
| keystone | identity | RegionOne |
| | | admin: http://controller:5000/v3 |
| | | RegionOne |
| | | public: http://controller:5000/v3 |
| | | RegionOne |
| | | internal: http://controller:5000/v3 |
| | | |
| glance | image | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | |
| placement | placement | RegionOne |
| | | internal: http://controller:8778 |
| | | RegionOne |
| | | public: http://controller:8778 |
| | | RegionOne |
| | | admin: http://controller:8778 |
| | | |
+-----------+-----------+-----------------------------------------+
[root@node01 ~]#
Verification: check that the cells and the Placement API are working properly
[root@node01 ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results |
+--------------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Resource Providers |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: API Service Version |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Request Spec Migration |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Console Auths |
| Result: Success |
| Details: None |
+--------------------------------+
[root@node01 ~]#
Note: every one of these checks must report Success. With that, the installation, configuration, and testing of the nova service is complete; once the neutron networking service is in place, we will be able to launch virtual machines on OpenStack.