OpenStack Networking (neutron) for an HPC Collaboration Platform

  I. Introduction

  Neutron's main job in OpenStack is to provide network services for virtual machine instances. It supports two types of networks. The first is the provider network, the familiar bridged network: the VM's interface is bridged to one of the host's physical NICs, so the VM can reach the external network directly and the external network can reach the VM. The second is the self-service network, which is a NAT network: a virtual router sits between the VMs and the host, the VM's private address is attached to one interface of the virtual router, and the router's other side is bridged to one of the host's physical NICs. NAT therefore hides the VM's address: the VM can reach the external network, but external users cannot reach the VM directly. OpenStack can, however, bind a VM to an external address with one-to-one NAT, which makes the VM reachable from the external network.

  Self-service network diagram

  Tip: the biggest difference between a self-service network and a provider network is that the self-service network contains a virtual router. Having a router means that traffic between a VM and the external network must pass through layer 3, whereas with a provider network the traffic stays at layer 2. Consequently, the two network types are implemented differently in OpenStack and require different components.

  Components required for a provider network

  Provider networks - Overview

  Provider networks - connectivity diagram

  Tip: a bridged network is also called a shared network, because the VM instances share the host's network directly through the bridge. A VM talks to the host the same way the host talks to any other machine on the same LAN, so the traffic never needs to reach layer 3 and no layer-3 configuration is involved.

  Components required for a self-service network

  Self-service networks - Overview

  Self-service networks - connectivity diagram

  Comparing the components required by the two network types, the self-service network needs one more component than the provider network: the Networking L3 agent. This agent implements layer-3 functionality, such as providing and managing virtual routers. The connectivity diagrams above also show that a self-service deployment includes the provider network, so if we choose the self-service network type we can create both self-service networks and provider (bridged) networks.

  For a self-service network, a VM started on a compute node reaches the outside world through the compute node's VXLAN interface. This interface can be thought of as a virtual switch inside the compute node: instances attach to VXLANs with different VNIs (network identifiers, similar to VLAN IDs) to keep their networks isolated. The VXLAN interface is normally bridged to the node's management interface, and the management network usually cannot reach the external network directly. Outbound traffic from the VM travels through a VXLAN tunnel whose endpoints are the compute node's management interface and the controller node's management interface. On the controller node, the virtual router applies its routing rules and forwards the traffic out through the controller's external-facing interface; that is how VMs reach the external network. Inbound access works through one-to-one NAT: the controller's external interface carries a pool of publicly reachable addresses; when a VM sends traffic out, the virtual router SNATs it to a fixed external address, and a matching DNAT rule maps that address back to the VM, so external clients connecting to that address are forwarded to the VM.
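
  For illustration, this one-to-one NAT binding corresponds to neutron's floating IP workflow. A minimal sketch with the openstack client follows; the instance name and address are placeholders, and the provider network referenced here is only created later in the deployment:

# allocate an external address from the provider network
openstack floating ip create provider
# bind it one-to-one to an instance's private address
openstack server add floating ip myinstance 203.0.113.10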

  Neutron workflow

  The neutron service consists of three main components, neutron-server, the neutron agents, and the neutron plugins, all of which rely on the message queue. neutron-server receives user requests, such as creating or managing networks. When neutron-server receives a request from a client (another OpenStack service such as nova, or the dedicated neutron client), it places the request on the message queue; the neutron agents pick the request up from the queue, carry out the network creation or management locally, and write the result to the neutron database. Note that "neutron agents" refers to a collection of agents, each responsible for one task; for example, the DHCP agent hands out IP addresses and the network management agent manages networks. The neutron plugins provide particular services through external plugins; for example, the ML2 plugin provides layer-2 virtual networking. If an agent needs a plugin while creating or managing a network, it puts a message for the plugin on the queue; the plugin takes the message from the queue, handles the request, puts the result back on the queue, and also records it in the database.
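
  As a quick illustration of this request flow, the neutron-server API can be queried directly on port 9696 once the service is running. This is only a sketch; it assumes the admin credentials are already sourced, and /v2.0/networks is the standard Networking API path:

# obtain a token from keystone, then list networks through the neutron REST API
TOKEN=$(openstack token issue -f value -c id)
curl -s -H "X-Auth-Token: $TOKEN" http://controller:9696/v2.0/networks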

  II. Installing and Configuring the Neutron Service

  1. Prepare the neutron database, create the neutron user, and grant that user full privileges on all tables in the neutron database

[root@node02 ~]# mysql

Welcome to the MariaDB monitor. Commands end with ; or \g.

Your MariaDB connection id is 184

Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE neutron;

Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';

Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]> flush privileges;

Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]>

  Verification: from another host, connect as the neutron user and confirm that the database is reachable.

[root@node01 ~]# mysql -uneutron -pneutron -hnode02

Welcome to the MariaDB monitor. Commands end with ; or \g.

Your MariaDB connection id is 185

Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;

+--------------------+

| Database |

+--------------------+

| information_schema |

| neutron |

| test |

+--------------------+

3 rows in set (0.00 sec)

MariaDB [(none)]>
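
  The same database preparation can also be scripted non-interactively, which is handy for repeat installs. A sketch, using the same account name and password as above and assuming root access to the local MariaDB instance:

# create the neutron database and user in one non-interactive step
mysql <<'EOF'
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
FLUSH PRIVILEGES;
EOF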

  2. Install and configure neutron on the controller node

  Source the admin credentials, then create the neutron user with the password neutron

[root@node01 ~]# source admin.sh

[root@node01 ~]# openstack user create --domain default --password-prompt neutron

User Password:

Repeat User Password:

+---------------------+----------------------------------+

| Field | Value |

+---------------------+----------------------------------+

| domain_id | 47c0915c914c49bb8670703e4315a80f |

| enabled | True |

| id | e7d0eae696914cc19fb8ebb24f4b5b0f |

| name | neutron |

| options | {} |

| password_expires_at | None |

+---------------------+----------------------------------+

[root@node01 ~]#
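
  If you prefer not to be prompted, the password can also be supplied directly on the command line (same value as above):

openstack user create --domain default --password neutron neutron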

  Add the neutron user to the service project and grant it the admin role

[root@node01 ~]# openstack role add --project service --user neutron admin

[root@node01 ~]#

  Create the neutron service entity

[root@node01 ~]# openstack service create --name neutron \

> --description "OpenStack Networking" network

+-------------+----------------------------------+

| Field | Value |

+-------------+----------------------------------+

| description | OpenStack Networking |

| enabled | True |

| id | 3dc79e6a21e2484e8f92869e8745122c |

| name | neutron |

| type | network |

+-------------+----------------------------------+

[root@node01 ~]#

  Create the neutron service endpoints (register the neutron service)

  Public endpoint

[root@node01 ~]# openstack endpoint create --region RegionOne \

> network public http://controller:9696

+--------------+----------------------------------+

| Field | Value |

+--------------+----------------------------------+

| enabled | True |

| id | 4a8c9c97417f4764a0e61b5a7a1f3a5f |

| interface | public |

| region | RegionOne |

| region_id | RegionOne |

| service_id | 3dc79e6a21e2484e8f92869e8745122c |

| service_name | neutron |

| service_type | network |

| url | http://controller:9696 |

+--------------+----------------------------------+

[root@node01 ~]#

  Internal endpoint

[root@node01 ~]# openstack endpoint create --region RegionOne \

> network internal http://controller:9696

+--------------+----------------------------------+

| Field | Value |

+--------------+----------------------------------+

| enabled | True |

| id | 1269653296e14406920bc43db65fd8af |

| interface | internal |

| region | RegionOne |

| region_id | RegionOne |

| service_id | 3dc79e6a21e2484e8f92869e8745122c |

| service_name | neutron |

| service_type | network |

| url | http://controller:9696 |

+--------------+----------------------------------+

[root@node01 ~]#

  Admin endpoint

[root@node01 ~]# openstack endpoint create --region RegionOne \

> network admin http://controller:9696

+--------------+----------------------------------+

| Field | Value |

+--------------+----------------------------------+

| enabled | True |

| id | 8bed1c51ed6d4f0185762edc2d5afd8a |

| interface | admin |

| region | RegionOne |

| region_id | RegionOne |

| service_id | 3dc79e6a21e2484e8f92869e8745122c |

| service_name | neutron |

| service_type | network |

| url | http://controller:9696 |

+--------------+----------------------------------+

[root@node01 ~]#
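
  Since the three endpoint commands differ only in the interface name, they could also be issued in a loop, a small sketch assuming the admin credentials are sourced:

# create the public, internal and admin endpoints for the network service
for iface in public internal admin; do
    openstack endpoint create --region RegionOne network $iface http://controller:9696
done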

  Install the neutron service packages

[root@node01 ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

  Edit the neutron configuration file /etc/neutron/neutron.conf. In the [DEFAULT] section, configure the RabbitMQ connection information, the core plugin, and the service plugins.

  Tip: I am using the self-service network type here, so service_plugins = router must be set and overlapping IP addresses enabled (allow_overlapping_ips = true).

  In the [database] section, configure the connection to the neutron database

  In the [keystone_authtoken] section, configure Keystone authentication

  In the [DEFAULT] section, configure the options for notifying nova of network topology and port changes

  In the [nova] section, configure the nova service credentials

  In the [oslo_concurrency] section, configure the lock path

   Final neutron.conf configuration

[root@node01 ~]# grep -i ^"[a-z\[]" /etc/neutron/neutron.conf

[DEFAULT]

transport_url = rabbit://openstack:openstack123@node02

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = true

auth_strategy = keystone

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

[agent]

[cors]

[database]

connection = mysql+pymysql://neutron:neutron@node02/neutron

[keystone_authtoken]

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000

memcached_servers = node02:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = neutron

[matchmaker_redis]

[nova]

auth_url = http://controller:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = nova

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[quotas]

[ssl]

[root@node01 ~]#
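
  If you prefer to apply these settings non-interactively instead of editing the file by hand, the same options can be set with crudini. A sketch under the assumption that the crudini package (or openstack-utils) is installed; only a few representative options are shown, the rest follow the same pattern:

crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
crudini --set /etc/neutron/neutron.conf DEFAULT service_plugins router
crudini --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
crudini --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:neutron@node02/neutron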

  Configure the ML2 plugin

  Edit /etc/neutron/plugins/ml2/ml2_conf.ini. In the [ml2] section, enable the flat, VLAN, and VXLAN type drivers

  Tip: after the ML2 plugin has been configured, removing values from the type_drivers option can cause database inconsistency; in other words, once the database has been initialized, deleting any of these values may leave it in an inconsistent state.

  In the [ml2] section, set the tenant (self-service) network type to vxlan

  In the [ml2] section, enable the Linux bridge and layer-2 population mechanism drivers

  In the [ml2] section, enable the port security extension driver

  In the [ml2_type_flat] section, set flat_networks = provider

  Tip: this sets the name of the flat (provider) network, i.e., what the VM-facing external network is called. The name is arbitrary, but it is referenced again when this network is mapped to a physical NIC and whenever networks are created later, so make sure the same name is used everywhere (see the sketch below).
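
  For example, when the provider network is eventually created, the same name appears in the --provider-physical-network option. A sketch of that later step, shown here only to illustrate the naming consistency:

openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider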

  In the [ml2_type_vxlan] section, configure the range of VXLAN network identifiers (VNIs)

  In the [securitygroup] section, enable ipset

   Final ml2_conf.ini configuration

[root@node01 ~]# grep -i ^"[a-z\[]" /etc/neutron/plugins/ml2/ml2_conf.ini

[DEFAULT]

[l2pop]

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = linuxbridge,l2population

extension_drivers = port_security

[ml2_type_flat]

flat_networks = provider

[ml2_type_geneve]

[ml2_type_gre]

[ml2_type_vlan]

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_ipset = true

[root@node01 ~]#

  Configure the Linux bridge agent

  Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini. In the [linux_bridge] section, map the provider network to the physical interface it is bridged to

  Tip: this maps the provider (VM-facing) network to the physical interface it is bridged to; make sure the network name here matches the name configured earlier. The name before the colon is the provider network name, and the name after the colon is the physical NIC to bridge onto.

  In the [vxlan] section, enable VXLAN, set the local IP address, and enable l2_population

  Tip: local_ip should be the controller node's management IP address (if the node has more than one address).

  In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver

   Final linuxbridge_agent.ini configuration

[root@node01 ~]# grep -i ^"[a-z\[]" /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[DEFAULT]

[agent]

[linux_bridge]

physical_interface_mappings = provider:ens33

[network_log]

[securitygroup]

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]

enable_vxlan = true

local_ip = 192.168.0.41

l2_population = true

[root@node01 ~]#

  Confirm that the br_netfilter kernel module is loaded; if it is not, load it and then configure the related kernel parameters

[root@node01 ~]# lsmod |grep br_netfilter

[root@node01 ~]# modprobe br_netfilter

[root@node01 ~]# lsmod |grep br_netfilter

br_netfilter 22209 0

bridge 136173 1 br_netfilter

[root@node01 ~]#
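
  Optionally, the module load can be made persistent across reboots via systemd's modules-load.d mechanism (the file name below is arbitrary):

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf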

  Configure the related kernel parameters by adding the following lines to /etc/sysctl.conf:

net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

[root@node01 ~]# sysctl -p

net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

[root@node01 ~]#

  Configure the L3 agent

  Edit /etc/neutron/l3_agent.ini. In the [DEFAULT] section, set the interface driver to linuxbridge

[DEFAULT]

interface_driver = linuxbridge

  Configure the DHCP agent

  Edit /etc/neutron/dhcp_agent.ini. In the [DEFAULT] section, set the interface driver to linuxbridge, configure the Dnsmasq DHCP driver, and enable isolated metadata

[DEFAULT]

interface_driver = linuxbridge

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true

  Configure the metadata agent

  Edit /etc/neutron/metadata_agent.ini. In the [DEFAULT] section, configure the metadata host and the shared secret

[DEFAULT]

nova_metadata_host = controller

metadata_proxy_shared_secret = METADATA_SECRET

  Tip: metadata_proxy_shared_secret sets the shared secret; the value can be randomly generated or any string you choose (one way to generate it is shown below).
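
  For example, a random secret can be generated with openssl and then pasted into both metadata_agent.ini and nova.conf:

openssl rand -hex 10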

  Configure the nova service to use neutron

  Edit /etc/nova/nova.conf. In the [neutron] section, configure access to the neutron service

[neutron]

url = http://controller:9696

auth_url = http://controller:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = neutron

service_metadata_proxy = true

metadata_proxy_shared_secret = METADATA_SECRET

  Tip: metadata_proxy_shared_secret here must match the secret configured for the metadata agent above.

  Symlink the ML2 configuration file to /etc/neutron/plugin.ini

[root@node01 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

[root@node01 ~]# ll /etc/neutron/

total 132

drwxr-xr-x 11 root root 260 Oct 31 00:03 conf.d

-rw-r----- 1 root neutron 10867 Oct 31 01:23 dhcp_agent.ini

-rw-r----- 1 root neutron 14466 Oct 31 01:23 l3_agent.ini

-rw-r----- 1 root neutron 11394 Oct 31 01:30 metadata_agent.ini

-rw-r----- 1 root neutron 72285 Oct 31 00:25 neutron.conf

lrwxrwxrwx 1 root root 37 Oct 31 01:36 plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini

drwxr-xr-x 3 root root 17 Oct 31 00:03 plugins

-rw-r----- 1 root neutron 12689 Feb 28 2020 policy.json

-rw-r--r-- 1 root root 1195 Feb 28 2020 rootwrap.conf

[root@node01 ~]#

  Populate (initialize) the neutron database

[root@node01 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \

> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

INFO [alembic.runtime.migration] Context impl MySQLImpl.

INFO [alembic.runtime.migration] Will assume non-transactional DDL.

Running upgrade for neutron ...

INFO [alembic.runtime.migration] Context impl MySQLImpl.

INFO [alembic.runtime.migration] Will assume non-transactional DDL.

INFO [alembic.runtime.migration] Running upgrade -> kilo

INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225

INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151

INFO [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf

INFO [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee

INFO [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f

INFO [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773

INFO [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592

INFO [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7

INFO [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79

INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051

INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136

INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59

INFO [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d

INFO [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a

INFO [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25

INFO [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee

INFO [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9

INFO [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4

INFO [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664

INFO [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5

INFO [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f

INFO [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821

INFO [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4

INFO [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81

INFO [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6

INFO [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532

INFO [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f

INFO [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a

INFO [alembic.runtime.migration] Running upgrade 0e66c5227a8a -> 45f8dd33480b

INFO [alembic.runtime.migration] Running upgrade 45f8dd33480b -> 5abc0278ca73

INFO [alembic.runtime.migration] Running upgrade 5abc0278ca73 -> d3435b514502

INFO [alembic.runtime.migration] Running upgrade d3435b514502 -> 30107ab6a3ee

INFO [alembic.runtime.migration] Running upgrade 30107ab6a3ee -> c415aab1c048

INFO [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4

INFO [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99

INFO [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada

INFO [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016

INFO [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3

INFO [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d

INFO [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d

INFO [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297

INFO [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c

INFO [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39

INFO [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b

INFO [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050

INFO [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9

INFO [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada

INFO [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc

INFO [alembic.runtime.migration] Running upgrade 4ffceebfcdc -> 7bbb25278f53

INFO [alembic.runtime.migration] Running upgrade 7bbb25278f53 -> 89ab9a816d70

INFO [alembic.runtime.migration] Running upgrade 89ab9a816d70 -> c879c5e1ee90

INFO [alembic.runtime.migration] Running upgrade c879c5e1ee90 -> 8fd3918ef6f4

INFO [alembic.runtime.migration] Running upgrade 8fd3918ef6f4 -> 4bcd4df1f426

INFO [alembic.runtime.migration] Running upgrade 4bcd4df1f426 -> b67e765a3524

INFO [alembic.runtime.migration] Running upgrade a963b38d82f4 -> 3d0e74aa7d37

INFO [alembic.runtime.migration] Running upgrade 3d0e74aa7d37 -> 030a959ceafa

INFO [alembic.runtime.migration] Running upgrade 030a959ceafa -> a5648cfeeadf

INFO [alembic.runtime.migration] Running upgrade a5648cfeeadf -> 0f5bef0f87d4

INFO [alembic.runtime.migration] Running upgrade 0f5bef0f87d4 -> 67daae611b6e

INFO [alembic.runtime.migration] Running upgrade 67daae611b6e -> 6b461a21bcfc

INFO [alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d

INFO [alembic.runtime.migration] Running upgrade 5cd92597d11d -> 929c968efe70

INFO [alembic.runtime.migration] Running upgrade 929c968efe70 -> a9c43481023c

INFO [alembic.runtime.migration] Running upgrade a9c43481023c -> 804a3c76314c

INFO [alembic.runtime.migration] Running upgrade 804a3c76314c -> 2b42d90729da

INFO [alembic.runtime.migration] Running upgrade 2b42d90729da -> 62c781cb6192

INFO [alembic.runtime.migration] Running upgrade 62c781cb6192 -> c8c222d42aa9

INFO [alembic.runtime.migration] Running upgrade c8c222d42aa9 -> 349b6fd605a6

INFO [alembic.runtime.migration] Running upgrade 349b6fd605a6 -> 7d32f979895f

INFO [alembic.runtime.migration] Running upgrade 7d32f979895f -> 594422d373ee

INFO [alembic.runtime.migration] Running upgrade 594422d373ee -> 61663558142c

INFO [alembic.runtime.migration] Running upgrade 61663558142c -> 867d39095bf4, port forwarding

INFO [alembic.runtime.migration] Running upgrade b67e765a3524 -> a84ccf28f06a

INFO [alembic.runtime.migration] Running upgrade a84ccf28f06a -> 7d9d8eeec6ad

INFO [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab

INFO [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0

INFO [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62

INFO [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353

INFO [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586

INFO [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d

OK

[root@node01 ~]#

  Verification: connect to the neutron database and confirm that tables were created.

MariaDB [(none)]> use neutron

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

Database changed

MariaDB [neutron]> show tables;

+-----------------------------------------+

| Tables_in_neutron |

+-----------------------------------------+

| address_scopes |

| agents |

| alembic_version |

| allowedaddresspairs |

| arista_provisioned_nets |

| arista_provisioned_tenants |

| arista_provisioned_vms |

| auto_allocated_topologies |

| bgp_peers |

| bgp_speaker_dragent_bindings |

| bgp_speaker_network_bindings |

| bgp_speaker_peer_bindings |

| bgp_speakers |

| brocadenetworks |

| brocadeports |

| cisco_csr_identifier_map |

| cisco_hosting_devices |

| cisco_ml2_apic_contracts |

| cisco_ml2_apic_host_links |

| cisco_ml2_apic_names |

| cisco_ml2_n1kv_network_bindings |

| cisco_ml2_n1kv_network_profiles |

| cisco_ml2_n1kv_policy_profiles |

| cisco_ml2_n1kv_port_bindings |

| cisco_ml2_n1kv_profile_bindings |

| cisco_ml2_n1kv_vlan_allocations |

| cisco_ml2_n1kv_vxlan_allocations |

| cisco_ml2_nexus_nve |

| cisco_ml2_nexusport_bindings |

| cisco_port_mappings |

| cisco_router_mappings |

| consistencyhashes |

| default_security_group |

| dnsnameservers |

| dvr_host_macs |

| externalnetworks |

| extradhcpopts |

| firewall_policies |

| firewall_rules |

| firewalls |

| flavors |

| flavorserviceprofilebindings |

| floatingipdnses |

| floatingips |

| ha_router_agent_port_bindings |

| ha_router_networks |

| ha_router_vrid_allocations |

| healthmonitors |

| ikepolicies |

| ipallocationpools |

| ipallocations |

| ipamallocationpools |

| ipamallocations |

| ipamsubnets |

| ipsec_site_connections |

| ipsecpeercidrs |

| ipsecpolicies |

| logs |

| lsn |

| lsn_port |

| maclearningstates |

| members |

| meteringlabelrules |

| meteringlabels |

| ml2_brocadenetworks |

| ml2_brocadeports |

| ml2_distributed_port_bindings |

| ml2_flat_allocations |

| ml2_geneve_allocations |

| ml2_geneve_endpoints |

| ml2_gre_allocations |

| ml2_gre_endpoints |

| ml2_nexus_vxlan_allocations |

| ml2_nexus_vxlan_mcast_groups |

| ml2_port_binding_levels |

| ml2_port_bindings |

| ml2_ucsm_port_profiles |

| ml2_vlan_allocations |

| ml2_vxlan_allocations |

| ml2_vxlan_endpoints |

| multi_provider_networks |

| networkconnections |

| networkdhcpagentbindings |

| networkdnsdomains |

| networkgatewaydevicereferences |

| networkgatewaydevices |

| networkgateways |

| networkqueuemappings |

| networkrbacs |

| networks |

| networksecuritybindings |

| networksegments |

| neutron_nsx_network_mappings |

| neutron_nsx_port_mappings |

| neutron_nsx_router_mappings |

| neutron_nsx_security_group_mappings |

| nexthops |

| nsxv_edge_dhcp_static_bindings |

| nsxv_edge_vnic_bindings |

| nsxv_firewall_rule_bindings |

| nsxv_internal_edges |

| nsxv_internal_networks |

| nsxv_port_index_mappings |

| nsxv_port_vnic_mappings |

| nsxv_router_bindings |

| nsxv_router_ext_attributes |

| nsxv_rule_mappings |

| nsxv_security_group_section_mappings |

| nsxv_spoofguard_policy_network_mappings |

| nsxv_tz_network_bindings |

| nsxv_vdr_dhcp_bindings |

| nuage_net_partition_router_mapping |

| nuage_net_partitions |

| nuage_provider_net_bindings |

| nuage_subnet_l2dom_mapping |

| poolloadbalanceragentbindings |

| poolmonitorassociations |

| pools |

| poolstatisticss |

| portbindingports |

| portdataplanestatuses |

| portdnses |

| portforwardings |

| portqueuemappings |

| ports |

| portsecuritybindings |

| providerresourceassociations |

| provisioningblocks |

| qos_bandwidth_limit_rules |

| qos_dscp_marking_rules |

| qos_fip_policy_bindings |

| qos_minimum_bandwidth_rules |

| qos_network_policy_bindings |

| qos_policies |

| qos_policies_default |

| qos_port_policy_bindings |

| qospolicyrbacs |

| qosqueues |

| quotas |

| quotausages |

| reservations |

| resourcedeltas |

| router_extra_attributes |

| routerl3agentbindings |

| routerports |

| routerroutes |

| routerrules |

| routers |

| securitygroupportbindings |

| securitygrouprules |

| securitygroups |

| segmenthostmappings |

| serviceprofiles |

| sessionpersistences |

| standardattributes |

| subnet_service_types |

| subnetpoolprefixes |

| subnetpools |

| subnetroutes |

| subnets |

| subports |

| tags |

| trunks |

| tz_network_bindings |

| vcns_router_bindings |

| vips |

| vpnservices |

+-----------------------------------------+

167 rows in set (0.00 sec)

MariaDB [neutron]>
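
  A quicker sanity check is to count the tables instead of listing them all (same credentials as the earlier verification):

mysql -uneutron -pneutron -hnode02 neutron -e 'show tables;' | wc -l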

  Restart the nova-api service

[root@node01 ~]# systemctl restart openstack-nova-api.service

[root@node01 ~]# ss -tnl

State Recv-Q Send-Q Local Address:Port Peer Address:Port

LISTEN 0 128 *:9292 *:*

LISTEN 0 128 *:22 *:*

LISTEN 0 100 127.0.0.1:25 *:*

LISTEN 0 100 *:6080 *:*

LISTEN 0 128 *:8774 *:*

LISTEN 0 128 *:8775 *:*

LISTEN 0 128 *:9191 *:*

LISTEN 0 128 :::80 :::*

LISTEN 0 128 :::22 :::*

LISTEN 0 100 ::1:25 :::*

LISTEN 0 128 :::5000 :::*

LISTEN 0 128 :::8778 :::*

[root@node01 ~]#

  Tip: after the restart, make sure the nova-api service is listening on ports 8774 and 8775.

  Start the neutron services and enable them to start at boot

[root@node01 ~]#  systemctl start neutron-server.service \

> neutron-linuxbridge-agent.service neutron-dhcp-agent.service \

> neutron-metadata-agent.service

[root@node01 ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.

Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.

Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.

Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.

[root@node01 ~]# ss -tnl

State Recv-Q Send-Q Local Address:Port Peer Address:Port

LISTEN 0 128 *:9292 *:*

LISTEN 0 128 *:22 *:*

LISTEN 0 100 127.0.0.1:25 *:*

LISTEN 0 128 *:9696 *:*

LISTEN 0 100 *:6080 *:*

LISTEN 0 128 *:8774 *:*

LISTEN 0 128 *:8775 *:*

LISTEN 0 128 *:9191 *:*

LISTEN 0 128 :::80 :::*

LISTEN 0 128 :::22 :::*

LISTEN 0 100 ::1:25 :::*

LISTEN 0 128 :::5000 :::*

LISTEN 0 128 :::8778 :::*

[root@node01 ~]#

  Tip: make sure port 9696 is listening.

  Since we chose self-service networks, we also need to start the L3 agent service and enable it at boot

[root@node01 ~]# systemctl start neutron-l3-agent.service

[root@node01 ~]# systemctl enable neutron-l3-agent.service

Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-l3-agent.service to /usr/lib/systemd/system/neutron-l3-agent.service.

[root@node01 ~]#

  At this point, the neutron service on the controller node is fully configured.

  3. Install and configure the neutron service on the compute node

   Install the neutron packages

[root@node03 ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y

  Edit /etc/neutron/neutron.conf. In the [DEFAULT] section, configure the RabbitMQ connection information and set the authentication strategy to keystone

  In the [keystone_authtoken] section, configure Keystone authentication

  In the [oslo_concurrency] section, configure the lock path

  Final neutron.conf configuration

[root@node03 ~]# grep -i ^"[a-z\[]" /etc/neutron/neutron.conf

[DEFAULT]

transport_url = rabbit://openstack:openstack123@node02

auth_strategy = keystone

[agent]

[cors]

[database]

[keystone_authtoken]

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000

memcached_servers = node02:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = neutron

[matchmaker_redis]

[nova]

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[quotas]

[ssl]

[root@node03 ~]#

  Configure the Linux bridge agent

  Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini. In the [linux_bridge] section, map the provider network to the physical interface it is bridged to

  Tip: the name before the colon is the provider network name and must match the name configured on the controller node; the name after the colon is the physical interface to bridge onto.

  In the [vxlan] section, enable VXLAN, set the local IP address (the compute node's management IP), and enable l2_population

  In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver

  Final linuxbridge_agent.ini configuration

[root@node03 ~]# grep -i ^"[a-z\[]" /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[DEFAULT]

[agent]

[linux_bridge]

physical_interface_mappings = provider:ens33

[network_log]

[securitygroup]

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]

enable_vxlan = true

local_ip = 192.168.0.43

l2_population = true

[root@node03 ~]#

  Confirm that the br_netfilter module is loaded; if it is not, load it

[root@node03 ~]# lsmod |grep br_netfilter

[root@node03 ~]# modprobe br_netfilter

[root@node03 ~]# lsmod |grep br_netfilter

br_netfilter 22209 0

bridge 136173 1 br_netfilter

[root@node03 ~]#

  Edit /etc/sysctl.conf and configure the kernel parameters:

net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

[root@node03 ~]# sysctl -p

net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

[root@node03 ~]#

  Configure the nova service to use neutron

  Edit /etc/nova/nova.conf. In the [neutron] section, configure access to the neutron service

[neutron]

url = http://controller:9696

auth_url = http://controller:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = neutron

  Restart the nova-compute service

[root@node03 ~]# systemctl restart openstack-nova-compute.service

[root@node03 ~]#

  Start the neutron-linuxbridge-agent service and enable it at boot

[root@node03 ~]# systemctl start neutron-linuxbridge-agent.service

[root@node03 ~]# systemctl enable neutron-linuxbridge-agent.service

Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.

[root@node03 ~]#

  At this point, the networking service on the compute node is installed and configured.

  Verification: on the controller node, source the admin credentials and list the loaded extensions to verify that the neutron-server process started successfully

[root@node01 ~]# openstack extension list --network

+-----------------------------------------------------------------------------------------------------------------------------------------+--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+

| Name | Alias | Description |

+-----------------------------------------------------------------------------------------------------------------------------------------+--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+

| Default Subnetpools | default-subnetpools | Provides ability to mark and use a subnetpool as the default. |

| Availability Zone | availability_zone | The availability zone extension. |

| Network Availability Zone | network_availability_zone | Availability zone support for network. |

| Auto Allocated Topology Services | auto-allocated-topology | Auto Allocated Topology Services. |

| Neutron L3 Configurable external gateway mode | ext-gw-mode | Extension of the router abstraction for specifying whether SNAT should occur on the external gateway |

| Port Binding | binding | Expose port bindings of a virtual port to external application |

| agent | agent | The agent management extension. |

| Subnet Allocation | subnet_allocation | Enables allocation of subnets from a subnet pool |

| L3 Agent Scheduler | l3_agent_scheduler | Schedule routers among l3 agents |

| Neutron external network | external-net | Adds external network attribute to network resource. |

| Tag support for resources with standard attribute: subnet, trunk, router, network, policy, subnetpool, port, security_group, floatingip | standard-attr-tag | Enables to set tag on resources with standard attribute. |

| Neutron Service Flavors | flavors | Flavor specification for Neutron advanced services. |

| Network MTU | net-mtu | Provides MTU attribute for a network resource. |

| Network IP Availability | network-ip-availability | Provides IP availability data for each network and subnet. |

| Quota management support | quotas | Expose functions for quotas management per tenant |

| If-Match constraints based on revision_number | revision-if-match | Extension indicating that If-Match based on revision_number is supported. |

| Availability Zone Filter Extension | availability_zone_filter | Add filter parameters to AvailabilityZone resource |

| HA Router extension | l3-ha | Adds HA capability to routers. |

| Filter parameters validation | filter-validation | Provides validation on filter parameters. |

| Multi Provider Network | multi-provider | Expose mapping of virtual networks to multiple physical networks |

| Quota details management support | quota_details | Expose functions for quotas usage statistics per project |

| Address scope | address-scope | Address scopes extension. |

| Neutron Extra Route | extraroute | Extra routes configuration for L3 router |

| Network MTU (writable) | net-mtu-writable | Provides a writable MTU attribute for a network resource. |

| Empty String Filtering Extension | empty-string-filtering | Allow filtering by attributes with empty string value |

| Subnet service types | subnet-service-types | Provides ability to set the subnet service_types field |

| Neutron Port MAC address regenerate | port-mac-address-regenerate | Network port MAC address regenerate |

| Resource timestamps | standard-attr-timestamp | Adds created_at and updated_at fields to all Neutron resources that have Neutron standard attributes. |

| Provider Network | provider | Expose mapping of virtual networks to physical networks |

| Neutron Service Type Management | service-type | API for retrieving service providers for Neutron advanced services |

| Router Flavor Extension | l3-flavors | Flavor support for routers. |

| Port Security | port-security | Provides port security |

| Neutron Extra DHCP options | extra_dhcp_opt | Extra options configuration for DHCP. For example PXE boot options to DHCP clients can be specified (e.g. tftp-server, server-ip-address, bootfile-name) |

| Port filtering on security groups | port-security-groups-filtering | Provides security groups filtering when listing ports |

| Resource revision numbers | standard-attr-revisions | This extension will display the revision number of neutron resources. |

| Pagination support | pagination | Extension that indicates that pagination is enabled. |

| Sorting support | sorting | Extension that indicates that sorting is enabled. |

| security-group | security-group | The security groups extension. |

| DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among dhcp agents |

| Floating IP Port Details Extension | fip-port-details | Add port_details attribute to Floating IP resource |

| Router Availability Zone | router_availability_zone | Availability zone support for router. |

| RBAC Policies | rbac-policies | Allows creation and modification of policies that control tenant access to resources. |

| standard-attr-description | standard-attr-description | Extension to add descriptions to standard attributes |

| IP address substring filtering | ip-substring-filtering | Provides IP address substring filtering when listing ports |

| Neutron L3 Router | router | Router abstraction for basic L3 forwarding between L2 Neutron networks and access to external networks via a NAT gateway. |

| Allowed Address Pairs | allowed-address-pairs | Provides allowed address pairs |

| Port Bindings Extended | binding-extended | Expose port bindings of a virtual port to external application |

| project_id field enabled | project-id | Extension that indicates that project_id field is enabled. |

| Distributed Virtual Router | dvr | Enables configuration of Distributed Virtual Routers. |

+-----------------------------------------------------------------------------------------------------------------------------------------+--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+

[root@node01 ~]#

  Tip: if the output above appears, the neutron server processes are up and the service is healthy.

  Further verification: list the network agents and check that every agent is up.

[root@node01 ~]# source admin.sh

[root@node01 ~]# openstack network agent list

+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+

| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |

+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+

| 749a9639-e85c-4cfd-a936-6c379ee85aac | L3 agent | node01.test.org | nova | :-) | UP | neutron-l3-agent |

| ab400ecf-0488-4710-87ff-9405e84ba444 | Linux bridge agent | node01.test.org | None | :-) | UP | neutron-linuxbridge-agent |

| b08152c8-c3ef-4230-ac50-c7ab445dade2 | DHCP agent | node01.test.org | nova | :-) | UP | neutron-dhcp-agent |

| bea011e8-4302-44c5-9b91-56fbc282e990 | Metadata agent | node01.test.org | None | :-) | UP | neutron-metadata-agent |

| ec25d21e-f197-4eb1-95aa-bd9ec0d1d43f | Linux bridge agent | node03.test.org | None | :-) | UP | neutron-linuxbridge-agent |

+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+

[root@node01 ~]#

  Tip: node01 shows four agents and node03 shows one, all alive and in the UP state, which confirms that the neutron agents are configured correctly and running normally.

  OK, that completes the installation, configuration, and verification of the neutron networking service on the controller and compute nodes.
