This is the ninth lesson in the OpenStack cloud platform series, covering how to add compute nodes to a deployment that integrates Ceph cloud storage. Since a controller-node cluster was built in lesson eight, pay close attention to the url/uri addresses in the configuration files when expanding the compute nodes.
Before adding a compute node, first check that the following items are configured correctly; the detailed steps are covered in Lesson 1 of this series (environment configuration).
- Disable the system firewall (SELinux, firewalld, iptables, etc.)
- Configure the OpenStack yum repository
- Configure time synchronization on the compute node
- Install the OpenStack components
- Configure host name resolution (if you skip this, change controller in the configuration files to the VIP address; a sample entry is sketched after this list)
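For the host-resolution entry, a minimal sketch of /etc/hosts on the new node, assuming the controller cluster's VIP is 10.10.100.100 (a hypothetical address, substitute your own VIP):

# /etc/hosts on the new compute node; 10.10.100.100 is a placeholder VIP
10.10.100.100   controller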
The components that need to be installed on the compute node are:
- centos-release-openstack-train
- python-openstackclient
- openstack-neutron-linuxbridge
- openstack-nova-compute
- ceph (for Ceph integration)
- ceph-radosgw (for Ceph integration)
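Since centos-release-openstack-train provides the repository the remaining packages come from, the installation can be sketched as two yum steps (package names exactly as listed above):

yum install -y centos-release-openstack-train
yum install -y python-openstackclient openstack-neutron-linuxbridge openstack-nova-compute ceph ceph-radosgw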
After installing the components above in order, configure the services on the new compute node.
Configure neutron on the compute node
Configure neutron
Edit /etc/neutron/neutron.conf:
[DEFAULT]
bind_host = 10.10.100.152
auth_strategy = keystone
transport_url = rabbit://openstack:Openstack123@controller:5672

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = Neutron123

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
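If you prefer to script these edits rather than editing by hand, the crudini tool (an assumption that it is available on the node; it ships in the crudini package) can set individual keys, for example:

crudini --set /etc/neutron/neutron.conf DEFAULT bind_host 10.10.100.152
crudini --set /etc/neutron/neutron.conf keystone_authtoken password Neutron123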
Configure the linuxbridge agent
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = provider:ens34

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]
# If you are not using VXLAN, comment out the following three options, or set enable_vxlan = False
enable_vxlan = true
local_ip = 10.10.100.152
l2_population = true
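Before starting the agent, it is worth confirming that the provider interface named in the mapping (ens34 here) actually exists on the new node:

ip link show ens34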
Configure nova
Edit /etc/nova/nova.conf:
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
region_name = RegionOne
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = Neutron123
If bridge filtering was configured earlier, the new node likewise needs its operating system kernel to support the bridge filter (follow the method used on the controller node); a sketch of that procedure follows.
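A minimal sketch, assuming the standard br_netfilter setup from the OpenStack install guide (adjust if the controller was configured differently):

modprobe br_netfilter
cat >> /etc/sysctl.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p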
Start the service
systemctl enable neutron-linuxbridge-agent
systemctl start neutron-linuxbridge-agent
Check the result on the controller node
source /opt/scripts/admin
openstack network agent list
Configure nova on the compute node
Configure nova
Edit /etc/nova/nova.conf:
[DEFAULT]
my_ip = 10.10.100.152
enabled_apis = osapi_compute,metadata
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:Openstack123@controller:5672

[api]
auth_strategy = keystone

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = Nova123

[libvirt]
# If the CPU does not support hardware virtualization
#virt_type = qemu

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
auth_url = http://controller:35357/v3
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = Placement123

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://$my_ip:6080/vnc_auto.html
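To decide whether the commented virt_type = qemu line should be enabled, a common check is to count the hardware virtualization flags on the new node; if it returns 0, the CPU lacks support and qemu should be used:

egrep -c '(vmx|svm)' /proc/cpuinfo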
Start the services
systemctl enable libvirtd openstack-nova-compute
systemctl start libvirtd openstack-nova-compute
Check the result on the controller node
openstack compute service list --service nova-compute
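If the new node appears in the service list but instances will not schedule to it, the cell database may not have discovered the host yet; on the controller, the standard cell_v2 discovery (assuming periodic discovery is not enabled in your deployment) registers it:

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova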
Configure ceph on the compute node
For how to deploy the Ceph cluster itself, see the separate article on deploying distributed cloud storage with Ceph.
Configure user authentication
Sync the key by running the following command on the ceph-deploy node:
ceph auth get-key client.cinder | ssh 10.10.100.152 tee client.cinder.key
Copy the following files from the original compute node (10.10.100.151) to the new node:
scp /root/secret.xml root@10.10.100.152:/root
scp /etc/ceph/ceph.client.cinder.keyring root@10.10.100.152:/etc/ceph/
Alternatively, regenerate the secret.xml file in the root home directory on the new compute node; the uuid must be the same across nodes:
cat > secret.xml << EOF
<secret ephemeral='no' private='no'>
  <uuid>d9de3482-448c-4fc4-8ccc-f32e00b8764e</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
Then run the following commands on the new compute node (mind the file paths):
virsh secret-define --file secret.xml
virsh secret-set-value --secret d9de3482-448c-4fc4-8ccc-f32e00b8764e --base64 $(cat client.cinder.key)
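To confirm the secret was stored correctly, it can be listed and read back with the standard virsh subcommands; the value should match the cinder key synced earlier:

virsh secret-list
virsh secret-get-value --secret d9de3482-448c-4fc4-8ccc-f32e00b8764e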
Configure nova-compute
Edit /etc/nova/nova.conf:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = d9de3482-448c-4fc4-8ccc-f32e00b8764e
Once configured, restart nova-compute:
systemctl restart openstack-nova-compute
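As a final sanity check, the cinder key synced earlier should be able to list the vms images pool from the new node (pool and user names as configured above):

rbd ls vms --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring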