Configure GigaVUE Fabric Components in OpenStack
This section provides step-by-step information on how to register GigaVUE fabric components using OpenStack or a configuration file.
Keep in mind the following when deploying the fabric components using generic mode:
- Ensure that the Traffic Acquisition Tunnel MTU is set to the default value of 1450. To edit the Traffic Acquisition Tunnel MTU, select the monitoring domain and click on the Edit Monitoring Domain option. Enter the Traffic Acquisition Tunnel MTU value and click Save.
- Before deploying the monitoring session, ensure that the appropriate Traffic Acquisition Tunnel MTU value is set. Otherwise, the monitoring session must be un-deployed and deployed again.
- You can also create a monitoring domain under Third Party Orchestration and provide the monitoring domain name and the connection name as groupName and subGroupName in the registration data. Refer to Create Monitoring Domain for more detailed information on how to create a monitoring domain under third party orchestration.
- User and Password provided in the registration data must be configured in the User Management page. Refer to Configure Role-Based Access for Third Party Orchestration for more detailed information. Enter the UserName and Password created in the Add Users Section.
In your OpenStack Dashboard, you can configure the following GigaVUE fabric components:
- G-vTAP Controller
- G-vTAP Agent
- GigaVUE V Series Node and V Series Proxy
Configure G-vTAP Controller in OpenStack
You can configure more than one G-vTAP Controller in a monitoring domain.
To register G-vTAP Controller in OpenStack, use any one of the following methods:
Register G-vTAP Controller during Instance Launch
In your OpenStack dashboard, follow the steps below to launch the G-vTAP Controller and register it using the Customization Script:
- On the Instance page of OpenStack dashboard, click Launch instance. The Launch Instance wizard appears. For detailed information, refer to Launch and Manage Instances topic in OpenStack Documentation.
- On the Configuration tab, enter the Customization Script as text in the following format and deploy the instance. The G-vTAP Controller uses this registration data to generate the configuration file (/etc/gigamon-cloud.conf) that it uses to register with GigaVUE-FM.
#cloud-config
write_files:
  - path: /etc/gigamon-cloud.conf
    owner: root:root
    permissions: '0644'
    content: |
      Registration:
        groupName: <Monitoring Domain Name>
        subGroupName: <Connection Name>
        user: <Username>
        password: <Password>
        remoteIP: <IP address of the GigaVUE-FM>
        remotePort: 443
The G-vTAP Controller deployed in OpenStack appears on the Monitoring Domain page of GigaVUE-FM.
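If you prefer the OpenStack CLI to the dashboard, the same Customization Script can be supplied at launch time through the --user-data option of openstack server create. The sketch below is only an illustration; the image, flavor, network, and file names in angle brackets are placeholders for your environment, not values from this guide.
$ openstack server create \
    --image <G-vTAP Controller image> \
    --flavor <flavor> \
    --network <management network> \
    --user-data <path to cloud-config file> \
    <instance name>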
Register G-vTAP Controller after Instance Launch
Note: You can configure more than one G-vTAP Controller for a G-vTAP Agent, so that if one G-vTAP Controller goes down, the G-vTAP Agent registration will happen through another Controller that is active.
To register the G-vTAP Controller using a configuration file after the instance has been launched, follow the steps given below:
- Log in to the G-vTAP Controller.
- Create a local configuration file (/etc/gigamon-cloud.conf) and enter the following registration data.
Registration:
   groupName: <Monitoring Domain Name>
   subGroupName: <Connection Name>
   user: <Username>
   password: <Password>
   remoteIP: <IP address of the GigaVUE-FM>
   remotePort: 443
- Restart the G-vTAP Controller service.
$ sudo service gvtap-cntlr restart
The deployed G-vTAP Controller registers with GigaVUE-FM. After successful registration, the G-vTAP Controller sends heartbeat messages to GigaVUE-FM every 30 seconds. If one heartbeat is missed, the fabric node status appears as 'Unhealthy'. If more than five heartbeats fail to reach GigaVUE-FM, GigaVUE-FM tries to reach the G-vTAP Controller; if that also fails, GigaVUE-FM unregisters the G-vTAP Controller and removes it from GigaVUE-FM.
Note: When you deploy V Series nodes or G-vTAP Controllers using 3rd party orchestration, you cannot delete the monitoring domain without unregistering the V Series nodes or G-vTAP Controllers.
Configure G-vTAP Agent in OpenStack
Note: You can configure more than one G-vTAP Controller for a G-vTAP Agent, so that if one G-vTAP Controller goes down, the G-vTAP Agent registration will happen through another Controller that is active.
To register G-vTAP Agent using a configuration file:
- Install the G-vTAP Agent in the Linux or Windows platform. For detailed instructions, refer to Linux G-vTAP Agent Installation and Windows G-vTAP Agent Installation.
- Log in to the G-vTAP Agent.
- Edit the local configuration file and enter the registration data (a sketch is shown after these steps).
- Restart the G-vTAP Agent service.
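The registration data for the G-vTAP Agent follows the same pattern as for the G-vTAP Controller. The sketch below is an assumption to be verified against the Linux and Windows G-vTAP Agent installation guides referenced above: because the G-vTAP Agent registers through a G-vTAP Controller, remoteIP points to the Controller IP address(es), and the port shown here may differ in your release.
Registration:
   groupName: <Monitoring Domain Name>
   subGroupName: <Connection Name>
   user: <Username>
   password: <Password>
   remoteIP: <IP address of G-vTAP Controller 1>, <IP address of G-vTAP Controller 2>
   remotePort: 8891
On Linux, restart the agent so it picks up the new configuration (assumed service name); on Windows, restart the G-vTAP Agent service from the Services console.
$ sudo service gvtap-agent restart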
The deployed G-vTAP Agent registers with GigaVUE-FM through the G-vTAP Controller. After successful registration, the G-vTAP Agent sends heartbeat messages to GigaVUE-FM every 30 seconds. If one heartbeat is missed, the G-vTAP Agent status appears as 'Unhealthy'. If more than five heartbeats fail to reach GigaVUE-FM, GigaVUE-FM tries to reach the G-vTAP Agent; if that also fails, GigaVUE-FM unregisters the G-vTAP Agent and removes it from GigaVUE-FM.
Configure GigaVUE V Series Nodes and V Series Proxy in OpenStack
Note: It is not mandatory to register GigaVUE V Series Nodes via the V Series Proxy. However, if a large number of nodes are connected to GigaVUE-FM, or if you do not wish to reveal the IP addresses of the nodes, you can register your nodes using the GigaVUE V Series Proxy. In this case, GigaVUE-FM communicates with the GigaVUE V Series Proxy to manage the GigaVUE V Series Nodes.
To register GigaVUE V Series Node and GigaVUE V Series Proxy in OpenStack, use any one of the following methods:
Register V Series Nodes or V Series Proxy during Instance Launch
To register V Series Nodes or Proxy using the Customization Script in the OpenStack GUI:
- On the Instance page of OpenStack dashboard, click Launch instance. The Launch Instance wizard appears. For detailed information, refer to Launch and Manage Instances topic in OpenStack Documentation.
- On the Configuration tab, enter the Customization Script as text in the following format and deploy the instance. The V Series Node or V Series Proxy uses this customization script to generate the configuration file (/etc/gigamon-cloud.conf) that it uses to register with GigaVUE-FM.
#cloud-config
write_files:
  - path: /etc/gigamon-cloud.conf
    owner: root:root
    permissions: '0644'
    content: |
      Registration:
        groupName: <Monitoring Domain Name>
        subGroupName: <Connection Name>
        user: <Username>
        password: <Password>
        remoteIP: <IP address of the GigaVUE-FM>
        remotePort: 443
- You can register your GigaVUE V Series Nodes directly with GigaVUE-FM, or you can use the V Series Proxy to register them with GigaVUE-FM. To register GigaVUE V Series Nodes directly, enter the remotePort value as 443 and remoteIP as <IP address of the GigaVUE-FM>. To deploy GigaVUE V Series Nodes using the V Series Proxy, enter the remotePort value as 8891 and remoteIP as <IP address of the Proxy> (see the sketch after this list).
- User and Password must be configured in the User Management page. Refer to Configure Role-Based Access for Third Party Orchestration for more detailed information. Enter the UserName and Password created in the Add Users Section.
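For the proxy-based registration described above, only the remoteIP and remotePort values change; the content section of the Customization Script then looks like this:
Registration:
   groupName: <Monitoring Domain Name>
   subGroupName: <Connection Name>
   user: <Username>
   password: <Password>
   remoteIP: <IP address of the Proxy>
   remotePort: 8891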
Register V Series Node or V Series Proxy after Instance Launch
To register V Series node or proxy using a configuration file:
- Log in to the V Series node or proxy.
- Edit the local configuration file (/etc/gigamon-cloud.conf) and enter the following registration data.
Registration:
   groupName: <Monitoring Domain Name>
   subGroupName: <Connection Name>
   user: <Username>
   password: <Password>
   remoteIP: <IP address of the GigaVUE-FM>
   remotePort: 443
- Restart the V Series node or proxy service.
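The commands below are a sketch only; the service names are assumptions that can vary by image version, so confirm them against the documentation shipped with your GigaVUE V Series image.
On a GigaVUE V Series Node (assumed service name):
$ sudo service vseries-node restart
On a GigaVUE V Series Proxy (assumed service name):
$ sudo service vps restart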
Source: <https://docs.gigamon.com/doclib62/Content/GV-Cloud-third-party/Deploy_nodes_openstack.html>
For reference, the following is a sample gigamon-cloud.conf captured from a registered G-vTAP Controller. Note that the credentials appear here as a single Base64-encoded auth entry rather than separate user and password fields.
ubuntu@vtap-ctrl:/etc$ more gigamon-cloud.conf
Registration:
   groupName: kt
   subGroupName: kt
   auth: Basic a3Q6Z2lnYW1vbjEyM0EhIQ==
   remoteIP: 172.25.0.17
   remotePort: 443
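The auth value in this sample appears to be an HTTP Basic credential, that is, the Base64 encoding of <Username>:<Password>; this is an observation about the capture, not documented behavior. Assuming that format, an equivalent value can be generated as follows:
$ echo -n '<Username>:<Password>' | base64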