
This was an event our company sponsored. We installed our cloud vFirewall (vTG) and vIPS (vAIPS) on OpenStack and ran a demo.
I listened to sessions in the morning, then just stood at the booth all afternoon. My legs ached, my back ached...


Commands used to install the default packages required and to create the user for installing the stack.

=========================================================================================        

    1. vi /etc/netplan/......yaml  ===> Modify your NIC settings

    3  sudo add-apt-repository universe

    4  sudo apt install -y net-tools python3-pip socat python3-dev

    9  sudo reboot

   10  sudo apt update

   11  sudo apt upgrade

   12  ifconfig

   13  sudo useradd -s /bin/bash -d /opt/stack -m stack

   14  echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack

   15  sudo su - stack

Commands used to download devstack packages and add local.conf.

===============================================================

    1  git clone https://git.openstack.org/openstack-dev/devstack

    2  cd devstack/

    3  vi local.conf                    ====>  Refer to the local.conf file below

    4  ./stack.sh                   ===> Runs the OpenStack installation

Commands used to add network configurations:

============================================

 

   12  source admin-openrc.sh

   13  neutron net-create --provider:network_type flat --provider:physical_network public --router:external --shared public

   14  neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool start=7.7.101.101,end=7.7.101.200 --gateway=7.7.101.254 public 7.7.101.0/24

          neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool start=10.10.10.101,end=10.10.10.200 --gateway=10.10.10.254 public 10.10.10.0/24

   15  neutron net-create mgmt

   16  neutron subnet-create --name mgmt_subnet --gateway=192.168.89.1 mgmt 192.168.89.0/24

   17  neutron router-create router1

   18  neutron router-interface-add router1 mgmt_subnet

   19  neutron router-gateway-set router1 public
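Before running the subnet-create commands above, it can help to sanity-check that the allocation pool and gateway actually fall inside the provider CIDR. A quick sketch with Python's standard ipaddress module, using the 7.7.101.0/24 values from the example:

```python
import ipaddress

# Provider network and subnet-create parameters from the example above
net = ipaddress.ip_network("7.7.101.0/24")
pool_start = ipaddress.ip_address("7.7.101.101")
pool_end = ipaddress.ip_address("7.7.101.200")
gateway = ipaddress.ip_address("7.7.101.254")

# All three must be host addresses inside the /24
for addr in (pool_start, pool_end, gateway):
    assert addr in net, f"{addr} is outside {net}"

# The gateway should not sit inside the allocation pool
assert not (pool_start <= gateway <= pool_end)
print("subnet parameters look consistent")
```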

 

Local.conf

==========

 

stack@gigamon:~/devstack$ cat local.conf

[[local|localrc]]

ADMIN_PASSWORD=abcdefg

HOST_IP=10.10.10.100

SERVICE_HOST=$HOST_IP

MYSQL_HOST=$HOST_IP

RABBIT_HOST=$HOST_IP

GLANCE_HOSTPORT=10.10.10.100:9292

#GLANCE_LIMIT_IMAGE_SIZE_TOTAL=32768

GLANCE_LIMIT_IMAGE_SIZE_TOTAL=102400

ADMIN_PASSWORD=$ADMIN_PASSWORD

SERVICE_TOKEN=$ADMIN_PASSWORD

DATABASE_PASSWORD=$ADMIN_PASSWORD

RABBIT_PASSWORD=$ADMIN_PASSWORD

SERVICE_PASSWORD=$ADMIN_PASSWORD

ENABLE_HTTPD_MOD_WSGI_SERVICES=True

KEYSTONE_USE_MOD_WSGI=True

## Neutron options

Q_USE_SECGROUP=True

PUBLIC_INTERFACE=enx00e04e3bc05f

# Open vSwitch provider networking configuration

Q_USE_PROVIDERNET_FOR_PUBLIC=True

OVS_PHYSICAL_BRIDGE=br-ex

PUBLIC_BRIDGE=br-ex

OVS_BRIDGE_MAPPINGS=public:br-ex

LOGFILE=$DEST/logs/stack.sh.log

VERBOSE=True

ENABLE_DEBUG_LOG_LEVEL=True

ENABLE_VERBOSE_LOG_LEVEL=True

GIT_BASE=${GIT_BASE:-https://git.openstack.org}

 

MULTI_HOST=1

 

[[post-config|$NOVA_CONF]]

[DEFAULT]

firewall_driver=nova.virt.firewall.NoopFirewallDriver

novncproxy_host=0.0.0.0

novncproxy_port=6080

scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter

#[libvirt]

#live_migration_uri = qemu+ssh://stack@%s/system

##cpu_mode = none

#cpu_mode = host-passthrough

#virt_type = kvm

 

This is your host IP address: 7.7.101.2

This is your host IPv6 address: ::1

Horizon is now available at http://7.7.101.2/dashboard

Keystone is serving at http://7.7.101.2/identity/

The default users are: admin and demo

The password: gigamon

Services are running under systemd unit files.

For more information see:

https://docs.openstack.org/devstack/latest/systemd.html

DevStack Version: 2023.1

Change: 48af5d4b1bf5332c879ee52fb4686874b212697f Make rockylinux job non-voting 2023-02-14 17:11:24 +0100

OS Version: Ubuntu 20.04 focal

 

Nova.conf & Nova-cpu.conf

=========================

[libvirt]

live_migration_uri = qemu+ssh://stack@%s/system

#cpu_model = Nehalem

#cpu_mode = custom

cpu_mode = host-model

cpu_model_extra_flags = vmx

virt_type = kvm
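The cpu_model_extra_flags = vmx line above exposes the VT-x flag to guests, which is what nested KVM needs. Whether the host CPU itself advertises vmx can be checked by scanning /proc/cpuinfo; a minimal Linux-only sketch (the helper name is ours):

```python
def host_has_vmx(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line in /proc/cpuinfo output lists vmx."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and " vmx" in f" {line.split(':', 1)[1]} ":
            return True
    return False

# Sample /proc/cpuinfo fragments for illustration
sample = "processor : 0\nflags : fpu vme de vmx ssse3\n"
assert host_has_vmx(sample)
assert not host_has_vmx("flags : fpu vme de\n")
```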

 

glance usage

 

V-Series Image Settings

========================

kt@openstack:~$ openstack image set --property hw_vif_multiqueue_enabled=true b0181c20-d192-4006-b681-09fd2df65c5d

kt@openstack:~$ openstack image show b0181c20-d192-4006-b681-09fd2df65c5d

 

Next step: create a flavor for V-Series

=====================================

Configure flavor for V-series settings

=======================================

 (?)openstack flavor set vseries --property dpdk=true --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB --property hw:emulator_threads_policy=isolate

FM SSH credentials: admin/openstack123A!!

==============================

Command to get the default FM GUI password:  wget -q -O - http://169.254.169.254/latest/meta-data/instance-id

The above can be used for the first-time FM login.

FM http credentials: admin/openstack123A!!

==================================================

If you're not using a DNS server, edit the file "/etc/hosts" and add the OpenStack server IP.

This will help resolve the URL during monitoring domain creation.

G-vTAP Agent

===================================================

download files

- strongSwan TAR Files

- gvtap-agent_xxx.rpm

- gvtap.te file

 

# checkmodule -M -m -o gvtap.mod gvtap.te

# semodule_package -o gvtap.pp -m gvtap.mod

# semodule -i gvtap.pp

# yum install python3

# yum install python-urllib3

# yum install iproute-tc

# pip3 install urllib3

# pip3 install requests

# pip3 install netifaces

 

https://www.tecmint.com/disable-selinux-on-centos-8/

https://www.psychz.net/client/question/ko/turn-off-firewall-centos-7.html

 

 

# rpm -ivh gvtap-agent_xxx.rpm

# vi /etc/gvtap/gvtap-agent.conf

  eth0 mirror-src-ingress mirror-src-egress mirror-dst

# /etc/init.d/gvtap-agent restart

# tar -xvzpf strongswan-xxx.tar.gz

# cd strongswan-xxx

# sh ./swan-install.sh

 

 

 

[root@centos1 ~]# setenforce 0

[root@centos1 ~]# setenforce Permissive

[root@centos1 ~]# sestatus

SELinux status:                 enabled

SELinuxfs mount:                /sys/fs/selinux

SELinux root directory:         /etc/selinux

Loaded policy name:             targeted

Current mode:                   permissive

Mode from config file:          enforcing

Policy MLS status:              enabled

Policy deny_unknown status:     allowed

Max kernel policy version:      31

[root@centos1 ~]#

Tools VXLAN setup

ip link add vxlan199 type vxlan id 1005 dev eth0 dstport 4789

sudo ip link set vxlan199 up

tcpdump -nvi vxlan199
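For reference when reading the tcpdump output above: a VXLAN packet carries an 8-byte header in which the VNI (1005 in the ip link command) occupies the upper 24 bits of the second 32-bit word (RFC 7348). A minimal sketch of encoding and decoding it:

```python
import struct

VXLAN_FLAG_VNI = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given VNI."""
    return struct.pack("!II", VXLAN_FLAG_VNI << 24, vni << 8)

def vxlan_vni(header: bytes) -> int:
    """Extract the VNI from an 8-byte VXLAN header."""
    return struct.unpack("!I", header[4:8])[0] >> 8

hdr = vxlan_header(1005)  # VNI from the ip link example above
assert len(hdr) == 8
assert vxlan_vni(hdr) == 1005
```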

 

Tools L2GRE setup

ip link add name gre1 type gretap local 10.0.0.2 remote 8.8.8.8 key 1234

ip link set gre1 up

sudo gvtapl mirror-list

 

V Series
apiv /stats
apiv /stats/teps

Logs under /var/log/

sudo ovs-vsctl del-port vxlan0
sudo ovs-vsctl del-port vxlan1

sudo ovs-tcpdump -i tapd3eaa48f-ba

=========================================================

Use ip from iproute2 (note that you must also specify the prefix length):

ip addr del 10.22.30.44/16 dev eth0

To remove all addresses (in case you have multiple):

ip addr flush dev eth0
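The prefix length matters because the kernel identifies the address to delete as address/prefix. Python's ipaddress module shows what the 10.22.30.44/16 above actually denotes (a quick sketch):

```python
import ipaddress

iface = ipaddress.ip_interface("10.22.30.44/16")
assert str(iface.ip) == "10.22.30.44"          # the host address
assert str(iface.network) == "10.22.0.0/16"    # the network it belongs to
assert str(iface.netmask) == "255.255.0.0"     # the /16 as a netmask
```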


dnf install libguestfs-tools

virt-customize -a CentOS-7-x86_64-GenericCloud-2111.qcow2 --root-password password:DDYrTXJZTJldOqimb68ZK5KCmRpbdBOe

[ 0.0] Examining the guest ...

[ 6.4] Setting a random seed

[ 6.4] Setting passwords

[ 7.5] Finishing off

 

Now able to log in to the new guest as root with the password set above.
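A throwaway root password like the one passed to virt-customize above can be generated with Python's secrets module (a sketch; the length and alphabet are arbitrary choices):

```python
import secrets
import string

def random_password(length: int = 32) -> str:
    """Generate a random alphanumeric password, e.g. for virt-customize --root-password."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = random_password()
assert len(pw) == 32
assert pw.isalnum()
```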


Configure GigaVUE Fabric Components in OpenStack (gigamon.com)


Configure GigaVUE Fabric Components in OpenStack

This section provides step-by-step information on how to register GigaVUE fabric components using OpenStack or a configuration file.

Keep in mind the following when deploying the fabric components using generic mode:

  • Ensure that the Traffic Acquisition Tunnel MTU is set to the default value of 1450. To edit the Traffic Acquisition Tunnel MTU, select the monitoring domain and click on the Edit Monitoring Domain option. Enter the Traffic Acquisition Tunnel MTU value and click Save.
  • Before deploying the monitoring session ensure that the appropriate Traffic Acquisition Tunnel MTU value is set. Otherwise, the monitoring session must be un-deployed and deployed again.
  • You can also create a monitoring domain under Third Party Orchestration and provide the monitoring domain name and the connection name as groupName and subGroupName in the registration data. Refer to Create Monitoring Domain for more detailed information on how to create monitoring domain under third party orchestration.
  • User and Password provided in the registration data must be configured in the User Management page. Refer to Configure Role-Based Access for Third Party Orchestration for more detailed information. Enter the UserName and Password created in the Add Users Section.

In your OpenStack Dashboard, you can configure the following GigaVUE fabric components:

Configure G-vTAP Controller in OpenStack

You can configure more than one G-vTAP Controller in a monitoring domain.

To register G-vTAP Controller in OpenStack, use any one of the following methods:

Register G-vTAP Controller during Instance Launch

In your OpenStack dashboard, to launch the G-vTAP Controller and register G-vTAP Controller using Customization Script, follow the steps given below:

  1. On the Instance page of OpenStack dashboard, click Launch instance. The Launch Instance wizard appears. For detailed information, refer to Launch and Manage Instances topic in OpenStack Documentation.
  2. On the Configuration tab, enter the Customization Script as text in the following format and deploy the instance. The G-vTAP Controller uses this registration data to generate the config file (/etc/gigamon-cloud.conf) used to register with GigaVUE-FM.

     

    #cloud-config
    write_files:
      - path: /etc/gigamon-cloud.conf
        owner: root:root
        permissions: '0644'
        content: |
          Registration:
            groupName: <Monitoring Domain Name>
            subGroupName: <Connection Name>
            user: <Username>
            password: <Password>
            remoteIP: <IP address of the GigaVUE-FM>
            remotePort: 443
    The G-vTAP Controller deployed in OpenStack appears on the Monitoring Domain page of GigaVUE-FM.
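When scripting instance launches, the registration data above can be templated. A hypothetical helper (the field names come from the sample above; the function itself is not part of any Gigamon tooling, and the nested indentation is an assumption):

```python
def gigamon_cloud_conf(group: str, sub_group: str, user: str,
                       password: str, remote_ip: str,
                       remote_port: int = 443) -> str:
    """Render /etc/gigamon-cloud.conf registration data as a string (hypothetical helper)."""
    return (
        "Registration:\n"
        f"  groupName: {group}\n"
        f"  subGroupName: {sub_group}\n"
        f"  user: {user}\n"
        f"  password: {password}\n"
        f"  remoteIP: {remote_ip}\n"
        f"  remotePort: {remote_port}\n"
    )

conf = gigamon_cloud_conf("kt", "kt", "orchestration", "secret", "172.25.0.17")
assert "groupName: kt" in conf
assert conf.endswith("remotePort: 443\n")
```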

Register G-vTAP Controller after Instance Launch

Note:  You can configure more than one G-vTAP Controller for a G-vTAP Agent, so that if one G-vTAP Controller goes down, the G-vTAP Agent registration will happen through another Controller that is active.

To register the G-vTAP Controller after launching an instance, using a configuration file, follow the steps given below:
  1. Log in to the G-vTAP Controller.
  2. Create a local configuration file (/etc/gigamon-cloud.conf) and enter the following Customization Script. 

     

    Registration:
      groupName: <Monitoring Domain Name>
      subGroupName: <Connection Name>
      user: <Username>
      password: <Password>
      remoteIP: <IP address of the GigaVUE-FM>
      remotePort: 443
  3. Restart the G-vTAP Controller service.


    $ sudo service gvtap-cntlr restart
    The deployed G-vTAP Controller registers with GigaVUE-FM. After successful registration, the G-vTAP Controller sends heartbeat messages to GigaVUE-FM every 30 seconds. If one heartbeat is missed, the fabric node status appears as 'Unhealthy'. If more than five heartbeats fail to reach GigaVUE-FM, GigaVUE-FM tries to reach the G-vTAP Controller; if that also fails, GigaVUE-FM unregisters the G-vTAP Controller and removes it from GigaVUE-FM.
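The health policy described above (one missed heartbeat marks the node 'Unhealthy'; more than five missed plus an unreachable node leads to unregistration) can be sketched as a small classifier. This is an illustration of the stated rule, not GigaVUE-FM code:

```python
def node_status(missed_heartbeats: int, reachable: bool) -> str:
    """Classify a fabric node per the heartbeat policy described above."""
    if missed_heartbeats == 0:
        return "Healthy"
    if missed_heartbeats > 5 and not reachable:
        return "Unregistered"
    return "Unhealthy"

assert node_status(0, True) == "Healthy"
assert node_status(1, True) == "Unhealthy"
assert node_status(6, True) == "Unhealthy"    # still reachable, so not removed
assert node_status(6, False) == "Unregistered"
```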

Note:  When you deploy V Series nodes or G-vTAP Controllers using 3rd party orchestration, you cannot delete the monitoring domain without unregistering the V Series nodes or G-vTAP Controllers.

Configure G-vTAP Agent in OpenStack

Note:  You can configure more than one G-vTAP Controller for a G-vTAP Agent, so that if one G-vTAP Controller goes down, the G-vTAP Agent registration will happen through another Controller that is active.

To register G-vTAP Agent using a configuration file:

  1. Install the G-vTAP Agent in the Linux or Windows platform. For detailed instructions, refer to Linux G-vTAP Agent Installation and Windows G-vTAP Agent Installation.
  2. Log in to the G-vTAP Agent.
  3. Edit the local configuration file and enter the following Customization Script.
  4. Restart the G-vTAP Agent service.

The deployed G-vTAP Agent registers with GigaVUE-FM through the G-vTAP Controller. After successful registration, the G-vTAP Agent sends heartbeat messages to GigaVUE-FM every 30 seconds. If one heartbeat is missed, the G-vTAP Agent status appears as 'Unhealthy'. If more than five heartbeats fail to reach GigaVUE-FM, GigaVUE-FM tries to reach the G-vTAP Agent; if that also fails, GigaVUE-FM unregisters the G-vTAP Agent and removes it from GigaVUE-FM.

Configure GigaVUE V Series Nodes and V Series Proxy in OpenStack

Note:  It is not mandatory to register GigaVUE V Series Nodes via a V Series Proxy. However, if a large number of nodes are connected to GigaVUE-FM, or if you do not wish to reveal the IP addresses of the nodes, you can register your nodes using the GigaVUE V Series Proxy. In this case, GigaVUE-FM communicates with the GigaVUE V Series Proxy to manage the GigaVUE V Series Nodes.

To register GigaVUE V Series Node and GigaVUE V Series Proxy in OpenStack, use any one of the following methods:

Register V Series Nodes or V Series Proxy during Instance Launch

To register V Series nodes or proxy using the Customization Script in OpenStack GUI:

  1. On the Instance page of OpenStack dashboard, click Launch instance. The Launch Instance wizard appears. For detailed information, refer to Launch and Manage Instances topic in OpenStack Documentation.
  2. On the Configuration tab, enter the Customization Script as text in the following format and deploy the instance. The V Series Node or V Series Proxy uses this customization script to generate the config file (/etc/gigamon-cloud.conf) used to register with GigaVUE-FM.

    #cloud-config
    write_files:
      - path: /etc/gigamon-cloud.conf
        owner: root:root
        permissions: '0644'
        content: |
          Registration:
            groupName: <Monitoring Domain Name>
            subGroupName: <Connection Name>
            user: <Username>
            password: <Password>
            remoteIP: <IP address of the GigaVUE-FM>
            remotePort: 443
  • You can register your GigaVUE V Series Nodes directly with GigaVUE‑FM, or use the V Series Proxy to register them with GigaVUE‑FM. To register GigaVUE V Series Nodes directly, enter the remotePort value as 443 and the remoteIP as <IP address of the GigaVUE‑FM>; to deploy GigaVUE V Series Nodes using the V Series Proxy, enter the remotePort value as 8891 and the remoteIP as <IP address of the Proxy>.
  • User and Password must be configured in the User Management page. Refer to Configure Role-Based Access for Third Party Orchestration for more detailed information. Enter the UserName and Password created in the Add Users Section.

Register V Series Node or V Series Proxy after Instance Launch

To register V Series node or proxy using a configuration file:

  1. Log in to the V Series node or proxy.
  2. Edit the local configuration file (/etc/gigamon-cloud.conf) and enter the following customization script.


     

    Registration:
      groupName: <Monitoring Domain Name>
      subGroupName: <Connection Name>
      user: <Username>
      password: <Password>
      remoteIP: <IP address of the GigaVUE-FM>
      remotePort: 443
  3. Restart the V Series node or proxy service. 

 

Source: <https://docs.gigamon.com/doclib62/Content/GV-Cloud-third-party/Deploy_nodes_openstack.html>

 

 

 

ubuntu@vtap-ctrl:/etc$ more gigamon-cloud.conf

Registration:

groupName: kt

subGroupName: kt

auth: Basic a3Q6Z2lnYW1vbjEyM0EhIQ==

remoteIP: 172.25.0.17

remotePort: 443
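For reference, the auth field in the file above is a standard HTTP Basic credential, i.e. base64 of user:password. A sketch with placeholder credentials (not the ones in the file):

```python
import base64

def basic_auth(user: str, password: str) -> str:
    """Build the value for the 'auth: Basic <token>' registration field."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

# Placeholder credentials for illustration only
assert basic_auth("admin", "secret") == "Basic YWRtaW46c2VjcmV0"
```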


Commands used to install the default packages required and to create the user for installing the stack.

=========================================================================================

    1. vi /etc/netplan/......yaml  ===> Modify your NIC settings

    3  sudo add-apt-repository universe

    4  sudo apt install -y net-tools python3-pip socat python3-dev

    9  sudo reboot

   10  sudo apt update

   11  sudo apt upgrade

   12  ifconfig

   13  sudo useradd -s /bin/bash -d /opt/stack -m stack

   14  echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack

   15  sudo su - stack

Commands used to download devstack packages and add local.conf.

===============================================================

    1  git clone https://git.openstack.org/openstack-dev/devstack

    2  cd devstack/

    3  vi local.conf                    ====>  Refer to the local.conf file below

    4  ./stack.sh                   ===> Runs the OpenStack installation

Commands used to add network configurations:

============================================

   12  source admin-openrc.sh

   13  neutron net-create --provider:network_type flat --provider:physical_network public --router:external --shared public

   14  neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool start=172.24.4.101,end=172.24.4.200 --gateway=172.24.4.1 public 172.24.4.0/24

          neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool start=172.24.4.101,end=172.24.4.200 --gateway=172.24.4.254 public 172.24.4.0/24

   15  neutron net-create mgmt

   16  neutron subnet-create --name mgmt_subnet --gateway=192.168.89.1 mgmt 192.168.89.0/24

   17  neutron router-create router1

   18  neutron router-interface-add router1 mgmt_subnet

   19  neutron router-gateway-set router1 public

Local.conf

==========

stack@gigamon:~/devstack$ cat local.conf

[[local|localrc]]

ADMIN_PASSWORD=openstack

HOST_IP=10.10.10.100

SERVICE_HOST=$HOST_IP

MYSQL_HOST=$HOST_IP

RABBIT_HOST=$HOST_IP

GLANCE_HOSTPORT=10.10.10.100:9292

#GLANCE_LIMIT_IMAGE_SIZE_TOTAL=32768

GLANCE_LIMIT_IMAGE_SIZE_TOTAL=102400

ADMIN_PASSWORD=$ADMIN_PASSWORD

SERVICE_TOKEN=$ADMIN_PASSWORD

DATABASE_PASSWORD=$ADMIN_PASSWORD

RABBIT_PASSWORD=$ADMIN_PASSWORD

SERVICE_PASSWORD=$ADMIN_PASSWORD

ENABLE_HTTPD_MOD_WSGI_SERVICES=True

KEYSTONE_USE_MOD_WSGI=True

## Neutron options

Q_USE_SECGROUP=True

PUBLIC_INTERFACE=enx00e04e3bc05f

# Open vSwitch provider networking configuration

Q_USE_PROVIDERNET_FOR_PUBLIC=True

OVS_PHYSICAL_BRIDGE=br-ex

PUBLIC_BRIDGE=br-ex

OVS_BRIDGE_MAPPINGS=public:br-ex

LOGFILE=$DEST/logs/stack.sh.log

VERBOSE=True

ENABLE_DEBUG_LOG_LEVEL=True

ENABLE_VERBOSE_LOG_LEVEL=True

GIT_BASE=${GIT_BASE:-https://git.openstack.org}

MULTI_HOST=1

[[post-config|$NOVA_CONF]]

[DEFAULT]

firewall_driver=nova.virt.firewall.NoopFirewallDriver

novncproxy_host=0.0.0.0

novncproxy_port=6080

scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter

#[libvirt]

#live_migration_uri = qemu+ssh://stack@%s/system

##cpu_mode = none

#cpu_mode = host-passthrough

#virt_type = kvm

 

This is your host IP address: 10.10.10.100

This is your host IPv6 address: ::1

Horizon is now available at http://10.10.10.100/dashboard

Keystone is serving at http://10.10.10.100/identity/

The default users are: admin and demo

The password: gigamon

 

Services are running under systemd unit files.

For more information see:

https://docs.openstack.org/devstack/latest/systemd.html

DevStack Version: 2023.1

Change: 48af5d4b1bf5332c879ee52fb4686874b212697f Make rockylinux job non-voting 2023-02-14 17:11:24 +0100

OS Version: Ubuntu 20.04 focal

Nova.conf & Nova-cpu.conf

=========================

[libvirt]

live_migration_uri = qemu+ssh://stack@%s/system

#cpu_model = Nehalem

#cpu_mode = custom

cpu_mode = host-model

cpu_model_extra_flags = vmx

virt_type = kvm

 

glance usage

V-Series Image Settings

========================

kt@openstack:~$ openstack image set --property hw_vif_multiqueue_enabled=true b0181c20-d192-4006-b681-09fd2df65c5d

kt@openstack:~$ openstack image show b0181c20-d192-4006-b681-09fd2df65c5d

Next step: create a flavor for V-Series

=====================================

Configure flavor for V-series settings

=======================================

 (?)openstack flavor set vseries --property dpdk=true --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB --property hw:emulator_threads_policy=isolate

FM SSH credentials: admin/***********

===============================

Command to get the default FM GUI password:  wget -q -O - http://169.254.169.254/latest/meta-data/instance-id

The above can be used for the first-time FM login.

 

FM http credentials: admin/openstack123A!!

 

===================================================

 

If you're not using a DNS server, edit the file "/etc/hosts" and add the OpenStack server IP.

 

This will help in resolving the URL during monitoring domain creation.

 

G-vTAP Agent

===================================================

download files

- strongSwan TAR Files

- gvtap-agent_xxx.rpm

- gvtap.te file

 

# checkmodule -M -m -o gvtap.mod gvtap.te

# semodule_package -o gvtap.pp -m gvtap.mod

# semodule -i gvtap.pp

# yum install python3

# yum install python-urllib3

# yum install iproute-tc

# pip3 install urllib3

# pip3 install requests

# pip3 install netifaces

 

https://www.tecmint.com/disable-selinux-on-centos-8/

https://www.psychz.net/client/question/ko/turn-off-firewall-centos-7.html

 

 

# rpm -ivh gvtap-agent_xxx.rpm

# vi /etc/gvtap/gvtap-agent.conf

  eth0 mirror-src-ingress mirror-src-egress mirror-dst

# /etc/init.d/gvtap-agent restart

# tar -xvzpf strongswan-xxx.tar.gz

# cd strongswan-xxx

# sh ./swan-install.sh

 

[root@centos1 ~]# setenforce 0

[root@centos1 ~]# setenforce Permissive

[root@centos1 ~]# sestatus

SELinux status:                 enabled

SELinuxfs mount:                /sys/fs/selinux

SELinux root directory:         /etc/selinux

Loaded policy name:             targeted

Current mode:                   permissive

Mode from config file:          enforcing

Policy MLS status:              enabled

Policy deny_unknown status:     allowed

Max kernel policy version:      31

[root@centos1 ~]#

 

 

Tools VXLAN setup

 

ip link add vxlan199 type vxlan id 1005 dev eth0 dstport 4789

sudo ip link set vxlan199 up

tcpdump -nvi vxlan199

 

sudo gvtapl mirror-list

 

V Series
apiv /stats
apiv /stats/teps

Logs under /var/log/

sudo ovs-vsctl del-port vxlan0
sudo ovs-vsctl del-port vxlan1

sudo ovs-tcpdump -i tapd3eaa48f-ba

 

=========================================================

Use ip from iproute2 (note that you must also specify the prefix length):

ip addr del 10.22.30.44/16 dev eth0

To remove all addresses (in case you have multiple):

ip addr flush dev eth0

========================================================

 

 
