
...

  • master: same as in a standalone installation (every time a relevant database value changes)
  • slave:
    • rebuilt every n seconds (configurable via the master GUI: Global config ⇒ ProvHA)
    • rebuilt on changes to certain database tables (cmts, ippool)

DHCP

We use the failover functionality of ISC DHCP, which supports a setup with one master (primary) and one slave (secondary) instance, called peers. By default each server handles 50% of each IP pool; the load balance is configurable. The servers inform each other about leases – if one instance goes down, the failover peer takes over the complete pools. Once both servers are active again, the pools are balanced automatically.

Configuration is done in /etc/dhcp-nmsprime/failover.conf; the pools in /etc/dhcp-nmsprime/cmts_gws/*.conf are configured with a failover statement.
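As a sketch, a failover declaration in ISC dhcpd syntax looks roughly like this – the peer name and the addresses are assumptions, not the values NMS Prime generates:

    # failover.conf on the primary; the secondary declares "secondary;",
    # swaps the addresses and omits mclt and split
    failover peer "nmsprime" {
        primary;
        address 10.0.0.1;             # own IP (assumed)
        port 647;
        peer address 10.0.0.2;        # failover peer's IP (assumed)
        peer port 647;
        max-response-delay 60;
        max-unacked-updates 10;
        mclt 3600;
        split 128;                    # 128/256 = 50% of each pool
        load balance max seconds 3;
    }

Each pool then references the peer:

    pool {
        failover peer "nmsprime";
        range 10.1.0.10 10.1.0.250;   # example range
    }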

TFTP

For TFTP we have to distinguish between DOCSIS versions:

  • For DOCSIS versions below 3.0 only one TFTP server can be provided, realized via the next-server statement in global.conf. In our setup each of the two DHCP servers sets this to its own IP address.

    Info

    check: will this cause problems if the configured server goes offline? Or will the CM get new values in the DHCPACK from the failover peer?


  • For higher versions the value in next-server can be overridden using option vivso (CL_V4OPTION_TFTPSERVERS) – multiple TFTP servers can be configured. In our setup each DHCP server provides its own IP address first and the peer's IP address second.
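A rough sketch of the corresponding dhcpd declarations – the option space layout, the sub-option code (2, which to our knowledge is what CableLabs assigns to CL_V4OPTION_TFTPSERVERS) and the addresses are assumptions and may differ from the generated config depending on the ISC version:

    # declare the CableLabs (enterprise number 4491) vendor option space
    option space docsis code width 1 length width 1;
    option docsis.tftp-servers code 2 = array of ip-address;   # CL_V4OPTION_TFTPSERVERS (assumed code)
    option vivso.docsis code 4491 = encapsulate docsis;

    # each server lists its own address first, the peer's second
    option docsis.tftp-servers 10.0.0.1, 10.0.0.2;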

Time

option time-servers accepts a comma-separated list of IP addresses. Each DHCP server provides its own IP address first and the peer's IP address second.
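For example, assuming 10.0.0.1 is the master and 10.0.0.2 the slave:

    option time-servers 10.0.0.1, 10.0.0.2;   # in the master's config
    option time-servers 10.0.0.2, 10.0.0.1;   # in the slave's config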

DNS

option domain-name-servers accepts a comma-separated list of IP addresses. Each DHCP server provides its own IP address first and the peer's IP address second.
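Analogous to the time servers, with the same example addresses:

    option domain-name-servers 10.0.0.1, 10.0.0.2;   # in the master's config
    option domain-name-servers 10.0.0.2, 10.0.0.1;   # in the slave's config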

...

  • on the slave the database cannot be changed
  • provisioning of CM/MTA/CPE will be done completely by the slave if the master fails

Database

The database is set up as a Galera cluster; this way all data is replicated, i.e. stored on both the master and the slave machine.
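A minimal sketch of the wsrep settings involved, assuming a two-node MariaDB Galera setup with the example addresses used above (file location and values are assumptions):

    # /etc/my.cnf.d/galera.cnf
    [galera]
    wsrep_on = ON
    wsrep_provider = /usr/lib64/galera/libgalera_smm.so
    wsrep_cluster_name = "nmsprime"
    wsrep_cluster_address = "gcomm://10.0.0.1,10.0.0.2"
    binlog_format = ROW
    default_storage_engine = InnoDB
    innodb_autoinc_lock_mode = 2

Note that a plain two-node Galera cluster cannot keep quorum when one node fails; this is commonly addressed with an arbitrator (garbd) or adjusted provider options.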

...

  • Create an SSH key (e.g. ssh-keygen -b 4096) on the master machine
  • add the master's public key to the slave's /root/.ssh/authorized_keys
  • test that you can establish an SSH connection from master to slave: ssh root@<slave_ip>
  • rebuild the config files: php artisan nms:dhcp && php artisan nms:config
  • on the master execute cd /var/www/nmsprime/ && php artisan provha:sync_ha_master_files – this rsyncs the files to the slave and should complete without errors
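The steps above as one shell session on the master (ssh-copy-id is just one convenient way to install the key; the <slave_ip> placeholder as above):

    ssh-keygen -b 4096                        # create the key pair
    ssh-copy-id root@<slave_ip>               # append the public key to the slave's authorized_keys
    ssh root@<slave_ip>                       # must now work without a password prompt
    cd /var/www/nmsprime/
    php artisan nms:dhcp && php artisan nms:config
    php artisan provha:sync_ha_master_files   # rsyncs the files to the slave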

rsyslogd

To collect the log entries of both instances, configure rsyslog like the following:
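A minimal sketch of one common approach, assuming plain TCP forwarding on port 514 (the <master_ip> placeholder is ours; the configuration actually shipped with NMS Prime may differ):

    # on the master – accept remote log entries, e.g. in /etc/rsyslog.d/remote.conf
    module(load="imtcp")
    input(type="imtcp" port="514")

    # on the slave – forward everything to the master
    *.* @@<master_ip>:514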

...

which converts the binary journal file to the text file /var/named/dynamic/nmsprime.test.zone

icingaweb2

To monitor the slave machine from the master one has to configure icingaweb:
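As an illustration, a matching host definition in the icinga2 DSL could look like this – the object name and the use of the generic-host template are assumptions, and the actual setup is done via icingaweb:

    object Host "nmsprime-slave" {
      import "generic-host"        // default template; already sets a hostalive check
      address = "<slave_ip>"
    }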

...

Warning: icinga itself is not redundant, so if the master machine goes down there is no monitoring at all. Consider setting up at least a second icinga instance that monitors the master (and informs you in case of a crash).

cacti

At the moment there is only one place where the .rrd files are stored – by default /var/lib/cacti/rra on the master instance. We use sshfs to mount this directory on the slave so that the cacti diagrams can be shown there, too. We highly recommend using autofs to ensure that the data is available when needed (see e.g. https://whattheserver.com/automounting-remote-shares-via-autofs).
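A sketch of such an autofs setup on the slave, assuming key-based root SSH access to the master (the map file name and the <master_ip> placeholder are ours):

    # /etc/auto.master – add a direct map
    /- /etc/auto.sshfs --timeout=60

    # /etc/auto.sshfs – mount the master's rra directory via sshfs
    /var/lib/cacti/rra -fstype=fuse,rw,allow_other,IdentityFile=/root/.ssh/id_rsa :sshfs\#root@<master_ip>\:/var/lib/cacti/rra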

...