
Set Cluster Configuration #57

Open
wdwill2 opened this issue Feb 6, 2018 · 3 comments

Comments

@wdwill2

I'm planning to upgrade from 9.0.2.1 to 9.0.4.0. The README for the firmware says to unconfigure the masters before applying the firmware.

I have a primary, secondary, and tertiary master defined in the cluster configuration.

I am trying to remove just the tertiary, so I created the following playbook:


```yaml
---
# Build out entire environment
- name: Unconfigure Tertiary in Cluster Config
  hosts: primary
  connection: local
  roles:
    - role: set_cluster_config
      set_cluster_config_primary_master: "{{ cluster_config_primary_master }}"
      set_cluster_config_secondary_master: "{{ cluster_config_secondary_master }}"
      set_cluster_config_master_ere: "{{ cluster_config_master_ere }}"
      when: set_cluster_config_primary_master is defined and set_cluster_config_secondary_master is defined and set_cluster_config_master_ere is defined
```

When I run this playbook it does not remove the tertiary from the cluster configuration; it says "JSON provided is already contained in the current appliance configuration". If I look at the output, the current configuration has the tertiary information, but the "JSON to Apply" does not:

```
[ibmsecurity.isam.base.cluster.configuration] [_check():142] Appliance current configuration: {u'dsc_worker_threads': 64, u'first_port': 2020, u'hvdb_embedded': True, u'dsc_external_clients': False, u'secondary_master': u'xx.xx.xx.252', u'tertiary_master': u'xx.xx.xx.253', u'dsc_maximum_session_lifetime': 3600, u'hvdb_max_size': 40, u'primary_master': u'xx.xx.xx.254', u'master_ere': u'xx.xx.xx.1', u'dsc_client_grace_period': 600, u'cfgdb_fs': None, u'cfgdb_embedded': True}
[2018-02-06 15:16:42,677] [PID:24963 TID:140326626694976] [DEBUG] [ibmsecurity.isam.base.cluster.configuration] [_check():143] JSON to Apply: {'dsc_worker_threads': 64, 'dsc_client_grace_period': 600, 'first_port': 2020, 'cfgdb_embedded': True, 'hvdb_embedded': True, 'dsc_external_clients': False, 'secondary_master': 'xx.xx.xx.252', 'dsc_maximum_session_lifetime': 3600, 'hvdb_max_size': 40, 'primary_master': 'xx.xx.xx.254', 'master_ere': 'xx.xx.xx.1'}
[2018-02-06 15:16:42,677] [PID:24963 TID:140326626694976] [DEBUG] [ibmsecurity.isam.base.cluster.configuration] [_check():162] JSON provided already is contained in current appliance configuration.
```

Any thoughts?

Based on what I am trying to do, should I be using the POST or the PUT method for updating the cluster configuration? I did try updating configuration.py with the PUT method, but since the compare/check thinks there are no changes, it produces the same results.
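Here is a minimal sketch of why a one-way "containment" comparison could produce the behavior above. This is hypothetical, not the actual ibmsecurity `_check()` code: if the check only asks whether every key/value being applied already matches the current configuration, a key that exists only in the current configuration, like `tertiary_master`, never registers as a difference.

```python
# Hypothetical one-way containment check; values mirror the debug log
# above. This is a sketch, not the actual ibmsecurity implementation.

current = {
    "primary_master": "xx.xx.xx.254",
    "secondary_master": "xx.xx.xx.252",
    "tertiary_master": "xx.xx.xx.253",  # still configured on the appliance
    "master_ere": "xx.xx.xx.1",
}

to_apply = {
    "primary_master": "xx.xx.xx.254",
    "secondary_master": "xx.xx.xx.252",
    "master_ere": "xx.xx.xx.1",
    # tertiary_master omitted on purpose: we want it unconfigured
}

def already_contained(current, to_apply):
    """True if every key/value in to_apply already matches current.
    Keys present only in current (tertiary_master) are never examined,
    so a removal looks like 'no change'."""
    return all(current.get(k) == v for k, v in to_apply.items())

print(already_contained(current, to_apply))  # True -> update is skipped
```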

Let me know if you need any more information.

Thanks
Bill

@wdwill2
Author

If I change the force parameter to True, it updates the cluster config as expected.

I'd still like to understand why the check doesn't see differences.

So here is the updated playbook:


```yaml
---
# Build out entire environment
- name: Unconfigure Tertiary in Cluster Config
  hosts: primary
  connection: local
  roles:
    - role: set_cluster_config
      set_cluster_config_primary_master: "{{ cluster_config_primary_master }}"
      set_cluster_config_secondary_master: "{{ cluster_config_secondary_master }}"
      set_cluster_config_master_ere: "{{ cluster_config_master_ere }}"
      force: True
      when: set_cluster_config_primary_master is defined and set_cluster_config_secondary_master is defined and set_cluster_config_master_ere is defined
```

Thanks
Bill

@ram-ibm
Collaborator

ram-ibm commented Feb 7, 2018

Bill, I looked at the check code and am not happy with it. It is too messy and may not be accurate. I have not been able to get to it right now to fix it; there are a few other changes I want to merge into master before I start fixing the check.

@ram-ibm
Collaborator

ram-ibm commented Feb 7, 2018

Please use the force flag for now; it skips the check logic.
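For contrast, a check that also considers keys present only in the current configuration would flag the removed tertiary. Again a hypothetical sketch, not the library's code, just illustrating the difference between the one-way containment test and a symmetric comparison:

```python
# Hypothetical symmetric check: a difference exists if either side has a
# key/value the other lacks, so a removed tertiary_master is detected.

current = {
    "primary_master": "xx.xx.xx.254",
    "secondary_master": "xx.xx.xx.252",
    "tertiary_master": "xx.xx.xx.253",  # to be unconfigured
}

to_apply = {
    "primary_master": "xx.xx.xx.254",
    "secondary_master": "xx.xx.xx.252",
}

def needs_update(current, to_apply):
    """Compare both directions instead of only 'is to_apply a subset?'.
    Full dict equality catches keys that must be removed as well as
    keys that must be added or changed."""
    return current != to_apply

print(needs_update(current, to_apply))  # True -> tertiary removal applied
```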
