2022-08-03
20:34 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
20:34 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
20:33 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
20:31 <urbanecm@deploy1002> Synchronized php-1.39.0-wmf.23/extensions/CirrusSearch/: 70a18f5846111a0dfe8ba473daf384cbb8e88804: Add explicit partitioning key to ElasticaWrite (T314426) (duration: 03m 13s) [production]
20:28 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
20:28 <urbanecm@deploy1002> Synchronized php-1.39.0-wmf.22/extensions/CirrusSearch/: 9961e9bc8f5873f8ddc8a11108de0a7bfcb14ae6: Add explicit partitioning key to ElasticaWrite (T314426) (duration: 03m 23s) [production]
20:28 <cwhite@cumin2002> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) for new host logstash2032.codfw.wmnet [production]
20:27 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
20:27 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
20:26 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1122 (T312972)', diff saved to https://phabricator.wikimedia.org/P32255 and previous config saved to /var/cache/conftool/dbconfig/20220803-202658-marostegui.json [production]
20:23 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
20:21 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1122 (T312972)', diff saved to https://phabricator.wikimedia.org/P32254 and previous config saved to /var/cache/conftool/dbconfig/20220803-202146-marostegui.json [production]
20:21 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1122.eqiad.wmnet with reason: Maintenance [production]
20:21 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1122.eqiad.wmnet with reason: Maintenance [production]
20:21 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182 (T312972)', diff saved to https://phabricator.wikimedia.org/P32253 and previous config saved to /var/cache/conftool/dbconfig/20220803-202125-marostegui.json [production]
20:14 <rzl@deploy1002> helmfile [codfw] DONE helmfile.d/services/mobileapps: apply [production]
20:13 <rzl@deploy1002> helmfile [codfw] START helmfile.d/services/mobileapps: apply [production]
20:13 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
20:12 <urbanecm@deploy1002> Synchronized wmf-config/InitialiseSettings.php: 195f8090b9694be65c937cea108ff4f6400972ec: Start writing to cuc_actor on test wikis (T233004) (duration: 03m 27s) [production]
20:09 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
20:09 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
20:09 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
20:08 <cwhite@cumin2002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) logstash2032.codfw.wmnet on all recursors [production]
20:08 <cwhite@cumin2002> START - Cookbook sre.dns.wipe-cache logstash2032.codfw.wmnet on all recursors [production]
20:08 <cwhite@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
20:07 <mutante> gerrit - adding second replica T313250 [production]
20:06 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P32252 and previous config saved to /var/cache/conftool/dbconfig/20220803-200619-marostegui.json [production]
20:04 <cwhite@cumin2002> START - Cookbook sre.dns.netbox [production]
20:03 <cwhite@cumin2002> START - Cookbook sre.ganeti.makevm for new host logstash2032.codfw.wmnet [production]
20:00 <rzl@cumin1001> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for kubernetes2012.codfw.wmnet [production]
20:00 <rzl@cumin1001> START - Cookbook sre.hosts.remove-downtime for kubernetes2012.codfw.wmnet [production]
20:00 <rzl@deploy1002> conftool action : set/pooled=yes; selector: name=kubernetes2012.codfw.wmnet [production]
19:51 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P32251 and previous config saved to /var/cache/conftool/dbconfig/20220803-195113-marostegui.json [production]
19:40 <ryankemper> T314078 Forgot to mention, restart is at `ryankemper@cumin1001` tmux session `codfw_restarts` [production]
19:39 <ryankemper> T314078 Rolling upgrade of codfw hosts; after this all of eqiad/codfw will have the new plugin version and we can resume the `search-loader` instances: `sudo -E cookbook sre.elasticsearch.rolling-operation search_codfw "codfw cluster plugin upgrade" --upgrade --nodes-per-run 3 --start-datetime 2022-08-03T19:38:10 --task-id T314078` [production]
19:38 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw cluster plugin upgrade - ryankemper@cumin1001 - T314078 [production]
19:36 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182 (T312972)', diff saved to https://phabricator.wikimedia.org/P32250 and previous config saved to /var/cache/conftool/dbconfig/20220803-193607-marostegui.json [production]
19:33 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1182 (T312972)', diff saved to https://phabricator.wikimedia.org/P32249 and previous config saved to /var/cache/conftool/dbconfig/20220803-193354-marostegui.json [production]
19:33 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
19:33 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
19:33 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129 (T312972)', diff saved to https://phabricator.wikimedia.org/P32248 and previous config saved to /var/cache/conftool/dbconfig/20220803-193334-marostegui.json [production]
19:25 <mutante> gerrit1001 - rsyncing /var/lib/gerrit/review_site/ over to gerrit2002 815401 [production]
19:18 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129', diff saved to https://phabricator.wikimedia.org/P32247 and previous config saved to /var/cache/conftool/dbconfig/20220803-191828-marostegui.json [production]
19:03 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129', diff saved to https://phabricator.wikimedia.org/P32246 and previous config saved to /var/cache/conftool/dbconfig/20220803-190321-marostegui.json [production]
18:56 <rzl@cumin1001> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for kubernetes2011.codfw.wmnet [production]
18:56 <rzl@cumin1001> START - Cookbook sre.hosts.remove-downtime for kubernetes2011.codfw.wmnet [production]
18:56 <rzl@deploy1002> conftool action : set/pooled=yes; selector: name=kubernetes2011.codfw.wmnet [production]
18:33 <rzl@cumin1001> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for mc[2027,2037].codfw.wmnet [production]
18:33 <rzl@cumin1001> START - Cookbook sre.hosts.remove-downtime for mc[2027,2037].codfw.wmnet [production]
18:23 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]