2020-09-21
07:34 <hashar> Upgrading https://integration.wikimedia.org/ci/job/integration-quibble-fullrun/ to Quibble 0.0.45 [releng]
07:05 <XioNoX> upgrade FNM to 1.1.7 in ulsfo - T257035 [production]
06:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully pool es2029 and es2030 T261717', diff saved to https://phabricator.wikimedia.org/P12677 and previous config saved to /var/cache/conftool/dbconfig/20200921-060053-marostegui.json [production]
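The dbctl entries above and below record the gradual repooling of the new es2029/es2030 hosts. For context, a weight ramp-up with conftool's dbctl roughly follows the shape below; the flags, percentages, and commit message are illustrative assumptions, not commands copied from the log:

```bash
# Rough sketch of a dbctl pool/commit cycle (flags and values are assumptions).
sudo dbctl instance es2029 pool -p 25        # pool at 25% of its configured weight
sudo dbctl instance es2030 pool -p 25
sudo dbctl config diff                       # review the generated change
sudo dbctl config commit -m "Pool es2029 and es2030 with more weight T261717"
```

Each commit is what produces the Phabricator paste with the diff and the saved previous config referenced in these entries.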
05:48 <marostegui> Set innodb_change_buffering = inserts; on db2129 (s6 master) for performance testing [production]
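innodb_change_buffering is a dynamic MariaDB/MySQL variable, so a test like the one logged here can normally be applied without a restart; a minimal sketch (the sudo/mysql invocation is an assumption about how it was run):

```bash
# Apply the setting dynamically on the host under test and verify it took effect.
sudo mysql -e "SET GLOBAL innodb_change_buffering = 'inserts';"
sudo mysql -e "SELECT @@GLOBAL.innodb_change_buffering;"
```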
05:47 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool es2029 and es2030 with more weight T261717', diff saved to https://phabricator.wikimedia.org/P12676 and previous config saved to /var/cache/conftool/dbconfig/20200921-054730-marostegui.json [production]
05:27 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool es2029 and es2030 with more weight T261717', diff saved to https://phabricator.wikimedia.org/P12675 and previous config saved to /var/cache/conftool/dbconfig/20200921-052704-marostegui.json [production]
05:18 <marostegui> Stop mysql on: es2013 es2016 es2019 to clone es2032 es2033 es2034 - T261717 [production]
05:06 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool es2029 and es2030 with more weight T261717', diff saved to https://phabricator.wikimedia.org/P12674 and previous config saved to /var/cache/conftool/dbconfig/20200921-050632-marostegui.json [production]
05:06 <marostegui> Deploy MCR schema change on s8 eqiad master, lag will appear on s8 (wikidata) on labsdb hosts T238966 [production]
05:03 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool es2013,es2016 and es2019 to clone new hosts T261717', diff saved to https://phabricator.wikimedia.org/P12673 and previous config saved to /var/cache/conftool/dbconfig/20200921-050305-marostegui.json [production]
05:02 <marostegui@cumin1001> dbctl commit (dc=all): 'Set es2015 as es2 codfw master T261717', diff saved to https://phabricator.wikimedia.org/P12672 and previous config saved to /var/cache/conftool/dbconfig/20200921-050228-marostegui.json [production]
04:59 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool es2029 and es2030 with more weight T261717', diff saved to https://phabricator.wikimedia.org/P12671 and previous config saved to /var/cache/conftool/dbconfig/20200921-045919-marostegui.json [production]
04:37 <marostegui> Set innodb_change_buffering = inserts; on db2116 for performance testing [production]
04:31 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool es2029 and es2030 for the first time with minimal weight T261717', diff saved to https://phabricator.wikimedia.org/P12670 and previous config saved to /var/cache/conftool/dbconfig/20200921-043154-marostegui.json [production]
2020-09-18
21:48 <tzatziki> changed password for Millennium bug@ptwiki [production]
19:41 <andrewbogott> repooling tools-k8s-worker-30, 33, 34, 57, 60 [tools]
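The depool/repool entries for the tools-k8s-worker nodes correspond to draining Kubernetes workers before maintenance and letting them schedule pods again afterwards. The exact Toolforge tooling is not shown in the log; a generic kubectl equivalent looks roughly like this:

```bash
# Generic sketch only; Toolforge may wrap this in its own tooling.
kubectl drain tools-k8s-worker-30 --ignore-daemonsets   # depool: evict pods, mark unschedulable
# ... perform the maintenance (e.g. the flavor update mentioned below) ...
kubectl uncordon tools-k8s-worker-30                     # repool: allow scheduling again
```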
19:28 <eileen> process-control config revision is 739ea754ca [production]
19:04 <andrewbogott> depooling tools-k8s-worker-30, 33, 34, 57, 60 [tools]
19:02 <andrewbogott> repooling tools-k8s-worker-41, 43, 44, 47, 48, 49, 50, 51 [tools]
18:52 <pt1979@cumin2001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:46 <pt1979@cumin2001> START - Cookbook sre.dns.netbox [production]
18:44 <ryankemper> `sudo kill 254017 254018 254028 254029` to kill some dangling serdi / gzip processes, all the wikidata cleanup should be complete [production]
18:38 <ryankemper> `sudo kill 126121 126122 126124 126128 249520 249521 254016 254027` on `snapshot1008` to terminate wikidata dump jobs that are in a bad state [production]
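The PIDs in these two entries were presumably identified on snapshot1008 before being killed; a hedged sketch of that kind of cleanup (the pgrep pattern and user are assumptions):

```bash
# List leftover dump-related processes with their full command lines, then kill
# the PIDs confirmed to be stale (the PIDs below are the ones from the log).
pgrep -u dumpsgen -af 'serdi|gzip'
sudo kill 126121 126122 126124 126128 249520 249521 254016 254027
```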
18:10 <ryankemper> Removed stale `wikidatardf-dumps` crontab entry from `dumpsgen@snapshot1008`, stored backup of previous state of crontab in the (admittedly verbose) `/tmp/dumpsgen_crontab_before_removing_stale_wikidata_dump_entry_see_gerrit_puppet_patch_622342` [production]
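A sketch of the backup-then-remove flow described in that entry; the backup filename is taken from the log, while the exact commands are assumptions:

```bash
# Save the current crontab of the dumpsgen user, then remove the stale
# wikidatardf-dumps line by hand in the editor.
sudo crontab -u dumpsgen -l > /tmp/dumpsgen_crontab_before_removing_stale_wikidata_dump_entry_see_gerrit_puppet_patch_622342
sudo crontab -u dumpsgen -e
```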
17:48 <andrewbogott> depooling tools-k8s-worker-41, 43, 44, 47, 48, 49, 50, 51 [tools]
17:47 <andrewbogott> repooling tools-k8s-worker-31, 32, 36, 39, 40 [tools]
17:15 <mutante> lists1001 - apt-get install pwgen to generate passwords (this was installed on the previous list server but apparently not puppetized, puppet patch coming up) [production]
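For reference, the pwgen usage implied by that entry; the length and flags below are illustrative, not taken from the log:

```bash
# Install pwgen and generate one 32-character fully random password.
sudo apt-get install pwgen
pwgen -s 32 1
```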
16:40 <andrewbogott> depooling tools-k8s-worker-31, 32, 36, 39, 40 [tools]
16:38 <andrewbogott> repooling tools-sgewebgrid-lighttpd-0914, tools-sgewebgrid-generic-0902, tools-sgewebgrid-lighttpd-0919, tools-sgewebgrid-lighttpd-0918 [tools]
16:29 <Reedy> restarted due to huge Phabricator-to-IRC lag [tools.wikibugs]
16:23 <pt1979@cumin2001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]
16:21 <pt1979@cumin2001> START - Cookbook sre.hosts.downtime [production]
16:10 <andrewbogott> depooling tools-sgewebgrid-lighttpd-0914, tools-sgewebgrid-generic-0902, tools-sgewebgrid-lighttpd-0919, tools-sgewebgrid-lighttpd-0918 [tools]
15:09 <mutante> restarting gerrit service to apply gerrit::628338 to make it dump heap if out of memory (T263008) [production]
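Gerrit runs on the JVM, and heap dumps on out-of-memory are normally enabled with the standard HotSpot flags shown below; the dump path and launch command are illustrative assumptions, since the actual puppet wiring lives in the change referenced above:

```bash
# Illustrative only: JVM flags that make a Java service write a heap dump when
# it hits OutOfMemoryError; not the literal production invocation.
java -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/srv/gerrit/heapdumps \
     -jar gerrit.war daemon -d /srv/gerrit
```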
15:05 <elukey> systemctl reset-failed monitor_refine_eventlogging_legacy_failure_flags.service on an-launcher1002 to clear icinga alarms [analytics]
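The unit name is given in full in the entry, so the cleanup reduces to the standard systemd commands:

```bash
# Clear the unit's "failed" state on an-launcher1002 so Icinga stops alerting,
# then confirm nothing else is still marked failed.
sudo systemctl reset-failed monitor_refine_eventlogging_legacy_failure_flags.service
systemctl --failed
```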
14:15 <ladsgroup@deploy1001> Synchronized wmf-config/Wikibase.php: labs: Turn on termbox v2 on desktop for wikidatawiki -- noop for production, sanity sync (T261488) (duration: 00m 56s) [production]
14:13 <ladsgroup@deploy1001> Synchronized wmf-config/InitialiseSettings.php: labs: Turn on termbox v2 on desktop for wikidatawiki -- noop for production, sanity sync (T261488) (duration: 01m 00s) [production]
13:54 <andrewbogott> repooling tools-sgeexec-0913, tools-sgeexec-0915, tools-sgeexec-0916 [tools]
13:50 <andrewbogott> depooling tools-sgeexec-0913, tools-sgeexec-0915, tools-sgeexec-0916 for flavor update [tools]
13:32 <hashar> Building Quibble Docker images on contint1001 (because it is faster than the primary contint2001 for some reason) [releng]
13:30 <hashar> Tag Quibble 0.0.45 @ 70a3cfe8c8045582c1b1f49797af30a4deb15def [releng]
13:25 <hashar> contint1001 pruning dangling docker images [releng]
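Pruning dangling images on a CI host is usually a one-liner; the -f flag below (skip the confirmation prompt) is an assumption about how it was run:

```bash
# Remove untagged (dangling) image layers and show the resulting disk usage.
sudo docker image prune -f
sudo docker system df
```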