2021-03-24 §
12:32 <arturo> snapshot cinder volume `tools-docker-registry-data` into `tools-docker-registry-data-stretch-migration` (T278303) [tools]
12:32 <arturo> bump cinder storage quota from 80G to 400G (without quota request task) [tools]
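For context, the two cinder operations above map roughly onto the standard OpenStack CLI; this is a hedged sketch, and the project name passed to the quota command is an assumption, not taken from the log:

    # Snapshot the registry data volume ahead of the Stretch migration (T278303);
    # --force may be needed if the volume is still attached.
    openstack volume snapshot create --force --volume tools-docker-registry-data \
        tools-docker-registry-data-stretch-migration
    # Bump the project's block-storage quota from 80G to 400G
    openstack quota set --gigabytes 400 tools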
12:30 <jmm@cumin2001> START - Cookbook sre.hosts.reboot-single for host ganeti2019.codfw.wmnet [production]
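The reboot above is driven by the Spicerack cookbook runner on a cluster management host; a minimal sketch, assuming the host is passed as a positional argument as the log line suggests (other flags may apply depending on the cookbook version):

    # Run from a cumin host
    sudo cookbook sre.hosts.reboot-single ganeti2019.codfw.wmnet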
12:28 <effie> enabling puppet on mediawiki and memcached servers [production]
12:17 <arturo> attach the `toolsbeta-docker-registry-data` volume to the `toolsbeta-docker-registry-02` VM [toolsbeta]
12:11 <arturo> created VM `tools-docker-registry-06` as Debian Buster (T278303) [tools]
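A hedged sketch of the equivalent VM creation with the OpenStack CLI; the image, flavor, and network names below are placeholders, not values from the log:

    # Placeholder image/flavor/network names; substitute project-appropriate values.
    openstack server create \
        --image debian-10.0-buster \
        --flavor g3.cores2.ram4.disk20 \
        --network lan-flat-cloudinstances2b \
        tools-docker-registry-06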
12:10 <jynus> restart dbprov200[12] T271913 [production]
12:09 <arturo> detach cinder volume `tools-docker-registry-data` (T278303) [tools]
11:59 <marostegui@cumin1001> dbctl commit (dc=all): 'db1160 (re)pooling @ 100%: Slowly repool db1160 after schema change', diff saved to https://phabricator.wikimedia.org/P15076 and previous config saved to /var/cache/conftool/dbconfig/20210324-115940-root.json [production]
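The dbctl entries throughout this log follow a depool, alter, repool-in-steps pattern; a rough sketch of the underlying commands, where the flag names and commit messages are assumptions about the dbctl CLI rather than quotes from the log:

    # Depool the replica before the schema change
    dbctl instance db1160 depool
    dbctl config commit -m "Depool db1160 for schema change"
    # ...run the schema change, then repool gradually (25/50/75/100%)...
    dbctl instance db1160 pool -p 25
    dbctl config commit -m "db1160 (re)pooling @ 25%"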
11:57 <Andrew-WMDE_> EU deploys done [production]
11:53 <jynus> restart dbprov100[12] T271913 [production]
11:51 <andrew-wmde@deploy1002> Synchronized php-1.36.0-wmf.35/extensions/MassMessage/: Backport: [[gerrit:674367|MassMessage: Unbreak remote content fetching (T276936)]] (duration: 01m 08s) [production]
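Lines beginning "Synchronized php-1.36.0-wmf.35/..." are emitted by scap on the deployment host; a hedged sketch, assuming scap sync-file is the subcommand used for this directory (the exact invocation may differ):

    # On deploy1002, after the backport has been pulled into the wmf.35 branch checkout
    scap sync-file php-1.36.0-wmf.35/extensions/MassMessage/ \
        'Backport: [[gerrit:674367|MassMessage: Unbreak remote content fetching (T276936)]]'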
11:49 <effie> disable puppet on all hosts running mediawiki+memcached to merge 674282 [production]
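Disabling Puppet fleet-wide before merging a change, then re-enabling it afterwards (12:28 above), can be done with cumin; the host aliases below are assumptions about the local alias definitions:

    # Disable the agent with a reason on the targeted hosts, merge 674282, then re-enable
    sudo cumin 'A:mw or A:memcached' 'puppet agent --disable "merging 674282"'
    sudo cumin 'A:mw or A:memcached' 'puppet agent --enable'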
11:46 <arturo> attach cinder volume `tools-docker-registry-data` to VM `tools-docker-registry-03` to format it and pre-populate it with registry data (T278303) [tools]
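A minimal sketch of the attach-and-format step, assuming the volume appears as /dev/vdb inside the VM and ext4 is the target filesystem; the mount point is a placeholder (none of these details are stated in the log):

    # From an OpenStack CLI host: attach the volume to the VM
    openstack server add volume tools-docker-registry-03 tools-docker-registry-data
    # Inside the VM: create a filesystem and mount it before copying in the registry data
    sudo mkfs.ext4 /dev/vdb
    sudo mkdir -p /srv/registry && sudo mount /dev/vdb /srv/registry
    # The later detach (12:09 above) would be the inverse operation:
    # openstack server remove volume tools-docker-registry-03 tools-docker-registry-data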
11:45 <andrew-wmde@deploy1002> Synchronized php-1.36.0-wmf.36/extensions/MassMessage/: Backport: [[gerrit:674366|MassMessage: Unbreak remote content fetching (T276936)]] (duration: 01m 07s) [production]
11:44 <marostegui@cumin1001> dbctl commit (dc=all): 'db1160 (re)pooling @ 75%: Slowly repool db1160 after schema change', diff saved to https://phabricator.wikimedia.org/P15075 and previous config saved to /var/cache/conftool/dbconfig/20210324-114436-root.json [production]
11:41 <arturo> created VM toolsbeta-docker-registry-02 as Debian Buster (T278303) [toolsbeta]
11:34 <arturo> attached cinder volume `toolsbeta-docker-registry-data` as /dev/vdb on toolsbeta-docker-registry-01 [toolsbeta]
11:29 <marostegui@cumin1001> dbctl commit (dc=all): 'db1160 (re)pooling @ 50%: Slowly repool db1160 after schema change', diff saved to https://phabricator.wikimedia.org/P15074 and previous config saved to /var/cache/conftool/dbconfig/20210324-112932-root.json [production]
11:24 <dcaro> Increase cinder volume quota to 200G (T277758) [iiab]
11:23 <arturo> created 2G cinder volume `toolsbeta-docker-registry-data` (T278303) [toolsbeta]
11:22 <andrew-wmde@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:673326|Enable CodeMirror accessibility colors on initial wikis (T276346)]] (duration: 01m 08s) [production]
11:20 <arturo> created 80G cinder volume tools-docker-registry-data (T278303) [tools]
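Creating the two data volumes logged between 11:20 and 11:23 with the OpenStack CLI; a sketch, with the sizes taken from the log:

    # 80G data volume for the tools registry (T278303)
    openstack volume create --size 80 tools-docker-registry-data
    # 2G counterpart in toolsbeta
    openstack volume create --size 2 toolsbeta-docker-registry-data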
11:15 <jynus> restart serially db2097 db2098 db2099 db2100 T271913 [production]
11:14 <andrew-wmde@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:673312|Enable bracket matching on group0 and wikitech (T273591)]] (duration: 01m 25s) [production]
11:14 <marostegui@cumin1001> dbctl commit (dc=all): 'db1160 (re)pooling @ 25%: Slowly repool db1160 after schema change', diff saved to https://phabricator.wikimedia.org/P15073 and previous config saved to /var/cache/conftool/dbconfig/20210324-111429-root.json [production]
11:10 <arturo> starting VM tools-docker-registry-04, which had probably been stopped since 2021-03-09 due to hypervisor draining [tools]
11:01 <dcaro> Upgraded quota to 45 cores, 160GB cinder, 182GB ram (T277681) [dwl]
10:50 <jmm@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) for new host irc1001.wikimedia.org [production]
10:48 <mbsantos@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'wikifeeds' for release 'production' . [production]
10:45 <mbsantos@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'wikifeeds' for release 'production' . [production]
10:44 <mbsantos@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'wikifeeds' for release 'staging' . [production]
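The wikifeeds deploys above (and the changeprop-jobqueue ones further down) follow the same staging-then-production shape; a hedged sketch, assuming the service charts live under /srv/deployment-charts on the deployment host:

    # On deploy1002; the path and release selectors are assumptions about the local layout
    cd /srv/deployment-charts/helmfile.d/services/wikifeeds
    helmfile -e staging --selector name=staging sync
    helmfile -e codfw --selector name=production sync
    helmfile -e eqiad --selector name=production sync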
10:39 <dcaro> increased floating ip quota by 1 (T277706) [k8splay]
10:36 <jmm@cumin1001> START - Cookbook sre.ganeti.makevm for new host irc1001.wikimedia.org [production]
10:31 <jynus> restart db1171 T271913 [production]
10:15 <akosiaris@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging' . [production]
10:14 <akosiaris@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production' . [production]
10:14 <jynus> restart db1145 T271913 [production]
10:06 <akosiaris@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging' . [production]
10:06 <akosiaris@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production' . [production]
10:03 <jynus> restart db1139 T271913 [production]
09:56 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1160 for schema change', diff saved to https://phabricator.wikimedia.org/P15072 and previous config saved to /var/cache/conftool/dbconfig/20210324-095655-marostegui.json [production]
09:56 <marostegui@cumin1001> dbctl commit (dc=all): 'db1149 (re)pooling @ 100%: Slowly repool db1149 after schema change', diff saved to https://phabricator.wikimedia.org/P15071 and previous config saved to /var/cache/conftool/dbconfig/20210324-095606-root.json [production]
09:51 <jynus> restart db1116 T271913 [production]
09:41 <marostegui@cumin1001> dbctl commit (dc=all): 'db1149 (re)pooling @ 75%: Slowly repool db1149 after schema change', diff saved to https://phabricator.wikimedia.org/P15070 and previous config saved to /var/cache/conftool/dbconfig/20210324-094102-root.json [production]
09:28 <jayme@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production' . [production]
09:28 <jayme@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging' . [production]
09:25 <marostegui@cumin1001> dbctl commit (dc=all): 'db1149 (re)pooling @ 50%: Slowly repool db1149 after schema change', diff saved to https://phabricator.wikimedia.org/P15069 and previous config saved to /var/cache/conftool/dbconfig/20210324-092558-root.json [production]
09:19 <dcaro> restarted wmcs-backup on cloudvirt1024 as it failed due to an image being removed while running (T276892) [admin]
09:10 <marostegui@cumin1001> dbctl commit (dc=all): 'db1149 (re)pooling @ 25%: Slowly repool db1149 after schema change', diff saved to https://phabricator.wikimedia.org/P15068 and previous config saved to /var/cache/conftool/dbconfig/20210324-091055-root.json [production]