2023-08-31
07:13 <marostegui@cumin1001> dbctl commit (dc=all): 'db1131 (re)pooling @ 75%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P52142 and previous config saved to /var/cache/conftool/dbconfig/20230831-071343-root.json [production]
07:09 <kartik@deploy1002> abi and kartik: Continuing with sync [production]
07:07 <kartik@deploy1002> abi and kartik: Backport for [[gerrit:953216|Enable MinT translation service for testwiki]] synced to the testservers mwdebug2002.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
07:05 <kartik@deploy1002> Started scap: Backport for [[gerrit:953216|Enable MinT translation service for testwiki]] [production]
07:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db1182 (re)pooling @ 50%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P52141 and previous config saved to /var/cache/conftool/dbconfig/20230831-070105-root.json [production]
06:58 <marostegui@cumin1001> dbctl commit (dc=all): 'db1131 (re)pooling @ 50%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P52140 and previous config saved to /var/cache/conftool/dbconfig/20230831-065838-root.json [production]
06:57 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host idp1002.wikimedia.org [production]
06:53 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host idp1002.wikimedia.org [production]
06:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db1182 (re)pooling @ 25%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P52139 and previous config saved to /var/cache/conftool/dbconfig/20230831-064601-root.json [production]
06:43 <marostegui@cumin1001> dbctl commit (dc=all): 'db1131 (re)pooling @ 25%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P52138 and previous config saved to /var/cache/conftool/dbconfig/20230831-064333-root.json [production]
06:30 <marostegui@cumin1001> dbctl commit (dc=all): 'db1182 (re)pooling @ 10%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P52137 and previous config saved to /var/cache/conftool/dbconfig/20230831-063056-root.json [production]
06:28 <marostegui@cumin1001> dbctl commit (dc=all): 'db1131 (re)pooling @ 10%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P52136 and previous config saved to /var/cache/conftool/dbconfig/20230831-062829-root.json [production]
06:15 <marostegui@cumin1001> dbctl commit (dc=all): 'db1182 (re)pooling @ 5%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P52135 and previous config saved to /var/cache/conftool/dbconfig/20230831-061551-root.json [production]
06:13 <marostegui@cumin1001> dbctl commit (dc=all): 'db1131 (re)pooling @ 5%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P52134 and previous config saved to /var/cache/conftool/dbconfig/20230831-061324-root.json [production]
06:00 <marostegui@cumin1001> dbctl commit (dc=all): 'db1182 (re)pooling @ 3%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P52133 and previous config saved to /var/cache/conftool/dbconfig/20230831-060047-root.json [production]
05:58 <marostegui@cumin1001> dbctl commit (dc=all): 'db1131 (re)pooling @ 3%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P52132 and previous config saved to /var/cache/conftool/dbconfig/20230831-055819-root.json [production]
05:48 <marostegui@cumin1001> dbctl commit (dc=all): 'db1201 (re)pooling @ 100%: Repooling after maintenance ', diff saved to https://phabricator.wikimedia.org/P52131 and previous config saved to /var/cache/conftool/dbconfig/20230831-054805-root.json [production]
05:45 <marostegui@cumin1001> dbctl commit (dc=all): 'db1182 (re)pooling @ 1%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P52130 and previous config saved to /var/cache/conftool/dbconfig/20230831-054542-root.json [production]
05:43 <marostegui@cumin1001> dbctl commit (dc=all): 'db1131 (re)pooling @ 1%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P52129 and previous config saved to /var/cache/conftool/dbconfig/20230831-054314-root.json [production]
05:43 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1182 T344309', diff saved to https://phabricator.wikimedia.org/P52128 and previous config saved to /var/cache/conftool/dbconfig/20230831-054305-root.json [production]
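The db1131/db1182 entries above follow the usual depool → upgrade → gradual repool pattern: each replica is depooled, upgraded, then repooled in increasing steps (1%, 3%, 5%, 10%, 25%, 50%, 75% within this page), with a dbctl commit logged at every step. A minimal sketch of that ramp, assuming the dbctl CLI accepts `instance <host> pool -p <pct>` and `config commit -m <msg>` as documented on Wikitech; in production this is driven by tooling rather than an ad-hoc script.

```python
#!/usr/bin/env python3
"""Illustrative sketch of the gradual repool ramp seen in the db1131/db1182 entries.

Assumptions (not taken from this log): dbctl is on PATH and is invoked as
`dbctl instance <host> pool -p <pct>` followed by `dbctl config commit -m <msg>`;
the fixed wait between steps approximates the ~15 minute spacing of the timestamps.
"""
import subprocess
import time

RAMP = [1, 3, 5, 10, 25, 50, 75, 100]   # percentages, as in the log (100% follows later)
WAIT = 15 * 60                          # seconds between steps


def repool(host: str, reason: str) -> None:
    for pct in RAMP:
        # Raise the instance's pooled percentage, then commit the config change.
        subprocess.run(["dbctl", "instance", host, "pool", "-p", str(pct)], check=True)
        subprocess.run(
            ["dbctl", "config", "commit", "-m",
             f"{host} (re)pooling @ {pct}%: {reason}"],
            check=True,
        )
        if pct != 100:
            time.sleep(WAIT)


if __name__ == "__main__":
    repool("db1131", "Repooling after upgrade")
```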
05:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db1201 (re)pooling @ 75%: Repooling after maintenance ', diff saved to https://phabricator.wikimedia.org/P52127 and previous config saved to /var/cache/conftool/dbconfig/20230831-053300-root.json [production]
05:30 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1131 T345223', diff saved to https://phabricator.wikimedia.org/P52126 and previous config saved to /var/cache/conftool/dbconfig/20230831-053035-root.json [production]
05:28 <marostegui@cumin1001> dbctl commit (dc=all): 'Promote db1173 to s6 primary and set section read-write T345223', diff saved to https://phabricator.wikimedia.org/P52125 and previous config saved to /var/cache/conftool/dbconfig/20230831-052852-marostegui.json [production]
05:28 <marostegui@cumin1001> dbctl commit (dc=all): 'Set s6 eqiad as read-only for maintenance - T345223', diff saved to https://phabricator.wikimedia.org/P52124 and previous config saved to /var/cache/conftool/dbconfig/20230831-052825-marostegui.json [production]
05:28 <marostegui> Starting s6 eqiad failover from db1131 to db1173 - T345223 [production]
05:17 <marostegui@cumin1001> dbctl commit (dc=all): 'db1201 (re)pooling @ 50%: Repooling after maintenance ', diff saved to https://phabricator.wikimedia.org/P52123 and previous config saved to /var/cache/conftool/dbconfig/20230831-051755-root.json [production]
05:02 <marostegui@cumin1001> dbctl commit (dc=all): 'db1201 (re)pooling @ 25%: Repooling after maintenance ', diff saved to https://phabricator.wikimedia.org/P52122 and previous config saved to /var/cache/conftool/dbconfig/20230831-050250-root.json [production]
04:57 <marostegui@cumin1001> dbctl commit (dc=all): 'Set db1173 with weight 0 T345223', diff saved to https://phabricator.wikimedia.org/P52121 and previous config saved to /var/cache/conftool/dbconfig/20230831-045719-marostegui.json [production]
04:57 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 26 hosts with reason: Primary switchover s6 T345223 [production]
04:56 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on 26 hosts with reason: Primary switchover s6 T345223 [production]
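The s6 eqiad failover entries above (downtime the section hosts → set the candidate's weight to 0 → set the section read-only → promote db1173 and reopen writes → depool db1131, T345223) follow the standard primary switchover order. A hedged sketch of that ordering is below; the `section s6 ro/rw/set-master` subcommands are assumptions based on the public switchover documentation, not taken from this log, and the real procedure is driven by the switchover checklist and cookbooks.

```python
#!/usr/bin/env python3
"""Illustrative sketch of the s6 eqiad primary switchover order recorded above (T345223).

Assumed dbctl subcommands: `--scope eqiad section s6 ro <reason>`, `set-master <host>`,
`rw`, and `instance <host> depool`, each followed by `config commit`.
"""
import subprocess

SECTION = ["dbctl", "--scope", "eqiad", "section", "s6"]


def run(cmd: list[str]) -> None:
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)


def switchover(old: str, new: str, task: str) -> None:
    # New primary already carries weight 0 and replicas are downtimed (04:56-04:57 entries).

    # Set the section read-only for the duration of the move (05:28 entry).
    run(SECTION + ["ro", f"Maintenance - {task}"])
    run(["dbctl", "config", "commit", "-m",
         f"Set s6 eqiad as read-only for maintenance - {task}"])

    # Promote the new primary and reopen writes (05:28 entry).
    run(SECTION + ["set-master", new])
    run(SECTION + ["rw"])
    run(["dbctl", "config", "commit", "-m",
         f"Promote {new} to s6 primary and set section read-write {task}"])

    # The old primary leaves the pool once replication follows the new one (05:30 entry).
    run(["dbctl", "instance", old, "depool"])
    run(["dbctl", "config", "commit", "-m", f"Depool {old} {task}"])


if __name__ == "__main__":
    switchover("db1131", "db1173", "T345223")
```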
04:47 <marostegui@cumin1001> dbctl commit (dc=all): 'db1201 (re)pooling @ 10%: Repooling after maintenance ', diff saved to https://phabricator.wikimedia.org/P52120 and previous config saved to /var/cache/conftool/dbconfig/20230831-044746-root.json [production]
02:45 <jhancock@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - jhancock@cumin2002" [production]
02:31 <jhancock@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on kubernetes2037.codfw.wmnet with reason: host reimage [production]
02:26 <jhancock@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on kubernetes2037.codfw.wmnet with reason: host reimage [production]
01:50 <jhancock@cumin2002> START - Cookbook sre.hosts.reimage for host kubernetes2037.codfw.wmnet with OS bullseye [production]
01:50 <jhancock@cumin2002> START - Cookbook sre.hosts.reimage for host kubernetes2038.codfw.wmnet with OS bullseye [production]
01:50 <jhancock@cumin2002> START - Cookbook sre.hosts.reimage for host kubernetes2039.codfw.wmnet with OS bullseye [production]
01:44 <jhancock@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host moss-be2003.codfw.wmnet with OS bullseye [production]
01:43 <jhancock@cumin2002> START - Cookbook sre.hosts.reimage for host moss-be2003.codfw.wmnet with OS bullseye [production]
01:43 <jhancock@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host moss-be2003.codfw.wmnet with OS bullseye [production]
01:43 <jhancock@cumin2002> START - Cookbook sre.hosts.reimage for host moss-be2003.codfw.wmnet with OS bullseye [production]
01:42 <jhancock@cumin2002> END (PASS) - Cookbook sre.hardware.upgrade-firmware (exit_code=0) upgrade firmware for hosts ['moss-be2003'] [production]
01:37 <jhancock@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['moss-be2003'] [production]
00:54 <jhancock@cumin2002> START - Cookbook sre.hosts.reimage for host moss-be2003.codfw.wmnet with OS bullseye [production]
00:50 <jhancock@cumin2002> END (PASS) - Cookbook sre.hardware.upgrade-firmware (exit_code=0) upgrade firmware for hosts ['moss-be2003'] [production]
00:43 <jhancock@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['moss-be2003'] [production]
2023-08-30
22:28 <krinkle@deploy1002> Synchronized php-1.41.0-wmf.24/extensions/WikimediaEvents/: 697ab03ae9a5d5ddb6 (duration: 06m 26s) [production]
22:09 <krinkle@deploy1002> Finished scap: Backport for [[gerrit:953649|mediawiki.util: Investigate when mw.util is compromised by third-party script (T343944)]] (duration: 35m 08s) [production]
21:57 <krinkle@deploy1002> krinkle: Continuing with sync [production]
21:55 <krinkle@deploy1002> krinkle: Backport for [[gerrit:953649|mediawiki.util: Investigate when mw.util is compromised by third-party script (T343944)]] synced to the testservers mwdebug2001.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2002.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]